Experts are urging an “AI pause” for six months following OpenAI’s GPT-4.

Elon Musk and other industry executives call for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.
30 March 2023


When OpenAI finally unveiled GPT-4, the next-generation large language model rumored to have been in development for much of last year, the world was still getting to grips with GPT-3.5-powered ChatGPT. After all, the AI chatbot had been around for only three months before its bigger, more powerful successor was introduced.

For context, GPT-4 is a multimodal large language model, meaning it can respond to text and images. Give it a photo of the ingredients you have in your kitchen and ask what you could make, and GPT-4 will try to develop recipes that use the pictured ingredients. It’s also great at explaining jokes, OpenAI’s chief scientist, Ilya Sutskever, told MIT Technology Review: “If you show it a meme, it can tell you why it’s funny.”

However, not everyone is happy with the progress of large language models, including industry executives like Elon Musk and other AI experts. They are urging a six-month pause in developing systems more powerful than the fourth iteration of Microsoft-backed OpenAI's GPT (Generative Pre-trained Transformer) program.

What is their issue with GPT-4?

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter issued by the Future of Life Institute states. According to the European Union’s transparency register, the non-profit is primarily funded by the Musk Foundation, the London-based group Founders Pledge, and Silicon Valley Community Foundation.
The letter is timely since ChatGPT and other AI chatbots have been attracting US lawmakers’ attention, with questions about their impact on national security and education. Even the EU police force Europol warned earlier this week about the potential misuse of the system in phishing attempts, disinformation, and cybercrime.

Meanwhile, the UK government has unveiled proposals for an "adaptable" regulatory framework around AI. "AI stresses me out," Musk said earlier this month. He is a co-founder of industry leader OpenAI, and his carmaker Tesla uses AI in its Autopilot system, which makes his signing of the letter controversial. Musk has also argued for a regulatory authority to ensure that the development of AI serves the public interest.

At the time of writing, 1,125 technology leaders and researchers have urged AI labs to pause the development of the most advanced systems, warning that AI tools present “profound risks to society and humanity.” Other signatories include:

  • Steve Wozniak, a co-founder of Apple.
  • Stability AI chief executive Emad Mostaque.
  • Researchers at Alphabet-owned DeepMind.
  • AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a research pioneer in the field.

The letter also claims that developers of AI, including those creating chatbots, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” Since OpenAI released ChatGPT, there has been a push to develop more powerful AI chatbots, eventually leading to a race that could determine the industry’s next leaders.

So far, the significant AI chatbots announced by Big Tech alone include Microsoft's Bing and Google's Bard, with a series of smaller tech companies following suit. Most of the chatbots unveiled can hold humanlike conversations, write essays on various topics, and perform more complex tasks, like writing computer code.

Unfortunately, those tools have been criticized for getting details wrong and for their tendency to spread misinformation. The open letter therefore calls for a pause in developing AI systems more powerful than GPT-4, so that "shared safety protocols" for such systems can be formed. "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," it adds.