ChatGPT: OpenAI CEO Sam Altman asks for AI to be regulated

Senators appeared to accept Altman's warnings that AI could "cause significant harm to the world" and his suggestion that a new agency could set rules.
18 May 2023

WASHINGTON, DC – MAY 16: Samuel Altman, CEO of OpenAI, appears for testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. The committee held an oversight hearing to examine A.I., focusing on rules for artificial intelligence. Win McNamee/Getty Images/AFP (Photo by WIN MCNAMEE / GETTY IMAGES NORTH AMERICA / Getty Images via AFP)

Unlike most congressional hearings involving tech industry leaders in recent years, the one involving the CEO of OpenAI – the company responsible for creating AI chatbot ChatGPT and image generator DALL-E 2 – was far from contentious this week. During the three-hour-long hearing, Sam Altman had a friendly audience among the subcommittee members.

The CEO of OpenAI suggested to US lawmakers present that regulating artificial intelligence was essential. “If this technology goes wrong, it can go quite wrong,” Altman said in his first appearance before Congress on May 16.

Altman was the latest Silicon Valley figure to appear before Congress. But, unlike other CEOs, from Facebook’s Mark Zuckerberg to TikTok’s Shou Zi Chew, the OpenAI chief was welcomed far more warmly and earnestly.

Altman was open to speaking about the new technology’s possibilities – and pitfalls. However, much to the surprise of many tech observers, the senators present appeared to accept his warnings rather than challenge them.

The CEO of OpenAI acknowledged that AI could “cause significant harm to the world,” and accompanied his warning with a plea for some regulatory guardrails for this emerging technology. 

What led to the testimony of the OpenAI CEO?

WASHINGTON, DC - MAY 16: Samuel Altman, CEO of OpenAI, greets committee chairman Sen. Richard Blumenthal (D-CT) while arriving for testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. The committee held an oversight hearing to examine A.I., focusing on rules for artificial intelligence. Win McNamee/Getty Images/AFP (Photo by WIN MCNAMEE / GETTY IMAGES NORTH AMERICA / Getty Images via AFP)

Altman was present at a Senate Judiciary subcommittee hearing with a simple but tricky question at the top of the agenda: what is AI? After all, to regulate technology, especially something as complex and fast-moving as AI, Congress must first understand it.

So having the CEO of OpenAI – the Microsoft-backed startup behind ChatGPT – offer some insights was the best shot lawmakers had. To top it off, it was the Senate’s first major hearing on AI. “As this technology advances, we understand people are anxious about how it could change our lives. We are, too,” the OpenAI CEO said at the Senate hearing.

South Carolina Republican Lindsey Graham compared AI technology to a nuclear reactor, which requires a license and must answer to a regulator, and other senators echoed the comparison.

“I would form a new agency that licenses any effort above a certain scale of capabilities — and can take that license away and ensure compliance with safety standards,” Altman said, according to a Bloomberg report; he added that such a US authority could shape the global consensus on AI regulation. 

Lawmakers present agreed that Congress moves too slowly to keep pace with innovation, especially in AI, and that developing rules for such a dynamic industry is best left to a new agency.

Senator Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, said AI companies should be required to test their systems and disclose known risks before releasing them. Blumenthal also expressed concern about future AI systems destabilizing the job market. 

Altman largely agreed, though with a more optimistic take on the future of work. The CEO of OpenAI was also pressed on his own worst fear about the technology. He mostly avoided specifics, admitting only that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

However, he later proposed that the new regulatory agency impose safeguards to block AI models that could “self-replicate and self-exfiltrate into the wild.” Altman also acknowledged that OpenAI is concerned about the impact the technology could have on elections. “This is not social media. This is different. So the response that we need is different.”

When the discussion turned to whether companies like OpenAI should halt the development of generative AI tools, the senators, like the hearing’s witnesses, said pausing innovation in the US would be unwise while competitors such as China press ahead with AI.

Altman did, however, make it clear that OpenAI has no plans yet to push forward with the next iteration of its large language model-based tools. “We are not currently training what will be GPT-5,” he said, adding there are no plans to start in the next six months.