OpenAI CEO backs down over European AI Act

OpenAI no longer thinking of leaving the EU - whatever the AI Act eventually looks like.
26 May 2023

Sam Altman in Paris on Friday, May 26th, concluding his “European tour.” Source: JOEL SAGET/AFP

Sam Altman, CEO of OpenAI, the Microsoft-funded creator of the ChatGPT and GPT-4 generative AIs, has backtracked on comments that the company would stop operating in Europe if the EU’s Act to regulate generative AI technology is too strict.

Initially, Altman floated a trial balloon to gauge public opinion on the planned European AI Act, which will be the first serious attempt by any national (or, in this case, international) power to write restrictions and regulations on generative AI into law.

The regulated against regulation.

He voiced concerns over the extent of the proposed Act, saying on Wednesday this week that “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it.”

Unimpressed by this language, European lawmakers were quick to correct Mr. Altman’s assertions, saying the draft Bill was not up for alteration. Romanian Member of the European Parliament Dragos Tudorache confirmed that he did “not see any dilution happening any time soon.”

Altman’s initial comments have been characterized by some observers as saber-rattling intended to water down the Bill, on the basis that generative AI has already been widely adopted by companies in almost every industry imaginable, and could therefore be seen as too important a development to be allowed to fail.

If that were the case, Altman would have significantly miscalculated on two fronts. Firstly, of course: while it has by far the greatest name recognition (despite having by far the clunkiest name), ChatGPT is in no sense the only game in town as far as European countries and businesses are concerned, and it has not existed long enough to build any particularly strong brand loyalty among its user base.

Whether corporate customers turned to Google’s Bard, to any of the other giant players, or to more bespoke, open-source-based solutions, the sudden absence of ChatGPT from the European market would be less a catastrophe and more the removal of an apex predator from the generative AI food chain: a gap in the market that competitors would be eager to fill.

And secondly, the EU has a fearsome reputation for calling companies, and even countries, to account rather than bending to subtle hints that it must change the way it does things or watch those companies walk away.

Take a look at Meta. Take a look at Brexit.

Just last week, the Irish Data Protection Commission fined Meta over $1.3bn for data privacy infractions. The threat of a walkout by OpenAI would barely raise a Gallic shrug.

Within two days of his initial comments, Altman posted a tweet about having had “a very productive week of conversations in Europe about how to best regulate AI.” He added that OpenAI was “excited to continue to operate here and of course has no plans to leave.”

What this somewhat farcical drama indicates is an interesting duality of approach. Just weeks ago, Altman spoke before the US Congress and agreed that generative AIs like ChatGPT and GPT-4, along with their competitors and successors, would benefit from some regulation.

He even shared insights into how such regulation might work, given the rapid pace of generative AI development and the legendarily glacial speed of the US legislative process: a real discrepancy which could, as things stand, see any regulation rendered meaningless by the time it was ratified, given how far the regulated technologies would have advanced in the intervening period.

But the idea of the European AI Act being “over-regulation” suggests that those who are being regulated get to say exactly how regulated they are prepared to be. That is, in a very real sense, not how regulation is supposed to work.

Cynics might argue that in a world drenched in money and power, that is how it actually works. After all, Meta will gladly pay as little of its recent mega-fine as possible rather than amend its core business model, a change that would cost it significantly more money.

Development of the Bill.

But in a market where it is by no means the only available player, OpenAI may have overestimated its importance in claiming the AI Bill amounts to over-regulation. Significantly, Google’s chief executive, Sundar Pichai, was in Europe at the same time as Altman, and likely with similar motives: to steer the language of the Bill’s draft.

Ironically perhaps, Google would probably stand a better chance of swaying the EU lawmakers, given its much longer standing in the European business community and its broader suite of products and services, which give it a much fuller toolbox of influence-levers than OpenAI has, even with backing from Microsoft.

But as Dutch MEP Kim van Sparrentak, who has worked on the drafting of the Bill, noted drily after Altman’s climbdown, “Voluntary codes of conduct are not the European way. I hope we… will ensure these companies have to follow clear obligations on transparency, security and environmental standards.”

The European Bill has been in development for some time, and the only reason it is anywhere near ready now (with ratification still to come before it likely becomes law in 2025) is that it started life as a Bill with a much narrower remit, applying to “high-risk” uses of AI, such as in medical devices.

Its scope was only broadened to include generative AI in late 2022, precisely because of the launch of ChatGPT.

The thorny aspects.

The Bill as it stands would make it incumbent on any company making foundation models to identify the risks inherent in those models and to try to minimize those risks before the models were released.

Where Altman might find support for his “over-regulation” stance is in the fact that the Act would also make the model-makers (OpenAI, Google, Alibaba, Meta, et al.) partly responsible for how their generative AI systems were used, even in cases where the makers had zero control over the applications to which their products were put.

So, for instance, in the case where open-source coders recently got their hands on the foundation model behind Meta’s LLaMA, under the Bill as it stands Meta could potentially be held partly responsible for any European versions created and distributed by third parties.

The naked data.

It’s likely, though, that the thing dripping ice water down the spines of Altman and Pichai is the provision in the Bill that would require generative AI companies to publish summaries of the data used to train their models.

Google voluntarily did something approaching that level of openness with its Bard generative AI, and the results were… interesting, suggesting that non-factual, inaccurate, and potentially personally identifiable (PII) data may have formed part of the training set.

OpenAI has yet to reveal the scope and nature of its training data publicly.

It’s unclear exactly what “discussions” Altman had within the 48 hours between toying with the idea of leaving Europe altogether and confidently assuring Twitter that there were no plans to do so.

But, for now at least, the tantrum in a teacup appears to be over. What happens en route to the Bill becoming the EU AI Act in 2025 remains to be seen.