Does Italian ban signal substantial problems for ChatGPT?

What happens to user data with ChatGPT? Italian regulators want to know.
3 April 2023

ChatGPT does amazing tricks – but at what data cost?

Just weeks ago, ChatGPT was the brand-new must-have technology. Launched by OpenAI and backed by Microsoft, it was the first of a new generation of generative AI that would be added to everything and reshape our paradigm of what AI, chatbots, and to some extent even search engines themselves could do.

It was suddenly so vital to have something similar that Google, caught napping, instituted a “code red” to push its own version, Bard, out into the world without losing too much time or ceding Microsoft the field and the crucial initial momentum on the transformative technology.

The downside of ChatGPT?

But, like most transformative technologies, ChatGPT and generative AI aroused a full spectrum of emotions. To the world’s general public, it represented a move towards science-fictional levels of technology, and they took it to their hearts as a new normal without necessarily understanding its limits and scope.

In the tech industry, there was very little quibbling about the achievement that ChatGPT represented – or indeed, the elegance behind its building. But even OpenAI was at significant pains to point out that it wasn’t the be-all and end-all of generative AI the public thought it was. It could be persuasively wrong with a high confidence level, it lacked a source of objective truth, and as such, while it could do some things extremely well, it wasn’t necessarily “ready” yet for all the marvellous things it might, eventually, be able to do.

Releasing GPT-4 so soon after ChatGPT was potentially a misstep, because while the newer version advanced ChatGPT’s capabilities – recognising visual data and famously being “able to explain why a meme was funny” – its training data had a cutoff, so answers it gave to prompts about anything more recent were likely to be wrong by omission.

Partly as a result of the pace of geopolitics, art and culture since at least 2016, the world feels as though it moves immensely fast these days. The GPT-4 failing therefore highlighted what may become a fundamental issue with generative AI: how to keep it up to date enough to be usefully relevant, while still keeping it robust enough to be safe.

Calls for a pause.

The questions around ChatGPT and generative AI continued to build until, in late March 2023, a group of tech thought leaders – including the likes of Elon Musk and Steve Wozniak, as well as professors from MIT, Oxford, Boston, New York and elsewhere – called for a halt to AI training above the GPT-4 level of sophistication for at least six months. They claimed that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”

The calls for a pause came through the “Future of Life Institute,” which describes its agenda as “to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.”

It would be worth asking what these luminaries think a six-month pause in such developments would actually do to steer the development of human-competitive AI away from being an “extreme, large-scale risk” – though it remains possible that it would allow other developers to get up to GPT-4 speed, thereby negating any commercial advantage held by OpenAI and Microsoft.

Data concerns.

Now though, a new front in the AI scepticism wars has opened up. Italy has imposed a temporary ban on ChatGPT, effective immediately – but not to save the world from the potential threat of human-competitive AI.

Instead, Italian regulators are investigating how OpenAI collects and uses data from those who use ChatGPT.

In particular, the regulators claim to be worried about the lack of any age bar on ChatGPT, meaning, for instance, that children using the generative AI might conceivably get responses that are entirely age-inappropriate.

There is, of course, a precedent for this kind of concern – the internet itself had to shift its model to allow the installation of parental surfing controls, back before children were being introduced to it at or before the age they learned their alphabet.

ChatGPT is theoretically supposed to be for users over the age of 13, but such disclaimers have rarely worked throughout history – raise your hand if you never watched a movie or read a book that classification authorities said you weren’t officially “ready for” on grounds of age appropriateness.

For those users over 13, the regulators also expressed concern. “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” they said.

The ticking clock.

OpenAI has subsequently been blocked from processing data from Italian users, apparently “until it respects the privacy regulation.”

The company has just 20 days to advise the regulators of the measures it will take to ensure it operates within Italy’s rules, or face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.

Why is that important? Because what goes for Italy should probably go for other jurisdictions, too. After all, the US government is currently getting increasingly hot under the collar about the way TikTok collects US user data. Will the US, the EU and other jurisdictions follow suit on ChatGPT and its data collection practices?

If so, the Future of Life Institute might get its six-month delay, practically by default, as OpenAI faces data collection and processing challenges from jurisdiction after jurisdiction.

A real impact?

Does this signal an early end for ChatGPT and generative AI?

Hardly. There is no escaping the degree to which the technology advances the state of the art, for all it still needs significant work and a source of objective truth before it can do the things the wider public believes it can already do.

But it does illustrate the often-told tale of brilliant technology having to operate within a real-world framework of laws, concerns, and conventions (paging self-driving cars and blockchain…). ChatGPT and generative AI are, in a very real sense, too useful to die – but between the fear of scientists and tech leaders and the need to abide by data collection and privacy rules, the technology might well be forced to take a breath before it advances very much further.