EU moves closer to generative AI regulation

There's still some way to go - but could the EU's AI Act be flexible enough to act as guardrails for generative AI?
16 June 2023

MEPs voted on the draft wording of the EU AI Act. Source: FREDERICK FLORIN / AFP

• The EU has voted to accept draft language on generative AI regulation.
• The EU AI Act straddles the pre- and post-generative AI eras.
• Concessions and exemptions exist for SMEs.

The EU has gained significant plaudits in recent months by being the first jurisdiction to come anywhere close to having regulation in place to govern the use and application of generative AI technology.

The news of the EU vote was shared widely on social media.

And this week the EU AI Act took significant steps towards becoming law across the EU jurisdiction. The initial text of the draft legislation that could eventually become the fully-fledged EU AI Act was approved by the EU’s main legislative branch.

But this is not evidence of the EU being either prescient or swift to act – anyone with significant experience of the EU’s legislative processes knows it’s as swift as a housebrick with a hernia.

It’s only as far ahead as it is because when it started thinking about AI, it wasn’t in any sense contemplating a world that had to contend with the complexities of generative AI.

A gumbo of tech concerns.

That’s why the draft language of the regulation approved this week lumps a lot of seemingly disparate areas of “AI” technology together, including AI-enhanced biometric surveillance, emotion recognition, and predictive policing, alongside generative AI like ChatGPT.

And where generative AI is mentioned, it is mentioned in relatively broad – but distinct and important – strokes, such as the high-risk status of systems used to influence voters in elections.

How will the eventual EU AI Act look? It’s still too early to tell.

Elements of the draft regulation declaring that generative AI systems must disclose that AI-generated content is AI-generated, rather than individually human-created, may not only make lawyers everywhere rub their hands, but might also be incredibly complex to implement in real terms.

As more and more companies across the world and across the business sphere implement some version of generative AI in their back-ends or subsystems to smooth out business processes, adhering to that element of the regulation may become further complicated – assuming it makes it all the way from draft language to the statute books.

What AI would be forbidden from doing.

If the draft regulation becomes essentially the body of the EU AI Act, there are several distinct areas of activity from which AI would be effectively banned. These areas of activity would be those judged to carry “an unacceptable level of risk to people’s safety,” including areas that MEPs judge to be intrusive or discriminatory.

The list of those areas is telling in terms of the relatively long gestation period of even the draft language of the regulation, with generative AI specifically barely featuring.

They include:

  • “Real-time” remote biometric identification systems in public spaces;
  • “Post” remote biometric identification systems – except for law enforcement agencies prosecuting serious crimes, and then only with judicial authorization;
  • Biometric categorization systems using sensitive characteristics including gender, race, ethnicity, citizenship status, religion, political orientation;
  • Predictive policing systems (including systems based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

The list – while it represents significant progress in terms of technology regulation and guidance – shows the nature of the fears that were most prevalent when the EU AI Act was first being drawn up in 2018-19.

High-risk AI.

The draft regulation language does of course include generative AI, but not within those initial concerns.

It goes on to define “high-risk AI” – those that “pose significant harm to people’s health, safety, fundamental rights or the environment” as including systems used to influence voters and the outcome of elections, and, in what may come as a blow to the likes of Meta, recommender systems used by social media platforms with over 45 million users.

We may yet be doing Meta a disservice, but it’s highly plausible that the company will organize resistance to that particular addition to the language of the regulation between now and its becoming law.

And when it comes to dealing specifically with generative AI in the sense with which everybody is already familiar, the draft regulation language does cut through some of the knots of circular thinking and speculation which, for instance, US legislators have yet to fully untangle ahead of coming up with their own generative AI regulations.

Providers of foundation models will have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law), and register their models in the EU database before their release would be allowed on the EU market.

Rules without – as yet – much meat on their bones.

Generative AI systems based on such models, like ChatGPT, Bard, and others, would have to comply with transparency requirements. That means they would not only have to clearly state what content was generated by generative AI, they would also have to “help distinguish deepfake images from ‘real’ ones.”

The draft language is tellingly light on information as to how it believes generative AI systems would be able to help do that. At this stage, it’s more a decree than a formula for action.

There are strong demands in the draft regulation – “Make it so!” – without too much detail on how to get it done.

Generative AI systems would also have to ensure safeguards against generating illegal content – again, more of a decree than a recipe at this stage. And in a relatively obscure assertion that may be what led OpenAI’s CEO Sam Altman to describe the draft language as “over-regulation,” companies developing generative AI based on large language models would be required to provide detailed summaries of the copyrighted data used for their training – summaries that would have to be publicly available.

Sam Altman in Paris recently, aiming to water down the wording of the EU AI Act. Source: JOEL SAGET / AFP

Notes of hope.

While some of these elements may need significant refining and expansion before they can constitute any effective regulation around generative AI, there were notes of hope in a sequence of compromises, agreed through committees.

There had been significant concerns that while the multi-million-dollar companies behind the leading generative AI might be able to weather the storm of the regulation, SMEs and other smaller – or smaller-budgeted – organizations might not be able to survive the rigors of compliance and the potentially destructive costs of inadvertent failure to abide by the rules.

The concessions and compromises included clauses aimed at boosting AI innovation and the growth of an SME culture of generative AI application and use. That means there are exemptions in the draft regulation language that would protect SMEs, non-profit organizations, and small free software projects up to the size of micro-enterprises.

Honing the blade of law.

There remains a lot of work to do to turn the current draft of the EU AI Act into a workable set of regulations – and there may well be significant lobbying to get through along the way.

But the vote to approve at least the draft wording puts the EU significantly ahead of the game when it comes to working out ways to deal responsibly with a technological chimera that, for instance, the US has yet to seriously grapple with.