Biden executive order brings us closer to regulations on AI

But does the executive order do enough to placate genuine fears over the technology?
1 November 2023

Has President Biden just made the GenAI giants an offer they can’t refuse?

• Regulations on AI have been demanded since the second after ChatGPT arrived.
• The new Biden executive order gets us closer than we’ve ever been to a framework of AI law.
• It tackles eight major areas of concern with generative AI.

Since generative AI became a mainstream reality in November 2022, thanks to OpenAI and its Microsoft-backed chatbot, ChatGPT, people, organizations, and governments around the world have been calling for regulations on AI.

Potential dangers of the technology have run the gamut from the standard sci-fi “Algorithmic overlords will kill us all and/or destroy the planet,” through the significantly more likely “the technology will put whole armies of people out of work,” to the most likely of all: “it’s going to have the exploitation of workforces, misogyny, bigotry, and all the other unfairnesses of our society baked right in and normalized.”

There have been entirely legitimate concerns on the nature, quality and bias of the data on which large language models are trained, and equally legitimate worries that, given the startlingly rapid adoption of generative AI across the business community of the world, any regulations on AI would either come too late to be effective, or be too broad to do any good.

Enter the Biden executive order, proposing sweeping additions to regulations on AI.

Will the executive order be effective?

The European Union was first out of the gate in terms of developing regulations on AI, and while the provisions of the EU AI Act are a brave stab at delivering guardrails on AI technology, they were begun in the era before generative AI. So while they deal comprehensively with pre-generative technology, their regulations on generative AI are something of a blunt instrument.

While there’s no legal framework in the world where those who are regulated get to say how far the regulations should go, OpenAI’s Sam Altman felt free to call the European approach “overregulation,” and went on a slightly desperate last-minute European tour, before the provisions of the Act were made public, in an attempt to get them amended.

Without regulations on AI, we’re doooooomed!

Speaking of Altman, he’s previously spoken to the likes of Senate subcommittees about what he believes – or at least is eager to make it appear that he believes – are the dangers of the technology which has made his name and fortune, up to and including complete human extinction.

It’s worth noting that in the wake of that testimony, he floated a security technology which could allegedly keep user data safe even from increasingly sophisticated generative AI, the like of which he was also keen to develop. And Altman has been back in the headlines just this week.

While he continues to acknowledge the feasibility of some of the wilder disaster-claims for generative AI (and the fact that we’re still in the very early days of the technology’s use, despite its wild breadth of uptake and application), Altman says, probably with the most open honesty of any of his recent statements, that there’s no putting the AI genie back in its bottle. Instead, he wants regulations on AI that make it safe from use by bad actors, without unfairly penalizing those who are trying to use the technology to advance humanity’s capabilities.

Are regulations on AI really needed?

“You crazy kids keep it down! If I have to come in there, there’ll be trouble…”

Which brings us to the Biden administration’s executive order.

While the White House had informal talks with some of the leading players in generative AI earlier in the year, and Democratic Senator Chuck Schumer has done some work on establishing initial guidelines on the technology, the new executive order is the most significant step the US government has so far taken towards a set of regulations on AI.

There are eight fundamental principles to the executive order:

  • Standards for safety and security
  • Protecting citizen privacy
  • Advancing equity and civil rights
  • Protecting consumers, patients and students
  • Supporting workers
  • Promoting innovation and competition
  • Advancing American leadership abroad
  • Ensuring responsible and effective government use of AI

The fundamental principles are both modern in terms of the technologies to which they apply, and distinctly Bidenesque – surfacing from under the radar with little by way of advance warning, heady with pragmatism and drenched in American motherhood and apple pie.

But there’s no denying that they also touch on many of the main concerns that have been raised with the application and use of generative AI so far.

“First word? Starts with D? Democratic oversight?!” Sam Altman of OpenAI.

Breaking down the Biden order.

On safety and security: the order requires makers of powerful AI systems to share their safety test results with the US government, and instructs the National Institute of Standards and Technology (NIST) to set rigorous standards for red-team testing of the safety of such systems before they’re allowed to be released for public use.

In addition, it provides for the establishment of an advanced cybersecurity program, to find and fix vulnerabilities in critical software, and establishes a National Security Memorandum to direct further actions on AI and security, so that the US military and intelligence community are bound to use AI safely, ethically, and effectively in their missions.

On protecting privacy: the order calls on Congress to pass bipartisan data privacy legislation to protect all Americans and their data. Such legislation on AI should include priority federal support for the development of privacy-preserving techniques.

Such AI legislation should also develop guidelines for federal agencies, so they can assess the effectiveness of available techniques to preserve data and personal privacy in the age of generative AI.

On equity and civil rights: the likelihood of generative AI ingraining social prejudices into the “way things work” has been demonstrated time and time again. The order demands that developers address algorithmic bias, and pledges the development of best practice in critical use cases like the criminal justice system.

On consumer, patient and student protection: the order commits the government to advancing the responsible use of AI in healthcare, and to provide a system to report any issues that arise from the use of AI in a healthcare setting.

It also commits the government to developing supporting resources to allow educators to safely deploy AI in the classroom.

On supporting workers: This is one of the biggest issues, because one of the biggest fears the public has is that AI will put them out of work. The order’s response might, to some, feel a little wishy-washy – it pledges to develop best practices and principles to “address” job displacement, labor standards, workplace equity, health, and safety, and data collection.

It also commits the government to producing a report on the potential impact of AI on workplaces, and any necessary mitigation strategies as we shift from a largely human workforce to a mixed human-system workforce.

On promoting innovation and competition: the order is on firmer, if no more original, ground. It will use the National AI Research Resource—a tool to provide AI researchers and students with access to key AI resources and data—and expand available grants for AI research in areas of national and international significance, like healthcare and climate change.

It will also promote the growth of a ground-up AI ecosystem by giving small developers and entrepreneurs access to technical assistance and resources, and helping small businesses commercialize AI breakthroughs. The idea behind that is not only to spread the general public’s knowledge and acceptance of generative AI, but also to ensure the technology doesn’t become a bottleneck technology of the extremely rich and the mortifyingly powerful.

On advancing American leadership abroad: possibly the most thoroughly Biden element of the order, it pledges that the US government will work bilaterally and multilaterally with stakeholders abroad to advance the development of AI and its capabilities.

Unless of course those stakeholders are Russian, Chinese, or presumably, given the latest state of the newest pre-global conflict in the world, Palestinian.

And on ensuring responsible and effective government use of AI: the order is on stronger ethical footing – it provides for rapid access by agencies to appropriate AI technology, the development of appropriate agency guidance for the use of that technology, and the swift hiring of expertise in such technology, so that the US government and its agencies can be as clued-up as they need to be in 2024 and beyond.

Of all the attempts so far to develop wide-ranging and effective regulations on AI, the Biden executive order is by far the most comprehensive.

How much of the order sees the long-term light of day, is taken up as a set of guiding principles internationally, or at this point even survives the 2024 presidential election, remains to be seen.