White House underscores responsibilities of the generative AI industry

Two-stranded support for AI comes with responsibilities.
9 May 2023

With great profit comes great responsibility…

The US government has announced a series of measures to address the challenges, opportunities and concerns surrounding artificial intelligence. In recent months, since OpenAI (backed by Microsoft) launched its ChatGPT generative AI, the technology has been seen as a) the new wundertech, being added to everything, b) an easy tool for cybercriminals to use to ruin the world, c) a potential threat to the lives and existence of humanity, and d) an annoying data privacy challenge.

If this sounds like hyperbole, it’s worth remembering the open letter from Elon Musk, Steve Wozniak and others, calling for a pause in generative AI development at the GPT-4 level. And the admittedly brief banning of ChatGPT in Italy while data privacy and security assurances were sought – and apparently received. And China’s clampdown on any generative AI technology that wasn’t developed within a strictly socialist training regime.

A two-stranded plan.

There’s little doubt that generative AI – with appropriate truth models for reference – can help speed up, simplify and improve the world in which we live. But there has been enough noise and concern about the technology, and in particular its sudden rapid deployment and potential for confident error, to stir the White House into action.

In particular, there are two strands to the government’s proposed action plan on AI.

The first strand includes the introduction of policies – and, it is assumed, standards on things like data privacy – for the procurement and use of AI systems by federal agencies.

Those policies are expected to significantly skew the market, within the US and possibly further afield, in terms of which systems are “approved.” Systems that make the cut will be able to leverage significant market force on the back of their “safe-listing” for use by federal organizations, or could even become federal-only suppliers – locking high standards of security into the AI systems used by local and national government agencies.

That in turn is expected to lead to a consistent level of AI security and experience across the likes of government and agency websites, as well as at things like security checkpoints (including potentially border points and airport security).

The second strand of the plan is to allow the National Science Foundation to spend $140 million on promoting research and development in AI. The idea is to create a handful of research centers investigating how AI can be used to advance the nation as a whole, setting the technology to problems like climate change, agriculture, and public health.

The double-stranded plan aims to both “legitimize” AI to some extent through the research centers and the multiplicity of essentially government-funded applications for the technology, and to set rules around what the technology needs to be able to deliver in terms of security, truth, and genuine assistance if it’s to be allowed near government systems.

The flip side of that is that when AI is used by government systems, there will be at least some level of certainty and legitimacy conferred on the technology as a whole.

Meeting the superstars.

The double-stranded plan was announced as Vice President Kamala Harris met with high-profile figures from the already rapidly expanding AI industry, including the CEOs of Google, Microsoft, and ChatGPT-creator OpenAI. The meeting was intended to underscore the importance of ethical, responsible, believable AI.

As yet, no announcement has been made on how, for instance, the AI technology used for federal systems will apply truth models, in the absence of an objective truth in things like ChatGPT, GPT-4, Bard and other similar generative AI.

After the meeting, VP Harris said “Government, private companies, and others in society must tackle these challenges together. President Biden and I are committed to doing our part — including by advancing potential new regulations and supporting new legislation — so that everyone can safely benefit from technological innovations.”

White House Press Secretary Karine Jean-Pierre, meanwhile, described the discussions between the Vice President and the AI supremos as “honest” and “frank.” It’s worth noting that had the word “productive” been in any reasonable sense usable following the meeting, she would have used it to instill a sense of positivity in the audience.

Threats and responsibilities.

The meeting is understood to have dealt with a range of pressing issues arising from AI technology, including: AI-created deepfakes and misinformation, the likes of which could be used to sway opinion in democratic elections; potential job losses linked to the rise of automation and AI; the possibility of biased algorithmic decision-making; physical injuries or deaths in autonomous vehicles; and, naturally, the rise and rise of AI-powered malicious hackers and hack-sellers, the likes of which are already operating on the dark web to “democratize” the hacking process.

Vice President Harris said generative AI companies had an “ethical, moral and legal responsibility to ensure the safety and security of their products,” and that they would be held accountable under existing US laws, while she was willing, if need be, to create new laws to meet the needs of the age.

President Biden, who apparently “stopped by for a surprise visit” – Washington code for “threw his weight behind his VP without officially taking the meeting” – is said to have “underscored that companies have a fundamental responsibility to make sure their products are safe and secure before they are deployed or made public.”

Reading the room, it feels as though the Biden-Harris administration read the AI industry a new-fangled version of the Riot Act, ensuring the companies understood that if they were going to make millions of dollars from this technology, and have it integrated into systems up to and including federal level, there was a burden of significant responsibility that would come with it.

Other plans.

The US is in no sense alone in looking for a way to deal with the rapidly evolving opportunities and dangers of generative AI. In fact, it could almost be said to be late to the party – the EU has been in the process of putting an AI Act together since 2021, which would set out rules for the use and function of artificial intelligence within the bloc.

The UK – which could reasonably be said to have had other things on its To-Do List since 2016, but which still aims to become what it calls a “cyber-superpower” – has been holding a parliamentary inquiry into AI technology throughout 2023, examining both the risks and the opportunities of the technology, and similarly trying to find a coherent path on the governance of AI, both existing and emergent.

While these ongoing inquiries and “discussions” take place, more and more AI is being deployed in an environment of relative freedom from the responsibilities that the likes of the White House are assessing.

The second half of 2023 is likely to see generative AI companies shift their focus from the gold-rush period of getting the technology out there and integrated into every conceivable system, to a more stable period in which the rules and responsibilities, as they eventually emerge, are met and demonstrated across the board – with a natural rate of fall-out among companies that cannot or will not comply.