Generative AI a threat to human survival – CAIS

A pandemic, a nuclear weapon and a generative AI walk into a bar... Nothing looks quite the same ever again.
30 May 2023

• New warnings from experts about the threat posed by generative AI.
• Potential to skew the 2024 election with AI deepfakes.
• The human challenge will be to disclose the use of deepfakes.

Generative AI is as big a threat to human survival and society as pandemics or nuclear war. That’s according to the Center for AI Safety in a new statement, which practically begs the powers-that-be to take action to reduce what it calls the “risk of extinction” from the new technology.

There have, of course, been calls before now for slowdowns, re-thinks, and a re-corking of the bottle that held the generative genie. Academics and business leaders have warned that we don’t yet know enough about the technology to set it as free as we have in a wide range of businesses – businesses from which it’s unlikely we’ll be able to unpick it down the line.

Voices of concern.

Ironically enough, the open letter from the Future of Life Institute, the first organization to call for a pause, was probably robbed of some of its substance by Elon Musk’s involvement in the call.

Despite being involved in the original founding of OpenAI, Musk is a divisive figure, and his time as CEO of Twitter has only deepened that reputation. Many people, even in the tech industry, will have seen his involvement in the Future of Life letter as cynical and self-serving, and so dismissed whatever validity the letter’s warnings contained.

When the so-called “godfather of AI,” Geoffrey Hinton, subsequently left Google, citing significant concerns over the development of the technology and its potential to end human life, the world took rather more notice, because he was a figure at the forefront of the research that got us to where we are.

The disaster movie cliché.

Hinton, it should be noted, is a signatory of the new statement from the Center for AI Safety. As is OpenAI CEO Sam Altman. And John Schulman, co-founder of OpenAI. As are both Kevin Scott, Chief Technology Officer at Microsoft, and Eric Horvitz, the company’s Chief Scientific Officer. And Lila Ibrahim, Chief Operating Officer at Google DeepMind…

We’re not about to turn this into a roll-call of the great, the good, and the extremely clever, but as with the Future of Life Institute letter, the Center for AI Safety’s statement is signed by emeritus professors, AI specialists, and active researchers from some of the finest academic institutions in the US and the world.

And if there’s one tired cliché that can be relied upon in every science fiction B-movie out there, it’s that lots of clever scientists warn of the impending disaster at the start – and are ignored, with devastating, popcorn-chewing results for the next 90 minutes.

Pandemics, nuclear war, generative AI.

As if the Future of Life Institute letter, which openly talked about the potential of generative AI to lead us to extinction, wasn’t bald and hysterical-sounding enough, the Center for AI Safety makes precisely zero bones about the scale of the problem it claims generative AI poses.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

That’s the whole of the statement.

The tech and business communities have already identified a solid handful of risks inherent in the wholesale application of generative AI. In essence, it democratizes altogether too many processes and puts them in the hands of well-meaning idiots.

Or indeed, harm-meaning idiots.

Those processes can include developing shell scripts and apps from prompts alone, in a coding language you have no idea how to write – taking the expertise out of programming.

They can include writing copy that is persuasive and engaging, and yet objectively, factually wrong.

And they can include creating phishing and malware bots with little real understanding of the technology involved.

Generative AI deepfakes – the death of truth?

There are also significant concerns about the data on which leading – and already widely adopted – generative AI bots have been trained, and about the data they collect and can, on some level, own and reuse. That concern proved well-founded in the case of Samsung in early May 2023, when staff pasted proprietary code into ChatGPT.

The error forced the company to ban staff use of ChatGPT, for fear of giving away any more proprietary code to the generative AI.

But one of the biggest concerns over generative AI as we head into the 2024 election season is the technology’s fast-growing ability to create convincing deepfake images, videos, and even audio built on AI-cloned voices.

There is already deepfake footage circulating on the internet of Governor Ron DeSantis, a challenger for the Republican presidential nomination, spliced into an episode of The Office in a way that seems to discredit the governor – on this occasion at least, unfairly.

Fake narratives.

There are two points to watch in developments like this.

Firstly, former President Trump, who is – depending on the outcome of several lawsuits – running to be President again in 2024, was both a candidate and a president entirely unperturbed by a lack of evidence to support his claims.

Witness the entirely false narrative of a stolen election in 2020, which led to the Capitol insurrection of January 6th, 2021, with threats made to prominent figures on both sides of the partisan divide, including Speaker Nancy Pelosi and Trump’s own Vice-President, Mike Pence. Just this week, sentences of up to 18 years behind bars were handed down to some of the insurrection’s leaders.

Trump’s narrative, from even before he first won the White House, was that Democrats, the news media, and “the Deep State” were peddling what he called “fake news” – a label earned by anything other than the most fawning praise of his every move.

In a world where generative AI-based deepfakes are a widely available, increasingly cost-effective way of framing a narrative, it will be interesting to see what defence the news media has against the charge that it is using such technology to peddle an anti-Trump – and therefore, in the eyes of many voters, an anti-American – narrative.

It will hardly surprise anyone that former President Trump himself has already shared the DeSantis deepfake, without any disclosure identifying it as an AI fake.

A question of responsibility.

Secondly though, beyond the Trump factor, the increasing availability of generative AI-based deepfakes and voicefakes threatens the very nature of “truth” in any political campaign.

Just as coding still needs experienced coders who can tell what’s wrong with generative AI-written code and put it right, and just as copywriting increasingly carries disclaimers that copy was written by AI and fact-checked by human beings, so news organizations – and political organizations and figures – will need to own their use of AI-generated fakes whenever they deploy them, so that fakery can be distinguished from objective, fact-based reporting and video.

Unfortunately, if social media has taught us anything, it’s that facts stand up poorly against people’s need to feel vindicated in their own confirmation biases.

So in a world of readily available generative AI deepfakes, the notion that anyone can know the “objective truth” becomes ever easier to water down – both across the political spectrum and across the world.

The AI election.

Everyone from conspiracy theorists (“Shock newly discovered footage proves we never went to the Moon!”) to political partisans on both sides of the aisle, to China, to Russia, will be able to use the technology to “prove” their version of reality. All they need to do is not disclose that it’s an AI deepfake, and they can turn their audiences against any opponent they choose.

When what you see automatically becomes “the truth,” who wags your dog?

With some news organizations already calling 2024 “the AI election,” the big question is whether concepts like democracy and truth can actually survive long into the AI deepfake era.

As is often the case when the potential danger of generative AI is discussed though, the technology itself is not the real threat. It is, to plagiarize the NRA, only the gun in the hand of the user.

The true test of the AI election – and of a world with so much more generative AI underpinning everything we understand to be true (and on which we base our decisions) – is the honesty of intent of the people using the technology.

Will every news organization and every political campaign agree to signal the fakery of its content, every time it uses generative AI?

We are not, in the final analysis, a culture that has shown itself able to exercise such power responsibly in recent years.