Second “Godfather of generative AI” voices concern about the technology

Has anyone in the tech industry read Frankenstein? Creation turns on its creator? Ringing any bells yet?
31 May 2023

When the people who influenced its creation raise concerns…

• Second “Godfather of generative AI” expresses doubts and fears.
• Calls for large AI builders to be monitored by government.
• Especially hopes generative AI will not be used in a military capacity.

Worldwide, there are three central figures who have been granted the title – in the popular imagination, at least – of “Godfather of generative AI.” As of today, two of them have publicly warned of the risks of the technology.

Professor Yoshua Bengio of the Department of Computer Science and Operations Research at the Université de Montréal (and scientific director of the Montreal Institute for Learning Algorithms (MILA)) has announced publicly that had he realized the sheer speed and ubiquity with which generative AI would be adopted, he would have “prioritized safety over usefulness.”

Professor Bengio is a signatory of both the Future of Life Institute open letter – the significance of which was rather overshadowed by media focus on Elon Musk’s involvement in it – and the statement yesterday by the Center for AI Safety, which baldly claimed that generative AI belongs in the same threat-stratum as nuclear war and pandemics, and could lead to extinction-level harm for humanity.

The AI election and the military dimension.

While no-one is directly linking the dissolution of facts and truth with that extinction-level harm, analysts are already calling the 2024 US election “the AI election.” The increasing use of AI-powered deepfake videos and voices threatens to eradicate any notion of anything actually, definitely happening or not happening anywhere – favoring candidates who push a “fake news” agenda and/or those who describe events that make it to the mainstream news networks as “false flag” events.

While he’s a signatory of both recent calls to curb the development – or at least the deployment – of generative AI (which is significantly more powerful, and orders of magnitude more unpredictable, than pre-ChatGPT AI), Professor Bengio singled out the idea of restricting militaries’ access to generative AI technology as especially necessary.

That of course would require significant international co-operation – and with the world increasingly re-polarizing into blocs, it would for instance also rely on the success of projects like the US’ attempts to shut out China and Russia from military-grade semiconductors.

So it’s fair to say that, while it might seem to make ethical sense not to upgrade the world’s military machinery with the capacity to learn from previous results, the idea is extremely unlikely ever to be realized while the arms-race principle remains in force.

After all, the argument goes, if “we” don’t include AI in our military arsenal, and “they” do, simply by being more technologically advanced, “they” win.

Godfather II.

Professor Bengio follows the lead of Dr Geoffrey Hinton, his fellow “Godfather of AI,” who recently quit his position working for Google on its generative AI project, expressing regret at the way his life’s work had evolved, and the potential threat the technology clearly poses to human life – or at least, human life as we know it.

Dr Hinton also signed the Center for AI Safety’s statement.

Professor Bengio elaborated on his anti-military generative AI stance, reiterating a common tech industry concern: that the technology democratizes potentially too much, taking skill and understanding out of the equation and giving previously rarefied abilities to those who can use them for ill.

He said it didn’t matter whether it was militaries, traditionally identified “bad actors” or even particularly psychotic individuals – “it’s easy to program these AI systems to ask them to do something very bad. This could be very dangerous.”

The dark side of progress.

It’s hard to deny the logic of these concerns – leading security teams have already commented on the importance of putting these tools principally in the hands of those who have experience and know what they’re doing.

Democratizing coding and app-writing, for instance, means that neither the human nor the AI necessarily knows what’s wrong with the code they’ve written – or how to fix it.

It’s also true that Bots-as-a-Service is already a reality on the dark web, and that sales of both malware bots and phishing bots are already taking place (with the bots significantly increasing the sophistication available to phishing scammers almost overnight).

Professor Bengio added the caveat that is at the forefront of a lot of the recent calls to slow down the development of faster and smarter generative AI. “If they’re smarter than us, then it’s hard for us to stop these systems or to prevent damage.”

That’s a special concern now that generative foundation models are available out in the open-source world. Because while the likes of Microsoft-backed OpenAI and Google have done incredible work in getting ChatGPT, GPT-4 and Bard out onto the marketplace – and they’ve been snapped up by every business that can possibly use them – the rates of development and progress have skyrocketed in the hands of the open-source community, while compute-power and cost have plummeted.

While the models probably aren’t there yet, it’s feasible that, in the hands of the open-source community, generative AI will become smarter than human beings – and that it will reach that point in the relatively wild environment of open-source coding, significantly sooner than the tech giants are ready for.

So there’s definitely meat on the bones of Professor Bengio’s concerns – and arguably, if anyone would know the dangers involved in the rapid development and deployment of the technology, it’s these two men.

The complexity of regulation.

There has been movement recently towards some sort of regulatory framework, at least partially supported by OpenAI’s Sam Altman.

He recently told the US Senate that there really should be a regulatory framework around the development of more and more sophisticated generative AI models – though notably, he did so after the revelation that open-source coders had been developing faster, more bespoke, compute-lite, cheaper versions of generative AI than the multimillion dollar tech giant-backed front runners in the industry.

He then went on to describe the EU’s planned AI Act as an example of “over-regulation,” though, and vigorously campaigned to get it watered down – so quite where he believes the line on regulating the industry should be drawn is less than clear.

The White House also recently got the leading players in generative AI together for a genteel pistol-whipping, asserting that the industry needs to take responsibility for the technology if it’s going to make billions of dollars out of it.

That’s all very well, but again, as yet, does nothing to rein in any rapid-fire development and progress by open-source coders.

It’s not the apocalypse – but you might be able to see it from here.

Professor Bengio admitted to feeling “lost” in terms of the direction of his life’s work given recent events, but also acknowledged the responsibility of leading developers to engage with the questions currently dogging the generative AI industry, and “encourage others to think with you.”

The third of the acknowledged “Godfathers of AI,” Professor Yann LeCun, is Silver Professor at the Courant Institute of Mathematical Sciences at New York University, and Vice-President and Chief AI Scientist at Meta.

Given that it was a leaked version of Meta’s LLaMA that kick-started the open-source community’s race to overcome the problems of “traditional” generative AI, you’d be forgiven for thinking that Professor LeCun perhaps had the most invested in putting some kind of brake on the arresting pace of generative AI development.

But of the three Godfathers, he remains the only one not to voice concerns about the direction in which the technology is evolving.

In fact, he’s on record as saying that apocalyptic warnings about the impact of the technology are “overblown.”

It’s true that recent warnings have tended towards the apocalyptic – you don’t release headlines claiming extinction-level harm-potential without sounding at least a little hysterical.

But there are two things that it’s important to remember. On the one hand, these are experts in their field – and on simple playground math, two beats one any day of the week.

And on the other, there’s a fairly large gray area between “It’s the apocalypse! The machines will kill us!” and “This technology is entirely harmless and fit for human use.”

Can the big players be monitored?

Even the legendarily divisive Elon Musk – who was in on the ground level at OpenAI’s conception – recently acknowledged that while the likelihood of generative AI eradicating humanity was “small – close to zero, but not impossible,” we do not need to be within that small window of devastation for generative AI to make life significantly more miserable for humanity than it needs to be.

“I don’t think AI will try to destroy humanity, but it might put us under strict controls,” he said at an event hosted by The Wall Street Journal.

Science fiction fans have appreciated the weight of scientific ethical dilemmas since at least the mid-Seventies…

There’s a significant school of thought that maintains that making life devastatingly difficult for large swathes of humanity – as well as ultimately wiping out the species – is a job for which only humans themselves are properly qualified.

Finally, Professor Bengio said that all companies building powerful AI products should be registered, so that governments could track their development of the technology.

That’s a good idea in theory, but as the open-source community now has access to foundation models for generative AI, and is developing the technology significantly faster than the larger players, it may only address the most visible part of the market.

That said, the open-source community is driving the delivery of smaller, more bespoke AI models, so tracking only the bigger players might yet allow for the restriction of the potentially massively dangerous hyper-models that, for instance, might be used in military contexts.

The question is whether or not any – or all – governments will have the teeth to implement such restrictions.