Generative AI fears “preposterously ridiculous” says godfather III

Two versus one - the godfathers of AI are at odds over fears that the technology presents a danger to human beings.
19 June 2023

Generative AI fears “ridiculous” – no Terminators today, thanks.


• Generative AI fears have grown in many quarters in 2023.
• Two of the three “Godfathers of AI” have come out against how it’s developed.
• Godfather 3, Prof LeCun, says generative AI fears are “ridiculous.”

Generative AI became a reality in late 2022 with OpenAI’s ChatGPT, backed by many millions of dollars of Microsoft investment. Since then, big players in the tech industry, including Google, have raced to build their own large language models, on which generative AI is built. In the first half of 2023, though, generative AI fears have grown.

Governments and legislatures have been wary from the start – in a sense, a degree of caution about new technologies that are widely taken up by businesses and citizens alike is incumbent upon them as part of their duty of national care.

But generative AI fear has gone much, much further than that – including to some of those who have been most responsible for getting us to a point where it exists, and some of those who stand to become billionaires from its widespread adoption.

The letter.

The Future of Life Institute was the first major body to express that generative AI fear, when it published an open letter calling for a pause in the development of generative AI with capabilities greater than GPT-4 (at the time, the most powerful of OpenAI’s publicly available models).

The Institute grabbed headlines by attracting signatories like Elon Musk and Steve Wozniak. It all but demanded that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Subsequently, Italy banned ChatGPT for a short time over concerns about its data collection and use. Samsung, a major tech industry player, fell foul of the technology when staff fed some of its own proprietary code into ChatGPT – code which, technically, the system can now use.

The British Medical Journal expressed concerns that AI tools such as ChatGPT can “generate blocks of text that are so fluent and well-written that they are indistinguishable from content authored by human beings,” which raises concerns about their use in fraud and plagiarism.

And more recently, even Sam Altman, CEO of OpenAI, has admitted to Congress that large-scale manipulation and deception are among generative AI’s most dangerous potential capabilities.

“If this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”


Sam Altman calming generative AI fears in the Senate. Source: WIN MCNAMEE / GETTY IMAGES NORTH AMERICA / Getty Images via AFP

And he validated some generative AI fears about the technology’s impact on the world of work, acknowledging that it may “entirely automate away some jobs.”

He has subsequently been keen to suggest ways in which effective regulation might be drafted at sufficient speed to keep up with the evolving technology – while also regarding the EU’s attempt to create an AI Act as overly prescriptive.

AI fears peak with the CAIS statement.

Since then, the Center for AI Safety (CAIS) has added fuel to the flames of generative AI fears, with its open statement that equated the potential dangers of the technology with the likes of pandemics and nuclear war.


The Generative AI fears of the CAIS are ridiculous, according to Professor LeCun.

What gives such wild language any legitimacy is the level of experience of many of the high-level academics who signed the statement.

And more than anything else, generative AI fears have been legitimized by two people in particular.

In 2018, long before generative AI was a reality with which the world had to contend, three of the world’s top AI scientists won the Turing Award for their breakthroughs in deep learning. The three became known as “the Godfathers of AI.”

Dr Geoffrey Hinton was one.

Professor Yoshua Bengio was another.

Both have come out against the way the technology has been developing across the first half of 2023. Hinton left a lucrative job at Google so that he could speak freely about his concerns with the technology.

Recently though, the third of the godfathers, Professor Yann LeCun, currently chief AI scientist at Meta, has come out clearly in the media, declaring that generative AI fears are “preposterously ridiculous.”

The third godfather was always going to go his own way…

Breaking ranks with his fellow godfathers, Professor LeCun acknowledges that computers are likely to become more intelligent than humans – in itself, enough to spark generative AI fears in many people – but also counsels that that won’t happen for many years, and that if we see it happening in an unsafe way, “you just don’t build it.”

The open-source community.

That’s an interestingly laissez-faire position for Meta’s AI lead, given that Meta’s large language model was leaked to the open-source community, which is now creating its own faster, smaller, more focused versions of generative AI.

Professor LeCun claims that those who stoke generative AI fears do so “because they can’t imagine how it can be made safe.” He makes the point that turbojet engines weren’t safe when they first exploded into the world – but that they were designed and redesigned with increasing safety until they reached a point where they were generally considered acceptably safe.

The same, he argues, will happen with generative AI.

“Will AI take over the world? No, this is a projection of human nature on machines,” he told journalists.

The big "off" switch that makes generative AI fears ridiculous.

The big “off” switch that makes generative AI fears ridiculous, according to Professor LeCun.

Meta is currently pressing ahead with its own generative AI goals, which include developing objective-driven AI that would remember, reason, plan and have “common sense” – none of which are features of current generative AI like ChatGPT. Professor LeCun said that even when Meta achieves this goal, the AI will “still depend on a data center somewhere, which will have an off-switch.”

A blockbuster in the making.

How the movie of generative AI plays out in the real world is something that only time and experience will ultimately tell us.

Those with significant practical experience of current generative AI argue that Professor LeCun is right, and that it will take a significant leap for the technology to “escape” the confines of human control.

On the other hand, when it comes to godfathers of AI, two have already switched allegiances, sharing a degree of regret that their life’s work has developed down a pathway they find troubling.

Place your bets on the survival of the human race?