Expert leaves Google, warning of the dangers of generative AI

The most heavyweight AI expert yet warns about the direction the technology is taking.
2 May 2023

We’re creating something different to ourselves, says Dr Hinton.

Dr Geoffrey Hinton, a leading figure in the development of AI as we currently understand it, has resigned from Google, where he was famous for his work on deep learning and neural networks – systems which, it’s widely accepted, have led us to developments like ChatGPT and GPT-4. He now says he regrets his work, and has grave concerns about the potential consequences of generative AI.

When someone has been a leading figure in their field and then warns the world about it, one of two things can happen. On the one hand, the world can rock back on its heels and pause its often headlong rush into a technological new dawn. On the other, it can dismiss the thought leader's concerns and press on towards whatever may come.

Genies and bottles.

The likelihood is that too much money has been invested in developing generative AI systems – and the public has taken up the technology and its potential too quickly – for the generative AI genie to be put back in its bottle. But Dr Hinton’s concerns stand as the latest, and perhaps weightiest, argument against running too fast into a technological world that has not yet been adequately considered.

It is of course true that any revolutionary technology will find uses in a whole range of industries that its creators could never have predicted – that is, after all, the hallmark of a revolution.

But generative AI has already given nightmares both to a public fed on decades of science fiction about the rise of AI, and to longstanding scientists, technologists, and science-based businesspeople.

Some of those nightmares are already bearing fruit in the real world. At a recent summit held by Check Point, a multinational cyber-resilience specialist, leading strategists showed evidence of hackers using ChatGPT to put instant cybercrime kits into the hands of first-time users with no particular coding skills of their own – essentially democratizing the kind of threat once reserved for skilled insiders.

And while it’s true that the Check Point specialists said the malware that could be created with ChatGPT right now was neither innovative nor perfect, and would usually need refining at the end of the build by people who knew what they were doing, they added that the techniques for developing practically perfect phishing emails through ChatGPT were already there. Those phishing attacks, they said, would likely only get more convincing and more personalized as the technology evolved.

The dark side.

This dark side of generative AI’s capacity to democratize both creative and technical processes – allowing access to disciplines from coding to PR for people who don’t know what the right answer is – has been elucidated by both technologists and political strategists. In fact, Dr Hinton himself, in resigning, referred to the capacity of generative AI to make it easier for authoritarian governments to instil their chosen reality in their people as fact.

That comes in the wake of the Chinese government clamping down on the development of GPT-style platforms in China unless they are based on solidly socialist principles – ironically, though possibly not as ironically as Western governments would like to assume, on the basis that versions developed along capitalistic lines could be used to enforce a flawed understanding of reality and win an ideological war of viewpoints.

All of this is fairly well understood, including by Dr Hinton, but in his media appearances explaining his resignation, he said his reservations about the technology went beyond any of the “practical” problems that running to embrace generative AI and its electronic offspring could bring.

A different way of learning.

“Right now, they’re not more intelligent than us, as far as I can tell,” he assured news outlets. “But I think they soon may be.”

The leading cognitive psychologist and computer scientist explained that the way AI chatbots worked made them superlative learning machines. Their uniformity of information processing meant that, for instance, once one system knew a thing, all similar systems could instantly learn it – like one generative AI consuming all the knowledge it takes to earn a law degree in a relative handful of heartbeats, and all generative AIs then behaving as though they had earned one.

“Right now, things like GPT-4 eclipse a person in the amount of general knowledge they have, and they eclipse them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that,” said Dr Hinton.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
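
To make that concrete, here is a minimal sketch in Python of the weight-sharing Dr Hinton describes, assuming a toy one-parameter model and plain gradient averaging; the function names and data are illustrative, not drawn from any real system. Each “copy” learns from its own data, but all copies read and write a single shared weight, so a lesson learned by one is instantly known to all.

```python
# A minimal sketch of weight sharing among digital "copies" (illustrative only).
# Each copy computes a gradient on its own data shard; the averaged update is
# applied to the one shared weight that every copy uses, so whatever one copy
# learns, all copies effectively know at once.

def gradient(weight, x, y):
    """Gradient of squared error for a one-parameter model: y_hat = weight * x."""
    return 2 * (weight * x - y) * x

def train_step(shared_weight, shards, lr=0.01):
    """Every copy contributes a gradient from its own shard; the average
    is applied to the single shared weight all copies read from."""
    grads = [gradient(shared_weight, x, y) for (x, y) in shards]
    return shared_weight - lr * sum(grads) / len(grads)

# Three "copies" each see a different example of the underlying rule y = 3x.
shards = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)

print(round(w, 2))  # ~3.0: every copy now "knows" the rule, though each saw only part of the data
```

Real systems do this at vastly greater scale, synchronising billions of weights across thousands of machines, but the principle is the same one Dr Hinton points to: identical digital copies pooling what they learn, in a way no group of biological brains can.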

The nightmare scenario.

Asked, in a sense, to contribute his expertise to a global robo-psychosis by giving his nightmare scenario, Dr Hinton responded simply. “Imagine a bad actor like Putin decided to give robots the ability to create their own sub-goals. Then imagine they created a sub-goal like ‘I need to get more power.’”

In leaving Google and making his statements about the kind of intelligence that humanity is currently actively running to create, Dr Hinton was clear that he had no grievance against the company, and actually had good things to say about the way research had been run and conducted there. He left, he said, so that his warnings “are more credible if I don’t work for Google.”

Jeff Dean, Google’s current Chief Scientist, said the company would remain committed to “a responsible approach to AI.”

The first ten minutes.

What that means in terms of any alteration to the way Google – and other leading AI research firms – push forward with their research is unlikely to amount to a change of direction in how AI is developed; arguably, that direction can’t be changed at this point. But a greater appreciation of the potential pitfalls – and of the greater potential for bad actors to take well-intentioned scientific progress and use it for negative outcomes – might be helpful.

But if the spectacle of a top scientist who has worked on a technology having his warnings ignored, poorly acted on, or minimally understood looks familiar, that’s because it’s the first ten minutes of every science fiction disaster movie of the last 70 years. That doesn’t mean we’re destined to make mistakes – it just means we’ve been given guidance by a scientific authority on the road we’re treading, and where it could lead.

What we do with that information will determine what kind of movie we end up living in.