Are the threats around AI exaggerated?

IBM's David Kenny says we should stop fretting about AI because the potential threats are overstated. Is he right?
4 June 2018

Elon Musk is a tad concerned about AI taking over the world. Source: AFP.

Elon Musk has warned of AI creating an ‘evil dictator’, while the late Stephen Hawking said the technology could ‘end the human race’.

Neither perspective is a great endorsement for a technology that is revolutionizing almost every industry.

Yet David Kenny, IBM’s senior vice president of Watson and Cloud, has said such warnings are, quite simply, ‘not helpful’ and that the technology is already improving the world we live in.

“It’s making things safer in cybersecurity, it’s helping doctors and nurses and patients better find health care, it’s helping people be compliant and manage their tax codes, so I see all these great benefits from it. And I hate statements that make people afraid because I think that’s not helpful,” Kenny said in a phone interview with CNBC.

It’s been a long time in the making, but AI is finally having a strong impact, and much of it is positive.

It was over two decades ago that IBM’s Deep Blue beat chess champion Garry Kasparov, and since then AI has evolved to help us do everything from diagnosing cancer earlier to automating the dull, repetitive tasks that make both government and the private sector more productive.

At the same time, much of this has created anxiety about the future of work.

The US Bureau of Labor Statistics predicts that government workforces will see almost no job losses between now and 2024.

However, a recent study by Deloitte UK and Oxford University suggests that up to 18 percent of UK public sector jobs could be automated by 2030.

Which figure will be closest to the truth remains to be seen.

Kenny did concede that AI needs to be underpinned by a set of principles and needs to incorporate transparency and trust to mitigate potential risk.

“I actually believe that if businesses and builders of the AI platform really adopt principles of trust and transparency they can get all the goodness without the risk,” he said.

Whether this happy equilibrium can be achieved continues to be tested.

For example, questions were raised about the ethics of Google working with the US Defense Department on a project analyzing drone footage.

Critics are concerned the technology could be used at some point to help target and kill specific groups of people.

On the other hand, the military says the technology will only be used to improve weapon systems’ ability to detect objects, so that one analyst can do two to three times as much work as they do now.

Perhaps Kenny has a technologist’s typical attitude of optimism about the future of automation and AI, or he is in fact correct; either way, a healthy dose of caution and skepticism never hurt anyone.

Whether AI will ever reach the level of consciousness and reasoning depicted in TV shows like Westworld, the kind of scenario Musk and Hawking warn about, remains unproven, though few now doubt we will eventually reach that milestone.

However, leaders and technology firms still have the opportunity to find ways to regulate out risk. We are now well and truly on the AI train; we just have to make sure it is heading in the right direction.