Why responsible AI is everyone’s business

There is no conspiracy; we are all responsible and accountable for what happens next in tech.
15 February 2019

Marc Raibert, Founder of Boston Dynamics, TechCrunch Moderator Darrell Etherington and Spot at TechCrunch Disrupt London. Source: TechCrunch/AFP

Looking back at our analog past through the rose-tinted glasses of nostalgia, we see that it was governments, rather than private tech companies, that led the way in technological innovation.

Both the internet and space exploration were originally funded by government money, but somewhere along our digital transformation journey, large tech corporations started calling the shots.

In a digital age where space travel, autonomous vehicles, and artificial intelligence (AI) are changing the world as we know it, the lines between private technology companies and our governments are already blurring.

Fears of a Big Brother future recently prompted 85 human rights groups to write to Microsoft, Amazon, and Google, urging them to consider the implications of selling facial recognition software to government agencies.

However, Microsoft President Brad Smith told Business Insider, without any hint of irony or self-awareness, that it would be "cruel in its humanitarian effect" to stop government agencies from using facial recognition software.

But who will regulate how this technology is used?

AI Pandora’s box

The potential problems on the horizon will be caused by the lack of any single point of accountability. The tech companies will blame the government and vice versa. But rather than pointing the finger of blame at how either side leverages sophisticated technology, we need a global debate around the risks of playing with dangerous toys while having only a limited understanding of their capabilities and future implications.

The real problems occur when we feed human bias and discrimination into algorithms. Now that the AI Pandora's box has officially been opened, the implications will inevitably affect every member of the global community. With privacy, democratic freedoms, and human rights at stake, techies are beginning to question the effects that the technology they helped pioneer could have on society.

Canadian computer scientist Yoshua Bengio recently highlighted concerns around China's use of AI for surveillance and political control. As police forces across the UK begin to use algorithms to predict crimes, human rights campaigners are warning of biased decision-making by Big Brother-style thought police.

Recent news that popular genealogy website FamilyTreeDNA will be sharing its DNA data with the FBI also hit the headlines for all the wrong reasons. The inconvenient truth that law enforcement only needs 2 percent of a population's DNA to match anyone in the country, combined with the realization that an innocent person could be dragged into a criminal investigation because a cousin has taken a DNA test, should be seen as a wake-up call.

However, many users are becoming increasingly aware of the importance of privacy as the "if you have nothing to hide, you have nothing to fear" argument is retired. A perfect example of this change in thinking was the suggestion that Facebook's 10-year challenge was a manufactured narrative, designed so that data could be mined to train facial recognition algorithms on age progression and age recognition.

The cost of AI technology

There is no such thing as a free lunch. As we feast on a free digital smorgasbord of unlimited email, messaging, and seamless online photo sharing, we forget the adage "If you're not paying for the product, you are the product." In a world where tech is regularly labeled the bad guy, it's important to remember that it wasn't a technological inevitability that got us to where we are today; it was a very human creation.

Despite repeated warnings that we are heading towards a digital dystopia, the reality is that many of the current capabilities of AI, facial recognition, and machine learning are exaggerated. For example, Amazon's facial recognition technology could be deemed comically, or ironically, inaccurate when it incorrectly matched photographs of US Congress members to mugshots of suspected criminals.

If we look back to the infamous moment when Microsoft unleashed Tay, once again it was the human audience that ensured things quickly turned sour. The naive, teen-talking AI chatbot was corrupted by the online community and tricked into becoming racist and sexist within just a few hours of being deployed. Maybe the technology and devices that dominate our lives are also black mirrors that provide a dark reflection of modern society.

What if there was no conspiracy? Exaggerated claims and apocalyptic news stories are hiding the fact that AI in itself is nothing to be afraid of; it's just math. Essentially, algorithms hoover up vast amounts of data, process it, and produce correlations, recommendations, and in some cases decisions at the other end. The problem is our own inherent bias and discrimination being baked into that algorithmic decision-making.
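How bias flows from data into decisions can be sketched with a toy example (everything here is hypothetical and deliberately naive, not any real system): a model "trained" on historically biased decisions simply learns to reproduce them.

```python
# Toy sketch: a naive decision rule trained on biased historical data.
# All groups and records below are invented for illustration only.
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired)
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": compute the historical hire rate for qualified candidates per group.
stats = defaultdict(lambda: [0, 0])  # group -> [hires, qualified_total]
for group, qualified, hired in history:
    if qualified:
        stats[group][0] += int(hired)
        stats[group][1] += 1

def predict_hire(group):
    """Recommend 'hire' if qualified candidates from this group were usually hired."""
    hires, total = stats[group]
    return total > 0 and hires / total >= 0.5

# Two equally qualified candidates get different recommendations
# purely because of the group label in the biased training data:
print(predict_hire("A"))  # True  -- past hires for group A carry over
print(predict_hire("B"))  # False -- past rejections for group B carry over
```

The math is doing exactly what it was asked to do; the discrimination comes entirely from the historical data it was fed.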

Researchers are currently exploring how to remove human bias from algorithms. Thankfully, we still have time on our side to debate the big questions and the implications technology will have on society and business. The only way we can progress as a global community is to stop playing the blame game and accept our collective responsibility.

There is no great conspiracy, and nobody actually wakes up thinking they are the bad guy. Tech creators, governments, businesses, and every user must all share accountability and responsibility for technology that has the potential to change the world as we know it, for better or worse. But maybe that reality is even more complex than the technology itself.