We need ‘action’ for responsible AI, says Microsoft

Microsoft says it's time for companies to ‘avoid a race to the bottom’ with artificial intelligence technology.
10 December 2018

Microsoft Corporation chief executive Satya Nadella speaks during the Viva Technology trade fair in Paris. (Photo by GERARD JULIEN / AFP)

Microsoft’s CEO, Satya Nadella, has urged companies in the private and public sectors to “take action” in building AI (artificial intelligence) responsibly, in order to avoid a “race to the bottom”.

Making the statement on Twitter, Nadella said, “we’ve seen how AI can be applied for good, but we must also guard against its unintended consequences”.

Nadella’s tweet linked to a blog post written by the company’s President and Chief Legal Officer, Brad Smith, which calls for government regulation and responsible industry measures in the face of the advance of AI and, in particular, the use of facial recognition.

According to Smith, it’s time for governments to start adopting laws to regulate the use of advanced technology. Putting the ‘genie back in the bottle’ will be much harder if technology services spread in a manner that exacerbates societal issues.

A commercial ‘race to the bottom’, as Smith sees it, in which technology companies are forced to choose between market success and social responsibility, could be averted by a “floor of responsibility” governed by the rule of law.

The Microsoft exec argues that it’s time governments started getting proactive about the use of new technologies, warning that “certain uses of facial recognition technology increase the risk of decisions and outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination”.

Some of the issues that Smith alludes to come down to the data sets fed into AI algorithms. If these data sets misrepresent the parts of society the system is designed to serve, the AI will ‘learn’ from, and act on, that skewed information.

A study of facial recognition software by MIT Media Lab, for example, found that when the person in a photo was a white male the software was right 99 percent of the time, while error rates rose to as much as 35 percent when the subject’s skin was darker.

Meanwhile, one widely used dataset was estimated to be more than 75 percent male and more than 80 percent white.
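To make that mechanism concrete, the sketch below (a toy illustration using synthetic data and scikit-learn, not any real facial recognition system) shows how a classifier trained on a demographically skewed dataset can end up with very different error rates for the over- and under-represented groups:

```python
# Toy illustration of dataset skew: a model trained mostly on one group
# performs well on that group and poorly on the under-represented one.
# All data here is synthetic; the groups and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic samples whose true decision boundary depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# The training data mirrors the skew described above: one group dominates.
Xa, ya = make_group(8000, shift=0.0)  # heavily represented group
Xb, yb = make_group(500, shift=1.5)   # under-represented group
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on balanced test sets, accuracy collapses for the minority group.
for name, shift in [("majority group", 0.0), ("minority group", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name}: accuracy {model.score(Xt, yt):.2f}")
```

The model has simply fit the group it saw most of; the same dynamic, at far larger scale and dimensionality, produces the disparities the MIT study measured.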

Smith’s cautions regarding the use of AI, however, don’t stop at the risk of discrimination. He also argues that the widespread use of facial recognition technology will lead to “new intrusions into someone’s privacy”, with governmental use for mass surveillance risking encroachment on “democratic freedoms”.

Transparency and third-party testing

For Smith and Microsoft, while many unknowns remain, transparency and third-party testing should play a central role in the development of AI technology.

Legislation should require companies to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can easily understand. This, says Smith, could be achieved with a simple API.
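Smith’s post doesn’t prescribe a format for that documentation. Purely as a hypothetical sketch, a machine-readable ‘capabilities and limitations’ record served by such an API might look something like this (every field name and figure below is an invented placeholder, not any real service’s schema):

```python
# Hypothetical "capabilities and limitations" record a provider might
# expose via an API. All field names and values are invented placeholders.
import json

transparency_doc = {
    "service": "example-face-recognition",
    "intended_uses": ["photo tagging with explicit user consent"],
    "out_of_scope_uses": ["identification for mass surveillance"],
    "evaluation": {
        "overall_accuracy": 0.97,
        "accuracy_by_group": {"group_a": 0.99, "group_b": 0.90},
        "test_set": "described in the accompanying audit report",
    },
    "known_limitations": [
        "accuracy degrades in low light",
        "higher error rates on under-represented demographic groups",
    ],
}

print(json.dumps(transparency_doc, indent=2))
```

Publishing such a record programmatically would let customers, and the third-party testers discussed below, compare services on like terms.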

Meanwhile, companies should be engaged to conduct and publish tests of facial recognition services for accuracy and unfair bias. In “consequential use cases”, where decisions may create a risk of bodily or emotional harm to a consumer, or where human rights, fundamental rights, personal freedom, or privacy may be under threat, a meaningful human review of facial recognition results can go a long way toward ensuring fairness is upheld.
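What that combination of published bias testing and human review could look like is sketched below; the audit logic, threshold, and group labels are invented for illustration, not drawn from Smith’s post:

```python
# Illustrative sketch: report per-group accuracy for publication, and
# route consequential or low-confidence results to a human reviewer.
# The 0.95 threshold and group names are invented placeholders.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float
    group: str           # demographic group of the test subject
    consequential: bool  # e.g. policing, hiring, or benefits decisions

def audit(predictions, truths):
    """Per-group accuracy figures suitable for a published test report."""
    stats = {}
    for p, truth in zip(predictions, truths):
        hits, total = stats.get(p.group, (0, 0))
        stats[p.group] = (hits + (p.label == truth), total + 1)
    return {g: hits / total for g, (hits, total) in stats.items()}

def needs_human_review(p: Prediction, min_confidence=0.95) -> bool:
    """Escalate consequential or low-confidence results to a person."""
    return p.consequential or p.confidence < min_confidence

preds = [Prediction("match", 0.99, "group_a", consequential=False),
         Prediction("match", 0.72, "group_b", consequential=True)]
print(audit(preds, ["match", "no_match"]))     # per-group accuracy
print([needs_human_review(p) for p in preds])  # [False, True]
```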

New laws should also require technology companies to comply with prohibitions on discrimination against individual consumers or groups. Smith believes that ‘notice’ and ‘consent’ requirements, similar to those applied to the use of consumers’ personal data under GDPR in Europe, should be introduced to provide privacy protection.

Fairness, transparency, accountability, non-discrimination, notice, and consent, as well as lawful surveillance, are the main tenets of ensuring ‘responsibility’ in facial recognition development, said Smith. These are areas Microsoft will be working on next year, in the hope that governments will follow a similar line.

Adding weight to Microsoft’s stance, a new report from the AI Now Institute states that the implementation of AI is “expanding rapidly, without adequate governance, oversight, or accountability regimes,” and that facial recognition in particular represents a key challenge for the public and governments.

The report issued several recommendations for law and policy-makers, including:

  • The expansion of powers of sector-specific agencies to oversee, audit, and monitor these technologies.
  • Stringent regulation to protect the public interest.
  • New approaches to governance.
  • The waiving of trade secrecy acts that stand in the way of accountability in the public domain.
  • Technology companies required to provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  • Consumer protection agencies applying ‘truth-in-advertising’ laws to AI products to avoid misrepresentation.
  • Technology companies committing to addressing the practices of discrimination in their workplaces.
  • A detailed accounting of the ‘full-stack supply chain’ in the interest of fairness, accountability, and transparency in AI.
  • More funding and support for litigation, labor organization, and community participation on AI accountability.
  • University AI programs expanding toward humanistic and social disciplines, beyond science and engineering.