How Google navigates between good and evil AI

CEO Sundar Pichai recognizes that AI is a powerful technology and promises to approach its development with humility and a deep sense of responsibility.
11 June 2018

Sundar Pichai, CEO of Google, talking about AI. Source: AFP

There’s no doubt that artificial intelligence (AI) can do wonderful things for humanity.

From letting machines communicate with one another to build a more comfortable living environment, to supporting the autonomous vehicles that will transform what mobility means for the human race, AI can do it all.

However, it’s also a technology that can do a lot of harm. Whether by taking away existing jobs through the automation of repetitive tasks or by sending unmanned drones into war zones to target a certain individual or group with precision, AI is also making people fear it.

Elon Musk, one of today’s leading technologists, says that “AI is the biggest risk we face as a civilization.”

Google, however, recognizes that this powerful technology can and must be developed for the good of people and society.

In a blog post, its CEO Sundar Pichai said, “Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.”

“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” added Pichai.

Speaking for Google, he announced seven “concrete standards” that will actively govern the company’s research and product development and inform its business decisions.

The AI applications that Google reviews from here on will:

1. Be socially beneficial

Google expects AI to play a transformative role in healthcare, security, energy, transportation, manufacturing, and entertainment.

AI also enhances our ability to understand the meaning of content at scale, which is why the company hopes to use it to make high-quality, accurate information readily available, while continuing to respect cultural, social, and legal norms.

Google also intends to “thoughtfully evaluate” when to make its technologies available on a non-commercial basis.

“As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides,” explained Pichai.

2. Avoid creating or reinforcing unfair bias

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.

Google recognizes that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies.

In its use of AI, the company intends to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
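
To make the idea concrete, here is a minimal sketch of what auditing for one kind of unfair bias could look like: comparing a model’s positive-prediction rates across groups, a check known as demographic parity. The data, group labels, and metric are illustrative assumptions; the blog post does not prescribe any specific technique.

```python
# Hypothetical fairness audit: compare positive-prediction rates across
# groups (demographic parity). All names and data are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between any
    two groups, along with the per-group rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: model predictions (1 = approved) and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)              # {'a': 0.75, 'b': 0.25}
print(f"gap = {gap:.2f}") # gap = 0.50 -- a large gap warrants investigation
```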

3. Be built and tested for safety

Google aims to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

“We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research,” promised Pichai.

Where appropriate, the company will test AI technologies in constrained environments and monitor their operation after deployment.
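
As a hypothetical illustration of monitoring after deployment, the sketch below flags when a model’s live prediction rate drifts away from the rate measured in a constrained test environment. The class, window size, and tolerance are assumptions made for the example, not anything Google has published.

```python
# Hypothetical post-deployment monitor: flag drift between the prediction
# rate seen in constrained testing and the rate observed in production.
from collections import deque
import random

class DriftMonitor:
    """Compare the live positive-prediction rate against the baseline
    measured during constrained testing; flag excessive drift."""

    def __init__(self, baseline_rate, window=100, tolerance=0.10):
        self.baseline_rate = baseline_rate
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, prediction):
        """Record one live prediction (0 or 1); return True once the
        rolling positive rate drifts beyond the tolerance."""
        self.window.append(int(prediction))
        if len(self.window) < self.window.maxlen:
            return False  # not enough observations yet
        live_rate = sum(self.window) / len(self.window)
        return abs(live_rate - self.baseline_rate) > self.tolerance

# Toy stream whose positive rate (~0.60) drifts far from the 0.30 baseline.
monitor = DriftMonitor(baseline_rate=0.30)
for _ in range(500):
    if monitor.observe(1 if random.random() < 0.60 else 0):
        print("alert: live prediction rate drifted from the test baseline")
        break
```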

4. Be accountable to people

“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control,” said Pichai.
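
One common way to keep an AI system subject to human direction is to defer low-confidence decisions to a reviewer and keep an auditable record that supports explanation and appeal. The sketch below is a minimal, hypothetical version of that pattern; the confidence threshold and record format are assumed for illustration, not taken from Google’s post.

```python
# Hypothetical human-in-the-loop gate: act automatically only when the
# model is confident, otherwise defer to a human, and log every decision
# so it can be explained and appealed later.
from dataclasses import dataclass, field

@dataclass
class Decision:
    case_id: str
    label: str
    confidence: float
    decided_by: str                      # "model" or "human"
    audit_trail: list = field(default_factory=list)

def decide(case_id, label, confidence, threshold=0.90):
    """Route low-confidence cases to a human reviewer."""
    decided_by = "model" if confidence >= threshold else "human"
    decision = Decision(case_id, label, confidence, decided_by)
    decision.audit_trail.append(
        f"{case_id}: '{label}' at confidence {confidence:.2f}, "
        f"decided by {decided_by}"
    )
    return decision

d = decide("case-042", label="approve", confidence=0.71)
print(d.decided_by)   # "human" -- the system defers instead of acting alone
print(d.audit_trail)
```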

5. Incorporate privacy design principles

“We will incorporate our privacy principles in the development and use of our AI technologies. We will give an opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data,” suggested Pichai.
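
The post does not name specific safeguards, but one well-known example of an architecture with privacy protections is differential privacy. As a hedged illustration only, the sketch below releases an aggregate count with Laplace noise; the epsilon value and the query are hypothetical choices made to keep the example concrete.

```python
# Hypothetical privacy safeguard: release an aggregate count with Laplace
# noise, in the style of differential privacy. Epsilon and the query are
# illustrative assumptions, not anything Google published.
import random

def noisy_count(values, predicate, epsilon=0.5):
    """Count the items matching `predicate`, then add Laplace(1/epsilon)
    noise. A counting query has sensitivity 1 (one person changes the
    result by at most 1), so this release is epsilon-differentially
    private."""
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45]
print(noisy_count(ages, lambda a: a >= 40))  # true count is 3, plus noise
```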

6. Uphold high standards of scientific excellence

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration.

Google recognizes that AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences.

Hence, the company aspires to high standards of scientific excellence as it works to advance AI development.

“We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications,” explained Pichai.

7. Be made available for uses that accord with these principles

The company’s CEO recognizes that many technologies have multiple uses.

Hence, the company will work to limit potentially harmful or abusive applications. As it develops and deploys AI technologies, Google will evaluate likely uses in light of the following factors:

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have a significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

Further, the CEO announced that the firm will not pursue AI that can cause harm or injury, or that enables surveillance or other uses contravening widely accepted principles of international law and human rights.