Businesses, policymakers ‘misaligned’ on what ethical AI really means

It's hard to agree on a solution if you can't agree on the problem.
1 September 2020

A facial recognition passport gate at Heathrow airport. Source: Shutterstock

  • Collaboration between the public and private sector will establish a more robust and holistic AI governance framework
  • A study by EY found discrepancies in the perspectives of AI ethics between public organizations and private firms

From autonomous vehicles to virtual assistants, artificial intelligence is becoming increasingly present in our daily lives, and yet we are really just at the beginning of the curve. 

Powerful and transformative though the technology is, its applications, which deal with vast amounts of data, are already triggering public unease. Continued adoption of this game-changing technology must be balanced with heightened scrutiny of policy, regulation, and ethics. 

The need for more stringent oversight is demonstrated by the increasing reliance we place on this technology in our daily lives — in the case of driverless cars, we’d be placing our lives in the hands of AI. But it’s also demonstrated in use by businesses and organizations. 

Flaws or incompleteness in the data used by facial recognition systems in law enforcement, for example, can lead to racial profiling or the misidentification of suspects, and at the very least add to the sense of an invasive surveillance culture.  

Tackling concerns like these is becoming a priority. Since 2016, more than 100 ethical guidelines have been published by governmental bodies, multi-stakeholder groups, academic institutions, and private companies. 

But for progress to be made and these efforts to be successful, all groups must be aligned. Coordination between stakeholders is critical to developing enforceable and standardized policies and approaches to governance that reflect the pace of AI development. 

That’s not the case, yet. According to consulting firm EY, findings from a global survey revealed diverging opinions between the public and private sectors when it comes to the regulation and governance of AI technologies.

In collaboration with independent think tank The Future Society, the report analyzed discrepancies between the public and private sectors’ use of AI, as well as the approach to policymaking. The global study gathered responses from over 70 policymakers and 280 private organizations spread across 55 countries. Asking for their views on ethics in AI, respondents ranked ethical principles based on importance across 12 different AI use cases.

One of the use cases discussed was biometric facial recognition. The Bridging AI’s Trust Gaps report pointed out that the use of AI for facial recognition check-ins, where cameras are deployed for faster and smoother check-ins at airports, hotels, and banks, is problematic. 

Policymakers rated “fairness and avoiding bias”, such as preventing the misidentification of individuals, as the top priority for this application of the technology, followed by “privacy and data rights” and “transparency.” 

Among private firms, however, the top concern was different: these companies ranked “privacy and data rights” first.

While this is just one example, EY experts warn that such misalignment between the public and private sectors poses a significant risk to the business landscape, since the two lack a shared approach to ethical AI. Policymakers and firms need to collaborate on defining ethical AI and work together to narrow the existing gap.

Gil Forer, EY global markets digital and business disruption leader, said: “As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks.”

Forer continued: “Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.”

Creating a vibrant and prosperous tech hub, benefiting both enterprise owners and consumers, will only happen when companies adopting AI technologies place ethics and regulation at the forefront of their plans, rather than treating them as afterthoughts.

Ed McLaughlin, president of operations and technology at Mastercard, told TechTarget, “Enterprises should build explainability, privacy, and security into their models from the start. Companies need to ensure they are benefiting consumers who entrust them with their data and lay out easy-to-read, understandable data privacy policies.”

As the role of AI continues to advance, EY shared three pointers for how policymakers and private organizations can begin better collaboration and alignment:  

# 1 | Consult and deliberate 

Take approaches that align stakeholders’ interests and are technically practical. Open consultation with a broad range of stakeholders is key. 

# 2 | Proceed with appropriate speed 

AI is moving fast and ethical concerns are real. Policymakers need to move quickly — but also carefully, with flexible and adaptive approaches. 

# 3 | Align globally

International coordination is needed to tackle issues consistently, mitigate global risks, and learn from leading countries.