Microsoft, Google clash on proposed EU facial recognition ban
Facial recognition is one of the most divisive topics in discussions around technology’s impact on society, and it seems there’s no consensus even among the world’s tech giants.
A leaked whitepaper from the European Commission (EC), reported this week, outlines the initial workings of a temporary ban on the use of facial recognition technology in public areas for up to five years.
Obtained by EURACTIV, and due to be published in February, the document said a ban would offer regulators the time required to work out how to prevent facial recognition from being abused by governments and businesses.
On Monday, citing the possibility the tech would be used for nefarious purposes, Alphabet Chief Executive Sundar Pichai backed the proposal. Microsoft President Brad Smith, meanwhile, dismissed a ban on facial recognition as heavy-handed.
“I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it,” Pichai told a think-tank conference in Brussels, Reuters reported.
“It can be immediate but maybe there’s a waiting period before we really think about how it’s being used,” he said.
“It’s up to governments to chart the course.”
Smith, however, touted the benefits of facial recognition, focusing on its use by NGOs in circumstances such as finding missing children.
“I’m really reluctant to say let’s stop people from using technology in a way that will reunite families when it can help them do it,” he said.
“The second thing I would say is you don’t ban it if you actually believe there is a reasonable alternative that will enable us to, say, address this problem with a scalpel instead of a meat cleaver,” he said.
“There is only one way at the end of the day to make technology better and that is to use it.”
In a blogpost on its AI products website, Google states “how useful the spectrum of face-related technologies can be for people and for society overall,” particularly in regard to protecting access to personal information and for “social good”, such as the use of face recognition to fight against the trafficking of minors.
However, it goes on to say that it’s crucial that these technologies are developed and used responsibly: they need to be fair, not reinforcing existing biases; must not support surveillance that violates internationally accepted norms; and must protect people’s privacy, providing the right level of transparency and control.
“That’s why we’ve been so cautious about deploying face recognition in our products, or as services for others to use,” the firm adds.
Microsoft, on the other hand, has been less cautious. While it turned down a request from California’s law enforcement to use its facial recognition technology in police body cameras and dashcams on the grounds that the technology could disproportionately affect women and minorities, the company has sold its technology to at least one US prison.
In 2018, the firm was also forced to deny that its facial recognition technology was part of the services it provided to Immigration and Customs Enforcement (ICE).
But Google itself is not without controversy surrounding facial recognition technology, having reportedly used subcontracted workers to collect face scans from members of the public in exchange for $5 gift cards, targeting people with dark skin, including homeless people and college students, in an effort to improve its facial recognition algorithms.
While Google has refused to sell its facial recognition technology, Microsoft has called for more careful federal regulation: “‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade,” Smith wrote in an open letter last year.
“But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”
The EC’s whitepaper follows the organization’s stated intent to bring more stringent, GDPR-like regulation to wider Artificial Intelligence (AI) development in Europe, supporting transparency, data privacy and control, and human agency.
On this harder-edged regulation of wider AI, Pichai urged regulators to take a “proportionate” approach, “balancing potential harms with social opportunities.”
The Alphabet chief suggested tailoring rules to different sectors; self-driving cars, he said, would require different rules from medical devices, for example, adding that governments should align their approaches and agree on core values.
But in contrast with Europe’s attitudes to regulation, the United States recently announced a framework of “light-touch” guidelines.
“Regulators must conduct risk assessment and cost-benefit analyses prior to any regulatory action on AI, with a focus on establishing flexible frameworks rather than one-size-fits-all regulation,” read a fact sheet issued by the Trump administration.
United States’ Chief Technology Officer Michael Kratsios wrote in a Bloomberg column that the US will continue to advance AI innovation based on “American values, in stark contrast to authoritarian governments […]
“The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation.
“Europe and our other international partners should adopt similar regulatory principles that embrace and shape innovation, and do so in a manner consistent with the principles we all hold dear,” he said.
Pichai has said it is important to be up-front about the negative potential of AI. While the benefits are huge, there are real concerns about ethical issues and potential for misuse.
In regard to ‘deepfakes’, video or audio clips which have been manipulated using AI, Google has released open datasets to help research communities detect and combat the issue.
Meanwhile, Google continues not to offer its cloud-based facial recognition APIs until policy and safeguards are established.
19 February 2020