EU privacy debate rages regarding facial recognition firm Clearview AI

1 June 2021

Privacy concerns are rising about the use of facial recognition software by police forces. (Photo by David Becker / GETTY IMAGES NORTH AMERICA / Getty Images via AFP)

Privacy organizations last month complained to regulators in five European countries over the practices of Clearview AI, a company that has built a powerful facial recognition database using images “scraped” from the web.

Clearview’s use of images – including those from people’s social media accounts – to offer biometrics services to private companies and law enforcement “goes far beyond what we could ever expect as online users”, Ioannis Kouvakas, legal officer at Privacy International (PI), said in a statement.

While Clearview touts its technology’s ability to help law enforcement, critics say facial recognition is open to abuse and could ultimately eliminate anonymity in public spaces – pointing to cases like China’s massive public surveillance system. Facial recognition has also been criticized for identifying the faces of non-white people and women less accurately than those of white men – potentially leading to false positives.

Alongside three other digital rights organizations, PI has filed complaints with data regulators in France, Austria, Italy, Greece, and Britain. “We expect them to join forces in ruling that Clearview’s practices have no place in Europe, which would have meaningful ramifications for the company’s operations globally,” PI said.

In February, Canada’s privacy commissioner found that the firm’s activity “is mass surveillance and it is illegal” under the country’s privacy laws. And British and Australian privacy watchdogs last year launched a joint probe of their own. “Just because something is ‘online’ does not mean it is fair game to be appropriated by others in any which way they want to – neither morally nor legally,” said Alan Dahi, a data protection lawyer at Austrian privacy group Noyb.

Clearview AI came to public prominence in a January 2020 New York Times report that detailed how it was already working with law enforcement, including the US FBI and Department of Homeland Security. On its website, it boasts the “largest known database of 3+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and many other open sources”.

Founder Hoan Ton-That acknowledged to the NYT at the time that Clearview AI was breaching the terms of service of Facebook and other social media sites by gathering users’ photos. Facebook, Twitter, Google parent Alphabet’s YouTube, and Microsoft’s LinkedIn have all protested against Clearview’s practices.

And last year, tech firms including Microsoft and Amazon suspended sales of facial recognition software to police forces amid 2020’s Black Lives Matter protests. Amazon last week extended its original one-year moratorium “until further notice”.

But “tools for conducting facial recognition are widely available” even outside the big tech companies, according to journalist Nicolas Kayser-Bril, who compiled a report on the technology for advocacy group AlgorithmWatch last year and found at least 10 European police forces using it.