Amazon bans facial recognition tech from police for one year
- Amazon is issuing a one-year moratorium on the use of its facial recognition technology by law enforcement
- The announcement comes after IBM said it was pulling its work with facial recognition tech, urging ‘national dialogue’
- Discussions around the technology have come to the fore in light of Black Lives Matter protests
Amazon says it will ban law enforcement agencies from using its facial recognition for one year, allowing US lawmakers time to introduce legislation to regulate the use of the technology.
The moratorium comes just a day after IBM said in a letter to Congress that it would discontinue its development of “general purpose” facial recognition or analysis software, urging a “national dialogue” on the use of the technology by police and government agencies like Immigration and Customs Enforcement (ICE).
IBM cited concerns that use of the technology was unregulated, and could be used for mass surveillance and racial profiling. Despite advances in AI and progress toward representative data sets, facial recognition technology is widely seen to be biased along the lines of age, gender, race and ethnicity.
Amazon’s Rekognition software, which has been used by police departments in the US, is one of the biggest players in the field, alongside smaller firms such as Clearview AI, which also sells its tech to police agencies and which has been told to stop using images scraped from social media.
Talking to TechHQ, Paul Bischoff, Privacy Advocate at Comparitech.com, called Amazon’s decision “welcome news” in light of the objectives of the Black Lives Matter movement: “At this critical moment in our history, now is not the time to empower police with the ability to identify protesters or restrict freedoms of movement and assembly.
“We need more regulation that stipulates how, when, where, and in what context police are allowed to use face recognition, and with whom the police can share face recognition data. Allowing police to purchase face recognition services without oversight could have serious consequences, both predictable and unforeseen.”
Long missing the mark?
Despite being a “leader” in the facial recognition market, demonstrations of Rekognition have shown that it lacks accuracy in identifying individuals.
A study in May this year found the AI software – which identifies individuals from their facial structure – incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots. In 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.
Discussion around the flaws of facial recognition gathered pace after the release of a study by MIT, which found that technology offered by three major technology companies was significantly better at accurately identifying the gender of light-skinned men, with error rates of just 0.8%, while for darker-skinned women, error rates “ballooned” to more than 20% in one case, and more than 34% in the other two.
The major technology companies in that study all claimed accuracy rates of more than 97%, but the data set used to assess their performance was more than 77% male and more than 83% white.
That study has been cited regularly in the context of facial recognition technology’s shortfalls, but also in wider discussions around ethical AI and the societal dangers of deploying programs which have learned from biased data sets.
A separate study by researchers involved in the MIT work focused on Rekognition in particular, and uncovered similar results, prompting a rebuttal from Amazon officials in a series of blog posts last year. An extensive group of AI researchers, who said last year that the technology should not be in the hands of law enforcement, responded that the firm’s efforts to clarify its technology “misrepresented the technical details for the work and the state-of-the-art in facial analysis and recognition.”
Even before this, in 2018, Amazon employees had expressed concerns about the technology. An anonymous worker claimed a group of 450 employees had sent a letter to Jeff Bezos and other executives demanding the firm stop selling “a system for dangerous mass surveillance” to law enforcement and institute employee oversight for ethical decisions.
“On stage, [Jeff Bezos] acknowledged that big tech’s products might be misused, even exploited, by autocrats,” wrote the anonymous worker. “But rather than meaningfully explain how Amazon will act to prevent the bad uses of its own technology, Bezos suggested we wait for society’s ‘immune response’.
“If Amazon waits, we think the harm will be difficult to undo.”
While Amazon has not confirmed whether federal agencies will still be able to use its technology, the firm said that within the next 12 months it will continue to “allow” its use by organizations such as those working to identify human trafficking victims and reunite missing children with their families.
Interesting, but not really that surprising. For the next year or so, the currently trained models are pretty useless – because masks.
If the cops were already acting on low quality matches, even @amazon can predict the legal (& thus commercial) outcomes of that.
And why… https://t.co/MSrU6Z1WbZ
— Phil Booth (@EinsteinsAttic) June 11, 2020