It’s time to start regulating facial recognition technology
When it comes to artificial intelligence (AI), there is no shortage of hyperbole. Elon Musk, for example, has been quoted as saying it is “more dangerous than nuclear weapons” and could lead to an “immortal dictator”.
Others, like Reddit co-founder and venture-capital investor Alexis Ohanian, rubbish Musk’s claims: “Elon is writing a great screenplay for a Black Mirror episode,” Ohanian has quipped. Even so, he acknowledges that we should worry about uses of facial recognition such as China’s pervasive network of intelligent surveillance cameras, which track criminals and ordinary citizens alike across its urban centers.
That brings us to the topic of regulation, and more specifically how it should be applied in an emerging technology field where the advantages and the privacy-invasion drawbacks could be equally potent. In a recent report, the influential AI Now Institute pinpointed facial recognition technology as a key challenge for the public and for governments.
Deep learning is widely credited as the main catalyst behind the rapid development of facial recognition technology. The report suggests that the US government needs to improve regulation in this area: “The implementation of AI is expanding rapidly, without adequate governance, oversight, or accountability regimes,” it states.
As AI becomes more pervasive in health, education, criminal justice, and welfare, the report recommends that the government bodies overseeing those domains be roped in to regulate AI issues, since each domain has its own hazards, histories, and regulatory frameworks to contend with. The clarion call for more responsibility focuses on stronger consumer protections: the report suggests technology companies should “waive trade-secret claims” when the accountability of these systems is under scrutiny.
In most cases, the public has no idea when facial recognition is being used to monitor them. According to the report, people should be notified of such surveillance and given the right to reject it.
Facial recognition services are now used to unlock phones, enable payments, and check in for flights, and the US Secret Service is developing a facial recognition system for the White House. Worryingly, facial recognition systems have also shown an inherent level of bias.
One example of how immature this technology remains dates back to 2018, when the ACLU used Amazon’s Rekognition platform to compare photos of members of the US Congress against 25,000 publicly available mugshots. A huge furor erupted after the software incorrectly matched 28 lawmakers to criminals.
The false matches disproportionately involved African-American and Latino lawmakers, and racial bias appears to be prevalent in other commercially available facial recognition systems as well. Another trend the report highlighted is ‘emotion tracking’ in face-scanning and voice-detection systems.
Discriminatory practices such as tracking students’ emotions are already being carried out, even though the underlying science remains unproven. AI Now co-founder Kate Crawford is adamant that such practices are neither scientific nor ethical. Speaking to industry press, she stresses that it’s time to regulate both facial recognition and affect recognition.