Why bans on facial recognition could stifle tech innovation
Facial recognition in the public sector once seemed out of reach. Early facial analysis algorithms didn’t produce accurate or reliable results, systems were disproportionately inaccurate across certain genders and races, and the infrastructure needed to power them demanded a significant IT effort to maintain.
However, the companies that provide facial recognition systems have matured the technology significantly over the last decade, making facial recognition applications faster, more accurate and easier to operate. Significant investments in machine learning (ML) technology and other technical infrastructure made those gains possible. Modern face recognition software improves because its creators painstakingly train the algorithms with larger and more diverse data sets.
That training leads to lower error rates across a broader range of races and genders. And thanks to the democratization of ML by large tech companies like Google, Amazon and Microsoft, even small and medium-sized facial recognition providers can take advantage of turnkey toolsets and infrastructure to improve accuracy.
But as significant as advancements in the technology may be, there are still ethical conflicts around privacy and the use of facial recognition applications in the public sector. While Microsoft will not sell its solution to government buyers, Amazon has affirmed it will continue contracting with government entities as long as they “follow the law.” The company asserts a technology shouldn’t be banned simply because the potential for misuse exists.
The debate isn’t limited to the companies developing the tech. The increased interest in facial recognition by government agencies has raised the ire of watchdog organizations and citizens alike, who are concerned about the potential for human rights and privacy violations.
Clashing ideologies related to public safety applications and the potential for privacy violations have now come to a head in San Francisco. Earlier this year, city leadership voted to ban surveillance technology that uses facial recognition from being used by government agencies or police. The ban covers body cameras, toll readers and video surveillance devices as well as all iterations of facial recognition software and the information that it collects. While mass surveillance seems to be the intended target of the ban, the ordinance as written restricts any technology that uses or creates biometric data.
It’s clear that city leaders had the right intentions with this action — the city does, after all, represent progressive ideals and is seen as the center of technology innovation. However, the ban is so broad that it might very well stifle the innovation the city is known for and restrict opportunities to improve public safety in the process.
Banning facial recognition outright is a mistake
Facial recognition has valuable and life-saving potential if deployed correctly, and it is incumbent upon government agencies to take the lead in crafting smart legislation and ethical frameworks for the use of the technology.
Innovation at Departments of Motor Vehicles (DMVs) across the US shows how this technology can be applied in a non-invasive manner. Many agencies currently use the technology to reduce identity theft and prevent the issuance of fraudulent IDs. However, bans like San Francisco’s could threaten helpful applications like these or, at the very least, pose costly legal challenges.
We have seen the success of facial recognition technology firsthand in the public sector. At one state government agency, workers identified 173 fraudulent transactions over 12 months using facial recognition at a DMV location, and nearly 30 percent of these involved attempts to steal another resident’s identity using stolen information. Given the pervasiveness of the threat, all-out bans on technology like facial recognition represent a step backward for public safety.
Government leaders need more education to prevent bad legislation
The San Francisco ban caused a surge of interest in how facial recognition, and the technology that powers it, works. That’s a positive development. The public and watchdog agencies deserve clear, honest answers to their questions. If technology companies can deliver transparency coupled with accurate results, it will start building trust. Ultimately, a proper balance of privacy and public safety is possible if both sides are willing to engage.
So, let’s clear up some major misconceptions about how facial recognition works. For one, the technology is not a binary, definitive system for identifying suspects and criminals. The word “recognition” says everything you need to know: the system produces a probability, not a legal identification of a person the way a fingerprint does. Law enforcement and judicial courts use fingerprint biometrics to identify people; you will never hear that “identification” language applied to facial recognition.
Instead, it acts as more of a lead for law enforcement, yielding a match probability that is then analyzed by a trained professional to conduct a broader investigation. At the end of the day, the technology doesn’t serve as the final adjudication in a criminal case.
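To illustrate the “probability, not identification” point, here is a minimal sketch of how a matching system scores candidates. The embeddings, record names and the 0.80 threshold are all hypothetical assumptions for illustration; real systems use high-dimensional vectors and tuned thresholds.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, threshold=0.80):
    """Return (name, score, is_lead) tuples sorted by similarity.

    Scores above the hypothetical threshold flag a record as a lead
    for a trained examiner to review -- never as an identification.
    """
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [(name, round(s, 3), s >= threshold) for name, s in scored]

# Toy 3-dimensional embeddings; production embeddings have hundreds
# of dimensions produced by a trained neural network.
gallery = {
    "record_A": [0.9, 0.1, 0.3],
    "record_B": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.28]
leads = rank_candidates(probe, gallery)
```

Note that the output is a ranked list of scored candidates, not a verdict: the top score only tells an examiner where to start looking.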
Additionally, facial recognition technology may yield inaccurate results if not properly trained. Realistically, the only way to allow these systems to improve is with more data. Machine learning can help software and algorithms improve with repetition and iteration over time. But to reduce the potential for inaccuracy, developers need the ability to test this technology in the real world in non-criminal use cases (e.g., point-of-sale authentication in a school cafeteria).
It must be made clear to both government agencies and the public that this technology is neither a total remedy for public safety nor a completely nefarious tool for privacy invasion. Rather, the technology as it exists can act as a helpful guide for law enforcement to gather leads, identify patterns and aid officers. Non-criminal use cases for the technology also exist, and developers are launching those solutions every day.
Facial recognition technology needs room and time to mature
Before any more bans are considered, leaders in the public sector must educate themselves further on the issue, and support regulations that harness the best of the technology while preventing misuse. In doing so, they act in the best interest of the constituents they have sworn to protect. These systems still require time and data to mature. It’s critical that we as citizens — as well as our leaders — understand the technology and the potential value it can create before instituting short-sighted bans.
This article was contributed by Kevin Freiburger, Director of Identity Programs, Valid.
17 October 2019