Biometric standards and testing build trust in passwordless future

The long-term success of passwordless identity verification depends on solutions built on biometric standards and subjected to rigorous testing.
11 May 2023

Passwordless future: digitally signed biometric vectors could make it simpler for users to access services and help to reduce fraud. And industry standards and vendor testing are making sure that solutions live up to expectations. Image credit: Shutterstock Generate.


• Users can view algorithm performance comparisons on the web
• Biometric vectors contain no real-world data, which boosts security
• Biometric standards and testing provide a foundation for developers

Passwords are a pain. Pick something memorable and there’s a good chance the secret will be easy for bad actors to guess – for example, by looking up the names of your pets or relatives (and other commonly used password prompts) on social media. Password manager apps let users apply stronger policies and generate random passphrases, but the extra tools add friction to the workflow and put all of a user’s credentials in one basket. Consumers want a simpler way to prove their identity, which is why biometric approaches are growing in popularity. But there are important differences between biometrics and passwords, and the unique nature of biometrics makes it essential for vendors to follow industry standards and for products to undergo rigorous testing.

A password is (or should be) a secret, but faces, voices, and even fingerprints are everywhere and can’t be protected in the same way. Biometric identity solutions need to be able to differentiate between a copy and the real thing. And because biometrics can’t be reset, users need to be protected from having their identifiers stolen and used elsewhere. “It’s very important to choose biometric providers that use certified and tested technology,” Mikel Sánchez, Business Intelligence and Innovation Director at Veridas, told TechHQ.

Biometric testing and standards

Vendors can submit their algorithms for testing as part of official programs such as the Face Recognition Vendor Test (FRVT) operated by the US National Institute of Standards and Technology (NIST). The program examines how reliable facial recognition algorithms are at recognizing the same person and identifying impostors. And the ongoing work has recently been expanded to include challenges such as distinguishing between twins, which further raises the bar.

Having access to test results strengthens user confidence in the reliability of biometric solutions, and today’s algorithms perform well against common industry benchmarks. Most biometric engines can compare two images of the same person and classify them correctly 99% of the time, which surpasses human performance, according to research. And when it comes to distinguishing between different people, the false match rate can be as low as one in a million.
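To make those benchmark figures concrete, the sketch below (a hypothetical Python example, not NIST’s actual test harness) shows how a matcher’s decision threshold trades the false non-match rate – rejecting the right person – against the false match rate – accepting the wrong one – on simulated comparison scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated similarity scores: genuine pairs (same person) score high,
# impostor pairs (different people) score low. Real engines produce
# these scores by comparing biometric templates.
genuine_scores = rng.normal(loc=0.85, scale=0.05, size=10_000)
impostor_scores = rng.normal(loc=0.30, scale=0.10, size=10_000)

def error_rates(threshold: float) -> tuple[float, float]:
    """Return (false non-match rate, false match rate) at a threshold."""
    fnmr = float(np.mean(genuine_scores < threshold))   # right person rejected
    fmr = float(np.mean(impostor_scores >= threshold))  # wrong person accepted
    return fnmr, fmr

for t in (0.5, 0.6, 0.7):
    fnmr, fmr = error_rates(t)
    print(f"threshold={t:.2f}  FNMR={fnmr:.4f}  FMR={fmr:.6f}")
```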

Biometric standards and testing also help to bolster security by defining how presentation attack detection (PAD) mechanisms should be assessed. Vendors need to incorporate PAD into their biometric products to prevent bad actors from spoofing user data – for example, by showing a photograph rather than presenting a real face. And a huge amount of work has gone into incorporating so-called liveness detection into these systems.

Security labs such as iBeta carry out PAD testing in accordance with ISO/IEC 30107-3:2023 – the international standard that defines how mechanisms should be evaluated and reported on. And examinations account for different kinds of adversaries, from beginners with little experience to specialist attackers with access to more sophisticated equipment such as a 3D printer and latex masks.
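ISO/IEC 30107-3 reports PAD performance through error rates such as APCER (attack presentations wrongly classified as bona fide) and BPCER (bona fide presentations wrongly rejected as attacks). The snippet below is a minimal illustration of how those two rates fall out of labelled test outcomes; in the standard APCER is reported per attack type, whereas a single pooled rate is shown here for brevity, and the data is made up.

```python
# Minimal sketch of the two headline PAD metrics from ISO/IEC 30107-3:
#   APCER - share of attack presentations classified as bona fide
#   BPCER - share of bona fide presentations classified as attacks
# Test outcomes below are invented purely for illustration.

# (is_attack, classified_as_attack) for each test presentation
results = [
    (True, True), (True, True), (True, False),      # one spoof slipped through
    (False, False), (False, False), (False, True),  # one real user rejected
]

attacks = [flagged for is_attack, flagged in results if is_attack]
bona_fide = [flagged for is_attack, flagged in results if not is_attack]

apcer = attacks.count(False) / len(attacks)      # missed attacks
bpcer = bona_fide.count(True) / len(bona_fide)   # false rejections

print(f"APCER={apcer:.2f}  BPCER={bpcer:.2f}")   # APCER=0.33  BPCER=0.33
```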

And as trust in a passwordless future grows, so does the number of applications for digital identity verification and biometric authentication. For example, Veridas’ clients include financial services providers that use customers’ voice characteristics to enable a quick and reliable security check.

Customers first enroll their voice by calling a telephone number and speaking for just a few seconds; the system then generates a biometric vector that’s unique to them. When customers next speak to their bank, the system can authenticate them automatically – sparing agents from spending extra time on traditional security questions.
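Conceptually, that enroll-then-verify flow looks something like the sketch below. It is not Veridas’ implementation: embed_voice is a stand-in for whatever speaker-embedding model a vendor actually uses, and the similarity threshold is a placeholder.

```python
import numpy as np

def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a trained speaker-embedding model that maps a short
    audio clip to a fixed-length biometric vector. A real model makes
    different recordings of the same speaker land close together; here the
    vector is faked deterministically so the sketch runs end to end."""
    seed = int(np.abs(audio).sum() * 1e6) % 2**32
    v = np.random.default_rng(seed).normal(size=256)
    return v / np.linalg.norm(v)

def enroll(call_audio: np.ndarray) -> np.ndarray:
    # Only the vector is stored - never the raw audio.
    return embed_voice(call_audio)

def authenticate(enrolled: np.ndarray, call_audio: np.ndarray,
                 threshold: float = 0.7) -> bool:
    # Cosine similarity between the enrolled vector and the new call.
    score = float(np.dot(enrolled, embed_voice(call_audio)))
    return score >= threshold
```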

“The system is text and language independent,” explains Sánchez. “Callers don’t need to repeat the same phrase for enrolment and authentication.” Today’s algorithms are so effective that a customer could enroll speaking in one language and would still be correctly identified if they switched to another for authentication. “You can enroll in English and authenticate in Spanish,” Sánchez confirms. “The system compares the inner feature of your voice, not the actual words.”

The resulting biometric vectors have a number of properties that protect users. For example, they don’t contain any real-world data, which means operators can’t recover voices or facial images from the information. Each vector is also tied to specific services through digital certificates, so enrolled biometrics are only valid for selected activities and can’t be misused elsewhere.
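One way to picture that binding (a sketch of the general idea only, not Veridas’ actual scheme) is for the issuing operator to sign the vector together with a service identifier, so relying parties can reject the vector anywhere it wasn’t meant to be used. The example below uses an Ed25519 key from the Python cryptography library in place of a full certificate chain.

```python
# Sketch: tying a biometric vector to a single service with a signature.
# Key management and certificate handling are deliberately omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()

def bind(vector: bytes, service_id: str) -> bytes:
    """Sign the vector together with the only service it is valid for."""
    return issuer_key.sign(vector + service_id.encode())

def valid_for(vector: bytes, service_id: str, signature: bytes) -> bool:
    """A relying party checks the vector was bound to *its* service."""
    try:
        issuer_key.public_key().verify(signature, vector + service_id.encode())
        return True
    except InvalidSignature:
        return False

sig = bind(b"\x01\x02\x03", "bank-voice-auth")
print(valid_for(b"\x01\x02\x03", "bank-voice-auth", sig))  # True
print(valid_for(b"\x01\x02\x03", "online-casino", sig))    # False
```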

Today, biometric solutions are being used to rent vehicles, protect minors from gambling, allow retirees to receive their pensions remotely, speed up airport security, and allow fans into sporting events – to give just a few examples. “Biometrics offer a better user experience,” said Sánchez.

Combatting deepfakes

Just as artificial intelligence (AI) plays a role in extracting key biometrics such as voice features, it can also help to combat deepfake attacks – a rising threat as voice cloning and other tools become more widely available. One tell comes from how an attacker has to deliver a synthetic voice to impersonate someone. “We can detect that the audio is coming from a loudspeaker and not someone’s vocal cords,” Sánchez comments.

There are other giveaways, too, that audio has been generated synthetically rather than spoken by a human. The physical characteristics of our vocal tracts limit how quickly spoken sounds can change, but machines don’t follow the same rules and can generate vocal snippets that would be impossible for a human to produce.
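That rate-of-change idea can be illustrated with a toy heuristic: measure how quickly the short-term spectrum moves between consecutive frames and flag audio that jumps faster than natural articulation plausibly allows. The sketch below is a simplified illustration of the principle, not a production deepfake detector – the frame sizes assume 16 kHz audio and the threshold is arbitrary.

```python
import numpy as np

def spectral_change_rate(audio: np.ndarray,
                         frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Frame-to-frame spectral flux: how fast the spectrum is changing.
    Defaults assume 16 kHz audio (25 ms frames, 10 ms hop)."""
    window = np.hanning(frame_len)
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len, hop)]
    spectra = [np.abs(np.fft.rfft(f * window)) for f in frames]
    return np.array([np.linalg.norm(b - a)
                     for a, b in zip(spectra, spectra[1:])])

def looks_synthetic(audio: np.ndarray, threshold: float = 50.0) -> bool:
    """Arbitrary illustrative rule: flag clips whose spectrum jumps faster
    than a human vocal tract would typically manage."""
    flux = spectral_change_rate(audio)
    return bool(np.percentile(flux, 95) > threshold)
```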

The combination of these security defenses with biometric standards and testing is building trust in a passwordless future.