Biometric security: handy, but easy to spoof

While biometric data may be unique – or at least rare – it’s not secret, which users should keep in mind when considering how their systems are secured.
2 August 2022

False impression: biometric data can leave systems vulnerable to attack if additional authentication factors are omitted. Image credit: Shutterstock.

Fingerprint scanners have become commonplace on smartphones – the same devices that have become the gateway to internet banking, telemedicine, and a whole suite of enterprise apps full of sensitive business information. Remembering passwords is a pain, so it’s no wonder that alternative ways of proving identity take off so quickly when they hit the market.

Touch era

Apple unveiled its ‘Touch ID’ hardware in 2013 on the iPhone 5S, and the fingertip-sized 500 DPI capacitive pad has been a familiar feature on its iOS products since then. The US firm wasn’t the first device-maker to put a fingerprint sensor on a phone, but it was one of the first to implement one well enough that fingerprints could be read reliably. The achievement is due to a combination of the hardware, the algorithms powering it, and the enrollment process – users press their digits against the sapphire surface from a number of angles during setup so that the necessary biometric details can be learned.

Components compete for space on modern smartphones, making a full-sized fingerprint reader too much of a squeeze for designers. Instead, the sensor is kept small and configured to capture portions of a user’s fingerprint rather than the whole thing – a win for minimizing the real estate required. But, as researchers have pointed out, these smaller portions are not as distinctive as a full fingerprint, raising the chances of an incorrect match.
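The effect of partial capture can be sketched with some back-of-the-envelope arithmetic: the more partial templates a device stores, the more chances an unrelated print has to falsely match one of them on each touch. Every figure below is an illustrative assumption, not a published specification.

```python
# Assumed numbers for illustration only – not vendor specifications.
p_patch = 1e-5             # assumed false-match probability per stored patch
patches_per_finger = 8     # assumed partial templates saved per finger
fingers_enrolled = 2       # assumed number of enrolled fingers

templates = patches_per_finger * fingers_enrolled

# Probability that an unrelated print matches at least one stored template:
p_any_match = 1 - (1 - p_patch) ** templates

print(templates)                 # 16
print(p_any_match > p_patch)     # True – roughly 16x the single-patch odds
```

For small per-patch probabilities, the combined odds grow almost linearly with the number of templates, which is why storing many small patches weakens the effective security of the sensor.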

By classifying sections of the fingerprint as arches, loops, or whorls, the security sensor is able to ‘codify’ the user’s skin pattern into a digital format. Interestingly, Apple notes that while it’s rare for sections of separate fingerprints to be alike – it’s not impossible, and the firm is upfront in stating that the probability of this happening is 1 in 50,000. For comparison, the lock on the front door of your house might have double that number of key combinations depending on the number of pins and pin heights of the mechanism inside the casing.
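The comparison above can be made concrete with some quick arithmetic. Apple’s 1-in-50,000 false-match figure comes from the article; the lock geometry is an assumed example of a common residential pin-tumbler mechanism.

```python
# Apple's stated probability that a section of someone else's
# fingerprint matches (from the article):
touch_id_false_match = 1 / 50_000

# Assumed lock geometry: a pin-tumbler lock's theoretical key space
# is (number of pin heights) ** (number of pins).
pins, pin_heights = 5, 10
key_combinations = pin_heights ** pins

print(key_combinations)  # 100000 – double the sensor's 50,000 'key space'
```

On these assumed numbers, even a modest five-pin door lock offers twice as many distinct “keys” as the fingerprint sensor’s stated false-match odds imply.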

Handful of attempts

To protect users, smartphone makers limit the number of fingerprint attempts that can be made before defaulting to a password request. But what if somebody has captured your fingerprint and isn’t relying on chance to fool the sensor? Device makers are keen to talk up the fact that biometric data never leaves the hardware, but users touch other surfaces all the time, leaving potential clues that suitably motivated adversaries could grab.
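The attempt-limiting policy described above can be sketched in a few lines. The limit of five attempts and the callback names are assumptions for illustration, not any particular vendor’s implementation.

```python
MAX_ATTEMPTS = 5  # assumed limit; real devices vary


def unlock(read_print, matches_enrolled, ask_passcode):
    """Try the fingerprint sensor a bounded number of times,
    then fall back to asking for the passcode."""
    for _ in range(MAX_ATTEMPTS):
        if matches_enrolled(read_print()):
            return True      # biometric match succeeded
    return ask_passcode()    # biometric path exhausted


# Capping attempts bounds an imposter's chance of a lucky false match:
p_one = 1 / 50_000
p_within_limit = 1 - (1 - p_one) ** MAX_ATTEMPTS  # just under 1 in 10,000
```

The design point is that the cap turns a brute-force search into a bounded gamble: with only five tries at 1-in-50,000 odds each, a random finger gets through less than 0.01% of the time before the device demands the passcode.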

More than 20 years ago, researchers showed that a replica fingertip made of gelatin – patterned from fingerprints lifted from glass, photographed, and etched into a copper mold – could be sufficient to fool some sensors. More recently, in 2013, security researcher Jan Krissler raised further awareness by showing how a smudge on a mobile phone could be enough to betray a user’s biometric details and grant access via the fingerprint sensor – armed with little more than wood glue and conductive carbon powder. It’s another reason to keep your devices close to hand, but what about fingerprints left on view in public?

Found objects

In 2014, Krissler famously recreated the fingerprint of Ursula von der Leyen – then the German defense minister and now president of the European Commission – by snapping pictures of the politician’s thumb at a press event. The stunt drew attention to the fact that while biometrics may be unique, or at least rare, they are not necessarily secret. All this explains why, despite the convenience, they don’t make a perfect replacement for passwords. And, if you weren’t already convinced, security researchers provided a reminder in 2020 when an industry team showed that it was possible to defeat fingerprint locks integrated into laptops between 60% and 95% of the time, depending on how the fingerprint had been obtained.

Other biometric setups are vulnerable too. Iris cameras have been fooled with photographs; so have facial recognition systems – as Dutch consumer group Consumentenbond showed in 2019 (26 of the 60 smartphones it tested could be unlocked with a good-quality portrait photograph of the user, according to its findings). And if fingerprints left in public pose a problem, then facial images are going to be even harder, if not impossible, to constrain, given their availability on the internet thanks to the boom in social media.

Raising the bar

Not all devices are so easy to fool. Apple writes that its Face ID algorithm relies on a ‘randomized sequence of 2D images and depth map captures’ to help combat digital and physical spoofs. But it is still vulnerable to individuals who look alike, and it won’t stop criminals from turning a phone on its owner if they are unfortunate enough to be a victim of crime.

And then there’s artificial intelligence, whose appetite for learning knows no bounds. Presenting their work at the ‘IEEE 9th International Conference on Biometrics Theory, Applications and Systems’, security researchers based at universities in the US showed how so-called ‘deep master prints’ – biometric-like images that coincidentally match a large number of other fingerprint-based data – could undermine identity verification [PDF].

What’s become clear is that biometric data is far from being the security rock that many choose to believe. As far back as 2016, global standards-setting giant ISO published a framework for biometric presentation attack detection (ISO/IEC 30107), acknowledging that, “Biometric data can be easily obtained directly from a person, online, or through existing databases and then used to create spoofs (or fakes) to mount an attack.”

The cornerstones of authentication haven’t changed – namely, ‘something you know’, ‘something you have’, and ‘something you are’. But relying on a single factor alone, rather than at least two (see NIST’s Digital Identity Guidelines, SP 800-63-3 [PDF]) – especially a factor that comes with such notable limitations – leaves users exposed.
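The multi-factor principle can be boiled down to a minimal sketch: grant access only when at least two distinct factor types have been verified. The factor names mirror the article; the function itself is illustrative, not any product’s API.

```python
# The three classic factor types named in the article:
KNOW = "something you know"   # e.g. a passcode
HAVE = "something you have"   # e.g. a hardware token
ARE = "something you are"     # e.g. a fingerprint


def grant_access(verified_factors):
    """True only when two or more *distinct* factor types check out."""
    return len(set(verified_factors)) >= 2


print(grant_access({ARE}))        # False – a fingerprint alone
print(grant_access({ARE, KNOW}))  # True – fingerprint plus passcode
```

The point of requiring distinct types is that spoofing a fingerprint – as the attacks above demonstrate – is not enough on its own; the attacker must also steal a secret or a physical token.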