Facial recognition – the journey to mainstream?

The future of facial recognition is very much opt-in.
21 December 2022

In our extended look at facial recognition technology, we’ve been talking to Terry Schulenburg, VP of Business Development at CyberLink, a leading facial recognition company, to find out whether the time is right for the often-maligned technology to go mainstream in the US. In Part 1, we examined whether it was ready for use in security settings, and in Part 2, we discovered some more commercial uses for the technology – and that once it’s in place, it often proves its worth. To conclude our examination, we asked Terry to close the circle for us, and sketch us a world in which facial recognition is universally accepted.

The killer applications.


Are there a handful of “killer applications” for facial recognition technology, where it will be adopted easily?


For me, the banking industry is a perfect example of one, and the insurance industry is another. In both cases, the system knows who I am. It has my data. Facial recognition just helps the user through the process faster and easier.

We’re doing it at airports every day. If you’ve been to an airport recently, you put your driver’s license in the device, and it compares the driver’s license to your face. That’s a one-to-one facial recognition match, but it’s not storing anything, which is where a lot of the public’s fear comes from – that idea that your face is being stored and can be used to blow your life apart. It’s just looking at your driver’s license, looking at your face and saying “Yep, that’s the same person.” And if that’s a legitimate ID, then everything’s good.

Those are things that are coming and those are the things that people are completely accepting of. Because of course, most people are not trying to get around the law. They’re not trying to sneak on an airplane as somebody else. Most people are legitimate. They’re trying to get through their day as smoothly as possible, and in cases like airports, they want to know that all the people getting on their flight have been verified through the same process.
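The one-to-one airport check described above can be sketched in a few lines. This is a minimal illustration, not CyberLink’s implementation: the `cosine_similarity` helper, the 128-dimension embeddings, and the 0.6 threshold are all assumptions, and in practice the embeddings would come from a trained face-recognition model. The key property is that nothing is enrolled or stored – two embeddings are compared and then discarded.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(id_embedding: np.ndarray,
                      live_embedding: np.ndarray,
                      threshold: float = 0.6) -> bool:
    """One-to-one match: compare the embedding computed from the ID photo
    against the embedding from the live camera. Nothing is stored."""
    return cosine_similarity(id_embedding, live_embedding) >= threshold
```

A match is simply “same person or not”; there is no database lookup, which is why this mode sidesteps the storage fears described above.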


In Part 1, we explored the idea that there’s a disconnect between what people “know” about facial recognition from movies and TV and what’s actually, really possible. Is there also a disconnect between what they know about it from when facial recognition was first implemented decades ago, when it was nowhere near as sophisticated?


No question about it. You’re exactly right. Even people that tried it 10 years ago… it was horrible. Just to give you an idea, the first system I put in was 15 years ago, in Chicago. And it was about 60% accurate. On a good day. When all the lighting was good, it was about 60% accurate. Everyone said “Oh, 60%, that’s great!”

Until they tried to use it, and realized that a facial recognition system with 40% inaccuracy was pretty much unusable, because it was of no definitive use to anyone.

The fallibility of the eye.

Now? We’re up to over 99% accuracy. Actually, CyberLink today is at 99.8% accuracy, according to NIST, the National Institute of Standards and Technology. They do this measurement every year, and we’re at 99.8%. There are five or six companies that are better than us, with accuracies of 99.81% and 99.83%, and they’re Russian and Chinese companies. You look at that and say “Okay, how much better can it get? Will it reach 99.9%? I don’t think so.” But it’s worth considering what we can do at 99.8%.

Think about your eyes and my eyes.

Think about twins.

The first time you meet identical twins, how difficult is it to tell them apart? Usually, it’s next to impossible — you have to find that little something that makes them unique before you and I as human beings with human eyes can tell the difference.

My software can tell the difference between identical twins. It will find that tiny nuance, and it knows the difference. First time.

So our technology is 99.8% accurate. It works. It works flawlessly. It works in medical facilities, and it works in nuclear plants. It works for people who are dealing with anthrax and incredibly dangerous drugs. Things you really don’t want unauthorized people getting their hands on – it’s being used for two-factor authentication there, because the people who run those facilities need to know that if you pass through that door, you belong there. You’re not going to fake it out with somebody else’s mask or pass.

“Your mission, should you choose to accept it…”

You’ve seen Mission Impossible, right? The Ethan Hunt character creates some pretty incredible masks in those movies to fool facial recognition cameras.

They wouldn’t be able to fake out our software. We have something called anti-spoofing that reports back and says “Listen, that’s a photograph. We’re not letting you in based on a photograph, we’re not letting you in based on a mask that’s even close.” They call it Mission Impossible for a reason. We’re the reason.

Our software is looking at every wrinkle on your face, every pore. Depending on the camera, I can get you incredibly good results. And when you’re talking about security, when you’re talking about securing a facility, a business, an airline, any of those things, facial recognition is better than any other form of identification you’ll ever have.

The trans debate.


“Your mission, should you choose to accept it, is to get hopelessly caught.” That’s come a long way from what people are probably thinking of. You’ve said it can spot someone correctly from a 15-year-old photograph (which makes sense for the airport use case). How does it deal with trans people?


It doesn’t matter. And of course, nor should it. That’s the coolest part about this. In the old days when we would compare photographs, African Americans, anybody with dark skin, had trouble, because there was a lot of what are called “shadows” on the face.


Facial recognition while black…


Yes. And where you have shadows on the face, it was really difficult to get the detail you needed. And so for years, all of the facial recognition was really geared towards white people. It wouldn’t work for anybody outside of that – African Americans, Asian Americans, anybody that wasn’t white.

It does now.

What’s happened is that we’ve shifted from actual images to AI mathematical models. It doesn’t matter if you’re trans, your face hasn’t changed enough to confuse our software, because we’re using those mathematical models, not any external elements that people would see. Unless the bones in your face have been changed a lot, unless you’ve had plastic surgery to modify that structure, it’s not going to matter, we’ll still recognize you.

Again, the coolest part about this is it’s all done on consent. So if you’ve had radical gender affirmation surgery, or radical plastic surgery, or even if you’ve been in a major accident, you might want to go back and re-enroll your face, so, for instance, your face matches your true name, rather than your deadname. Then the system would work based on your “new” face. But if I was looking for somebody from the past, and they hadn’t had that sort of facial work done, I could find them.

Stopping traffic.

We’re doing this a lot right now with the anti-trafficking industry, because trafficked people are checking into hotels all the time. The anti-trafficking industry would love to be able to get an alert saying “You know that woman that you’re searching for in three states? She just checked in to this hotel.”

And it would be great to be able to capture those traffickers and get the trafficked people out of those situations, so we’re working with some of those people to go through this process.


No way you’ll answer this, because there has to be confidentiality in the process, but can you tell us anything about the points that the system looks for to verify people?


Ha. Yeah, that’s our secret sauce, so I can’t give you specifics. But I’ll share this. There are eight or nine public domain facial recognition engines on the internet, you can go download and use them today. They’re trained on very similar public models. And they have maybe 100,000 faces in there that they’re trained on.

Face in a million.

Our system takes it a few steps beyond that. Our AI engineers have found some very specific things, so our software not only detects the face – I can tell you how old you are, and how you’re feeling right now: happy, sad, neutral, all of that kind of stuff. And based on how your face is presented, we can get your age within useful limits. We’re actually being used in 11 states right now to do tobacco sales and age verification at vending machines. If you’re not over 23 years old, our system will not let you buy tobacco at the vending machine, because it’s verifying the face and the age of that person, even by how they walk up to the machine.

There’s a lot going on there, and we’ve trained our software on over a million faces. We’re checking over 2700 points on a face, we recognize every inch of every crevice –


Which is why Ethan Hunt’s masks won’t work.


Exactly. If we use a very high-resolution image, we can do a one-to-a-million match on you and guarantee that it’s you.
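For contrast with the one-to-one airport check, the “one to a million” match is a one-to-many search. A hedged sketch, assuming cosine similarity over embeddings and an invented 0.6 threshold – the real matching points and thresholds are, as Terry says, secret sauce:

```python
import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray, names: list,
             threshold: float = 0.6):
    """One-to-many search: compare one probe embedding against a gallery
    of enrolled embeddings and return the best match, or None if nothing
    clears the similarity threshold."""
    # Normalize rows so plain dot products become cosine similarities.
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    sims = g @ p
    best = int(np.argmax(sims))
    return names[best] if sims[best] >= threshold else None
```

Unlike the one-to-one mode, this requires an enrolled gallery – which is why consent and enrollment come up repeatedly in this conversation.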

The future of facial recognition.


So where do we go from here with facial recognition?


Right now, it’s an edge case game. In the United States right now, I’m equipping stores, I’m equipping places where they’re starting to offer face-pay to their VIPs, people that want to be part of the program, and you’re going to see this pick up very quickly.

I’m actually equipping every bar in Australia with facial recognition technology, because in Australia, one of their biggest problems in bars is fighting. And right now, all they’re doing is checking IDs as people enter the door. They don’t know if you were there when the fight occurred; they don’t know anything. So now they’re going to be using facial recognition at the door, capturing everybody’s face when they come in and when they leave. So if they know it was 11 o’clock at night and you weren’t even there, they don’t need to bother you about this fight.

But Australia has a much higher acceptance of facial recognition than the United States. I think we’re two to three years behind most other countries.

But everybody that sees it sees the value, once you break the notion that what they see on TV is what’s real. We’re not after faces in the crowd. We’re after the people that want to be a part of the system, that understand that nothing nefarious is going to happen with their face, but that it can give them a better quality of day.

Going active.

We’ve added something this year called active facial recognition. At the stadia in Qatar, when you have 80,000 people rushing in the doors, you’re using what’s called passive facial recognition: just looking at every face and capturing every face as it comes in the door. In Illinois, in California, and in Europe, that is illegal – you’re not allowed to scan someone’s biometrics without their knowledge. So what we’ve done is we’ve added active facial recognition to our system. Now, when I walk up to the door and swipe my card, it says “Okay, Terry, is that you?” And if I bow my head, it will scan my face. Otherwise, it will not scan my face.

And it gives us the ability to help health clubs. A lot of health clubs want to use facial recognition because people forget their cards all the time. The staff don’t know if you’ve paid, but they’ll just let you in anyway. So people are walking through the door and being let in on a regular basis. They’d like to use facial recognition to check that.
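The active flow described above – explicit trigger first, consent gesture second, scan last – can be sketched as a small gate. The function name, the return strings, and the `match_fn` callback are all hypothetical stand-ins; the point is only the ordering, in which the camera never captures a face unless both the trigger and the consent gesture have happened.

```python
def active_checkin(member_id: str, gesture_confirmed: bool,
                   enrolled: dict, match_fn) -> str:
    """Opt-in ("active") facial recognition gate: scan only after an
    explicit trigger (card swipe / ID entry) plus a consent gesture,
    then do a one-to-one check against that member's enrollment."""
    if member_id not in enrolled:
        return "unknown member"
    if not gesture_confirmed:
        return "no consent - camera stays off"  # face is never scanned
    # Only now does the camera capture a face for the one-to-one check.
    return "admitted" if match_fn(enrolled[member_id]) else "denied"
```

For the health-club case, `member_id` could come from a forgotten-card fallback like a phone number, with the face check replacing the card entirely.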


Useful. A couple of ghoulish ones to finish?


Hit me.

If you want to get ahead…


Going once more back to the fiction, there’s the idea that if you move to a biometric system, say handprints or retinal scans, the criminals have to… take the hand or the eye to get through the doors they’re not supposed to get through.

Does your software differentiate between live faces and dead faces? Or, in the event of extremely dedicated terrorism, if someone took the head of the authorized person, would it fool the system?


The answer is no… but maybe so. We do have a “liveness check.” What that means is, we take six frames from a video camera – six frames out of 30 frames a second. That gives us an idea of whether you’re a living person or a fake.

Now, if I walk up with a head in my hand and carry it up to the camera, technically we will not know the difference. We don’t have any idea if you have a heart or a pulse, all we know is that it’s not a flat image, it’s not a mask or a picture of a person, it is a real face. And we do expect you to have your eyes open. So if the eyes are closed, we can’t do it, and you would not be given access.

So if you were trying to open up an ATM that uses facial recognition, and you just needed their head, you’d still have to get their eyes open, they would still have to blink. So there would be a liveness check going on, saying “They never blinked through that whole time. Let’s have them smile.” And so our system would actually prompt you to please turn your head to the right, and please stick your tongue out.

If you could perform those with a head detached? Yes, you could fool the system. But I don’t especially like your chances.
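The liveness logic described above can be sketched as a simple frame test. This is a hypothetical simplification: the per-frame scores and thresholds are invented, and it collapses the follow-up challenges (smile, turn your head, stick your tongue out) into a plain pass/fail. The idea it illustrates is that eyes must be open in most frames and at least one blink must appear, so a static photo and a never-blinking face both fail.

```python
def liveness_check(eye_open_scores, open_thresh: float = 0.5) -> bool:
    """Frame-based liveness sketch: sample six frames out of ~30 per
    second; require mostly-open eyes plus at least one blink (a dip
    below the open threshold)."""
    frames = list(eye_open_scores)[:6]
    mostly_open = sum(s > open_thresh for s in frames) >= 4
    blinked = any(s <= open_thresh for s in frames)
    return mostly_open and blinked
```

A real system would escalate to active challenges when this passive test is inconclusive, rather than denying outright.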

Schools and malls.


Hey, you sit down with a facial recognition specialist, there are some things you absolutely have to ask. You mentioned earlier that you could use the system to not only tell how old somebody is but their mood at the time?


That’s correct.

The potential of infrared cameras for the future.



Is there a way to use it to spot when people are feeling violent? Because that might be handy at school gates and mall doors and venue turnstiles, no? If you could stop anyone who was feeling extremely violent before they got in?


I get asked that question every single day. Can I tell if someone’s going to commit a crime? Can I tell if somebody’s under that much stress?

I can’t.

What I can report is whether they’re happy, sad, or neutral. We don’t have a lot of emotions in the system. Stress is one I get asked for on a regular basis. Employers tell us they’d like to know if an employee comes back to work stressed out, so maybe they can get them some help. And what they want to do with it makes perfect sense. It’s not about trying to pick people out of a crowd, but it is one of those things I get asked for on a regular basis. It’s just that we haven’t trained our models well enough to do that yet. We’re looking at it, though. We’re looking at those capabilities.


Something for the future, then?


Absolutely it is, and there are ways to tell: the way the veins in the face are arranged, things like that. There’s a lot you can tell with an infrared camera. I can tell if your temperature is rising; I can tell a lot of stress information. So I imagine we’ll be able to test for a lot of those things in four or five years.