It’s a staple of the Mission Impossible franchise – Tom Cruise and his crew use advanced latex 3D face masks to impersonate someone else and enter a building, vault, or other restricted area. It used to look futuristic, truly the stuff of Hollywood movies.
Tom and his crew were giving us all a glimpse of what an impersonation attack is. Putting on a face mask to pose as another individual, commonly referred to as “spoofing,” is an example of a presentation attack: an attempt to imitate the unique biometrics of a victim (such as their face, fingerprint, or iris) in order to defeat the intended policy of the biometric system.
Presentation attacks are a growing problem. Why? Long before the COVID-19 pandemic, many institutions started moving to a “digital-first” approach to onboard their new customers remotely. No longer relying on outdated and vulnerable methods such as knowledge-based verification, they increasingly asked their customers to upload a picture of their photo ID plus a “selfie” to prove their identity. Comparing and matching those pictures of the face gives the institution confidence that the person taking the picture is indeed the legitimate applicant signing up.
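Under the hood, this kind of photo-ID/selfie matching typically reduces to comparing face embeddings: a recognition model turns each face image into a numeric vector, and two vectors that point in nearly the same direction are taken to be the same person. As a minimal sketch (the embedding vectors and the 0.6 threshold here are illustrative assumptions, not any particular vendor’s values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def faces_match(id_embedding, selfie_embedding, threshold=0.6):
    """Declare a match when the two embeddings are close enough.

    The threshold is a hypothetical value; production systems tune it
    against a target false-accept / false-reject trade-off.
    """
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# The vectors below stand in for the output of a face-recognition model.
same_person = faces_match([1.0, 0.2], [0.9, 0.3])        # True
different_people = faces_match([1.0, 0.0], [0.0, 1.0])   # False
```

Note that this check only asks “are these the same face?” – nothing in it verifies that the selfie was captured from a live person, which is exactly the gap presentation attacks exploit.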
But, to paraphrase another movie classic, fraudsters evolved as well, rendering comparison between the photo ID and “selfie” insufficient. Cyber criminals became interested in stealing “selfies” so they could combine them with other personal information and fraudulently open up bank accounts or take out loans. These pictures are out there. For example, the biometrics giant Suprema’s breach exposed millions of pictures of people in combination with their facial recognition data, and the 2019 Customs and Border Protection ransomware attack exposed the facial images of 184,000 travelers. These breaches are particularly worrisome as it’s easy to change a stolen username and password but impossible to change your face. It’s as if you put all this time and effort into building this huge fortress to protect yourself but then your parents lose the spare key you gave them, with your address on the key ring. Thanks, Mom and Dad.
Making matters worse, breaching biometrics databases is not the only method criminals use to launch presentation attacks. In their 2020 Fraud Report, Onfido warns of fast-developing biometrics fraud, including the use of 2D and 3D masks and deepfakes to imitate a victim. A recent study found that even state-of-the-art facial recognition systems can be tricked by advanced deepfakes 85 to 95% of the time. Experian expects to see more “Frankenstein faces,” which use AI to combine different people’s facial features into new synthetic identities.
Given these rapidly evolving biometric fraud threats, remote onboarding needs to change as well. With presentation attacks on the rise, financial institutions, companies, and government agencies can no longer rely on biometric comparison alone, matching a photo ID against a “selfie.” Robust liveness detection measures need to be put in place. Liveness detection technology can tell if the person signing up is present and alive – and not a static picture, mask or deepfake. How does it do this? Different techniques are used, ranging from asking the user to blink to prove they are alive, to complex algorithms that assess whether an image comes from a live face rather than a reproduction. Liveness detection is also getting smarter to match the fraudsters’ evolution, increasingly drawing on machine learning.
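The blink check mentioned above is often implemented with the eye aspect ratio (EAR): six landmarks around each eye are tracked across video frames, and a blink shows up as the ratio of vertical to horizontal eye distances briefly collapsing. A minimal sketch, assuming the landmarks come from some upstream face-landmark detector (the 0.2 threshold and frame counts are illustrative assumptions):

```python
import math

def eye_aspect_ratio(eye):
    """Compute EAR from six (x, y) eye landmarks, ordered p1..p6
    with p1/p4 at the eye corners and p2,p3/p5,p6 on the lids:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Open eyes give EAR around 0.3; closed eyes drop near 0.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def detect_blink(ear_series, threshold=0.2, min_frames=2):
    """Treat a blink as EAR dipping below the threshold for a few
    consecutive frames and then recovering. A printed photo held up
    to the camera never produces this dip-and-recover pattern.
    """
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        elif run >= min_frames:
            return True
        else:
            run = 0
    return False

# Per-frame EAR values from a short video clip (synthetic example):
blinked = detect_blink([0.31, 0.30, 0.08, 0.07, 0.29])   # True
static_photo = detect_blink([0.31, 0.30, 0.31, 0.30])    # False
```

This is only one signal among many; modern liveness systems combine such active challenges with passive cues (texture, reflections, depth) precisely because a sufficiently good deepfake can simulate a blink.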
No longer confined to Hollywood movies, presentation attacks are becoming ubiquitous (as are Mission Impossible films). To keep up with the threat, existing facial recognition systems need to be bolstered with advanced techniques such as liveness detection, so they can continue to protect users and their trust.