Over the last two years of the Covid-19 pandemic, the convenience of society’s digital transformation has unfortunately been matched by a rise in online fraud.
With password and data breaches becoming more common, many financial apps have turned to biometric face checks, matching consumers’ selfies against IDs or previously enrolled photos to ensure that only legitimate users are signing up for accounts or accessing services online.
The rise of selfies in the mid-2010s and Apple’s introduction of Face ID in late 2017 made consumers comfortable with face recognition as an everyday technology.
Technology previously limited to high-tech security services and science fiction movies was suddenly available in everyone’s pocket. But the rapid adoption of face recognition has also produced new kinds of fraud: cybercriminals are getting increasingly good at defeating it.
In the simplest version of the attack, the criminal presents a target’s photo during a face check, fooling the app into unlocking access to the victim’s accounts and personal data.
Face recognition tech is vulnerable to abuse if it cannot ensure that a live person is in front of the camera. In this blog, we highlight the alarming rise in biometric crime, the emerging threat of “face spoofing”, and the role that “liveness detection” plays as an effective countermeasure.
Face Recognition: From Novelty to Commodity
Face recognition is now everywhere — in our laptops, our phones, at work, and at home. The technology has matured from niche status into a worldwide commodity in the span of a decade.
Driven by consumers’ expectations of convenience in their increasingly connected digital lives, global demand for biometric technology like face recognition continues to grow rapidly.
Source: Grandview Research Biometrics Technology Report
In some parts of the world, biometrics has grown beyond convenience into a necessity. In Africa, users can perform critical financial services like banking through mobile apps that feature biometric authentication. In the US, the Internal Revenue Service recently explored requiring a face check before allowing taxpayers to access their online returns. And India has adopted biometrics nationally, giving all 1.3 billion of its citizens access to government services like voter registration with a fingerprint or a face.
So What’s The Problem?
This aggressive expansion of biometrics has fueled global concerns over surveillance, data privacy, and ethical use. As the laws surrounding biometrics mature, we will see different regulatory approaches develop in different parts of the world. These differences aside, the pervasiveness and wide applicability of face recognition has made it a popular target for hackers and criminals everywhere.
So, how easy is it to break vulnerable face recognition systems? Let’s look at a few of the clever techniques criminals have devised to fool face checking on devices and in apps.
Here, you see a device about to be unlocked with an image of the owner’s face presented from another device:
Here, someone is unlocking a device using a printout of the owner’s face:
Source: ICCV Antispoof Challenge
All of these so-called “cheap fakes” pale in sophistication next to the latest breed of spoofs, “deep fakes”: convincing forgeries produced by specially trained AI. Generating high-quality deep fakes is still beyond the means of most ordinary cybercriminals, but well within the reach of state-sponsored criminal organizations. Although there have been concerns that a well-timed deep fake could spark conflict or even war, the fakes observed in 2021 were predominantly cheap ones.
So, how does a criminal acquire an image of someone’s face? Besides searching popular sites like Google or LinkedIn, criminals can mine images from social media posts. If they can’t locate any on public sites, they can purchase images of their victims’ faces on the dark web, whose data markets specialize in the illegal trade of private information, often extracted from hacked databases. In one of the worst data breaches in recent years, criminals exposed 28 million records of private biometric data, including fingerprints and photos of faces.
Liveness Detection To The Rescue
Clearly, apps that leverage face recognition must also ensure that a real live person is present during a face check. Failure to do this leaves these apps vulnerable to face spoofing and subsequent fraud. Liveness algorithms complement and augment face recognition, resulting in robust face checks that users and app developers can trust.
So, how do these algorithms work? Many apps that leverage liveness detection capture a short video of the user during the face checking process. Alternatively, some apps grab a quick succession of shots while the user performs a specific motion or gesture in front of the camera. The liveness detection algorithm then analyzes the images or video with specialized computer vision, establishing with high confidence that a real human was present during capture rather than an artificial reproduction.
Some algorithms also check for a “replay attack”: a video recording of a real user played back from another device held in front of the camera. Tell-tale signs embedded in the captured video or images reveal when a criminal has presented a photo of a face or a pre-recorded video.
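As an illustration of the kind of tell-tale sign such checks look for (this is a toy example, not SmileIdentity’s actual method): a screen replay often superimposes a faint periodic moiré grid from the display’s pixel matrix, which shows up as sharp, isolated peaks in the image’s frequency spectrum. A minimal sketch with NumPy, assuming a grayscale face crop:

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Peakiness of the mid/high-frequency band of a grayscale image.

    Periodic patterns from a display's pixel grid concentrate energy in a
    few sharp spectral peaks, so a high max-to-mean ratio in this band is
    a cue that the camera is looking at a screen, not a live face.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    lo, hi = 0.15 * min(h, w), 0.45 * min(h, w)
    band = spec[(radius > lo) & (radius < hi)]   # ignore DC and very low freqs
    return float(band.max() / (band.mean() + 1e-9))
```

A “live” capture yields a relatively flat band and a low score, while a replay with a superimposed grid yields a score orders of magnitude higher; a real system would calibrate the decision threshold on labeled data.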
A physical mask, even if it passes a human motion or gesture check, leaves evidence of its synthetic nature across the captured image’s color channels.
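Again purely as an illustration (not the production check): printed paper and uniform mask surfaces tend to compress the natural chromatic variation of live skin, so unusually low variance in the normalized red-green chroma plane is one cheap spoof cue. A sketch assuming an RGB face patch as a NumPy array:

```python
import numpy as np

def chroma_variance(rgb: np.ndarray) -> float:
    """Variance of normalized (r, g) chromaticity across a face patch.

    Dividing each channel by the pixel's total intensity removes
    brightness, leaving only color. Live skin under ambient light shows
    richer chroma variation than a flat print-out, whose chromaticity is
    nearly constant across the patch.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-9
    chroma = rgb[..., :2] / total            # normalized (r, g) per pixel
    return float(chroma.var(axis=(0, 1)).sum())
```

A patch whose color barely varies (one ink tone at different brightnesses) scores near zero, while a patch with genuine color variation scores higher; as with the spectral cue, the threshold would be learned from labeled spoof data.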
The best algorithms implement multiple levels of defense, and pick up on the subtle clues left behind by the spoof attacks.
At SmileIdentity, we have been working on face recognition and identity management solutions since 2016. We have watched biometric crime grow rapidly in sophistication, and we have continuously advanced our custom liveness checking technology to stay ahead of the ongoing threat of face spoofing.
Our liveness check technology is called SmartSelfie™, and it’s powered by six AI-based anti-spoof models. SmartSelfie™ assures our customers that criminals cannot use fake or stolen face images or videos to commit fraud against their victims.
We constantly measure ourselves against our own privately curated database of face spoofs similar to the ones shown above, including selfie photos or printed posters. In our last evaluation, SmartSelfie™ detected spoof attempts with an accuracy of 98.8%.
And, we continually train our algorithms. As new data arrives, our human review team flags new spoof attempts, then we incrementally update our AI-based anti-spoof models. This ongoing learning and re-training keeps our service up to date with the latest types of spoof attacks as they start to surface.
In addition, we have optimized our face verification algorithm by training it on the ethnic variety of our user base in Africa. In doing so, we virtually eliminate face match failures that other systems would likely interpret as fraud. At 99.8% accuracy, we outperform Amazon Rekognition’s 96%, measured on a public dataset comprising a balanced set of Black and white faces.
Without a guarantee of proper liveness checks, such as SmileIdentity’s SmartSelfie™, an app’s face checks are vulnerable to face spoofing. If you are an app developer looking for an identity management solution that implements face verification with state-of-the-art liveness detection, please visit our website for more information, or reach out to our sales team for a demo.
In addition to face spoofing, we have seen firsthand the rise of other types of criminal activity. For example, we’ve noticed an uptick in fraudsters trying to use the same identity information to sign up for multiple accounts at once. In our next blog, we will describe how we use a technique called “deduplication” to thwart this kind of costly fraud.