
The need for “Know Your Customer” (KYC) checks has been rising with the growing global demand for digital services. In one of our previous articles we discussed facial recognition as one approach to KYC and how to beat facial recognition bias on African faces. In today’s article, we discuss anti-spoofing in facial and ID card verification, to prevent fraud in digital KYC.

Digital KYC lets companies trust users with their valuables, enabling remote onboarding and services such as opening bank accounts, registering mobile SIMs, ride hailing, and housing. But attackers frequently try to forge identities to bypass KYC technologies. Anti-spoofing is the ability to recognise whether the identity presented is real or fake.

In facial verification, some attacks are as simple as photos downloaded from social media, while others are far more sophisticated. Below we describe some methods attackers use to spoof facial recognition systems.

Selfie Spoof in Facial Verification

Selfie spoofing is when attackers try to forge the identity of an authentic user using a selfie. There are various ways attackers can do this; we show some of them below. Faces are partially covered for privacy reasons.

Prints (paper/magazine covers) and soft-copies (screenshots)

In this attack, attackers present a printed page or a screen showing the front-facing photo of an authentic user registered in the system they are trying to compromise. In some cases, the same attack is used in an attempt to register a fraudulent user in the system.

Soft copies and screenshots selfie spoof sample images (source: SmileIdentity)

Full and cut-out cardboards

In some cases attackers use a large portrait, such as a full-size cardboard or a cardboard cut-out, to mimic the width of a human body and spoof the system.

cardboard spoof sample images (source: SmileIdentity)

Deep Fakes

Recent advances in Artificial Intelligence, such as GANs (Generative Adversarial Networks), have made real-time face generation, swapping, and transformation possible. This has given rise to a new class of spoof attacks called deep fakes, in which attackers use a program to generate a single facial image, or a sequence of them, resembling the authentic user. Below we present a simple Obama deep fake, but the attack can get far more complex and sophisticated.

Deep fake — simple spoof (source: SmileIdentity)

Other

Besides the types of spoof discussed above, which are the ones most often used against facial verification systems, we have seen and dealt with various other spoof images including, but not limited to, deliberately blurred and darkened images, cartoons, non-faces, and pets.

Other types of spoof images (source: SmileIdentity)

ID-cards Spoof in Facial Verification

Most of the spoof attacks discussed in the previous section, such as screenshots, photocopies, blurred and darkened faces, and deep fakes, also apply to ID cards; see the image below (face and ID card information are hidden for privacy reasons). The difference is that ID cards are even harder to discern, since they are usually printed on paper or plastic. This also exposes them to unique attacks of their own, such as pasted faces, which we discuss further in this section.

Various types of ID-cards spoof images (source: SmileIdentity)

Pasted Faces

In a pasted-face spoof, attackers replace the authentic face on an ID card with a new, fake face image. These attacks are often used to illegally create multiple accounts on a system or to enrol with fake information. They range from basic photo pasting to more sophisticated face alteration. Below we show a few examples.

Pasted-face spoof images on ID-cards (source: SmileIdentity)

KYC Spoof Attacks Could be Massive

If not dealt with, forged identities and successful attacks on systems that handle customer identities can accumulate into large numbers and pose a serious threat to user trust in your system. Not long ago, Meta Inc (formerly Facebook Inc) reported that it had deactivated 1.3 billion fake accounts in a three-month period, and that figure appears to be the trend every quarter.

Fake accounts taken down by Facebook each quarter (source: Socialmediatoday)

PayPal also recently admitted that 45 million accounts were using its system illegitimately. This shows the need and urgency to apply anti-spoofing to every KYC process from the beginning, rather than waiting for cases to accumulate.

How to solve the problem?

At Smile Identity, we have invested heavily in getting the right data and, by leveraging recent advances in AI with customised, best-performing deep architectures and loss functions, we use both supervised and unsupervised anti-spoof learning to catch all the types of spoof discussed above, and more.

We use supervised anti-spoof learning to catch the known types of spoof in our datasets. This means our models are regularly retrained on labelled examples of the various spoof attacks, so that they recognise them as they try to enter the system.
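As a rough illustration (not Smile Identity’s actual model), the supervised stage can be framed as binary classification: labelled real and spoof examples are used to learn a decision boundary, and new images are scored against it. The single “artefact score” feature, the data, and the midpoint rule below are all hypothetical simplifications of what a deep network would learn:

```python
# Toy sketch of supervised anti-spoof detection as binary classification.
# The feature (a single "artefact score", e.g. from moire/print detection),
# the training data, and the midpoint rule are illustrative only.

def train_threshold(examples):
    """Learn a decision threshold from labelled (score, is_spoof) pairs.

    Picks the midpoint between the highest genuine score and the lowest
    spoof score, assuming spoofs tend to score higher on artefacts."""
    genuine = [s for s, spoof in examples if not spoof]
    spoofs = [s for s, spoof in examples if spoof]
    return (max(genuine) + min(spoofs)) / 2

def is_spoof(score, threshold):
    """Classify a new image's artefact score against the learned threshold."""
    return score > threshold

# Labelled training data: (artefact score, is_spoof)
labelled = [(0.10, False), (0.25, False), (0.80, True), (0.95, True)]
t = train_threshold(labelled)
print(is_spoof(0.90, t))  # high-artefact image -> flagged as spoof
print(is_spoof(0.15, t))  # low-artefact image -> accepted as genuine
```

In production this threshold would be one output of a trained deep network rather than a hand-derived midpoint, but the retraining loop described above (new labelled examples in, updated decision boundary out) has the same shape.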

We use unsupervised learning to quickly identify potential new types of attacks as attackers first try them against our partners. Our machine learning systems automatically learn new, suspicious spoof image patterns on their own and group them into clusters for human review. Once a suspected new type of attack is confirmed by human review, it is added to the known attack types, and the process restarts.
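A minimal sketch of this routing step, under simplifying assumptions: image embeddings close to a centroid of a known spoof pattern are assigned to that pattern, while embeddings far from every known centroid are queued for human review as a possibly novel attack. The pattern names, 2-D embeddings, and distance cutoff are hypothetical:

```python
import math

# Toy sketch of the unsupervised routing stage. Centroids of known spoof
# patterns, the novelty threshold, and the 2-D embeddings are illustrative.

KNOWN_CENTROIDS = {
    "print_attack": (0.0, 0.0),
    "screen_replay": (5.0, 5.0),
}
NOVELTY_THRESHOLD = 2.0  # hypothetical distance cutoff

def route(embedding):
    """Return the nearest known spoof pattern, or 'human_review' if the
    embedding is too far from every known centroid (a suspected new attack)."""
    name, d = min(
        ((n, math.dist(embedding, c)) for n, c in KNOWN_CENTROIDS.items()),
        key=lambda pair: pair[1],
    )
    return name if d <= NOVELTY_THRESHOLD else "human_review"

print(route((0.5, 0.2)))   # near the print_attack centroid
print(route((9.0, -3.0)))  # far from all centroids -> queued for review
```

Once a reviewer confirms a queued cluster as a genuine new attack type, its centroid would be added to the known set, closing the loop described above.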

Using our algorithms, we have prevented many sophisticated spoof attacks across our partners in Africa. Below we present the percentage of spoof caught, as a share of total usage of our end-to-end KYC system, keeping our partners safe.

Percentage of fraud caught per total number of KYC requests — 2021 (source: SmileIdentity)

Conclusion

Facial verification, in the sense of face similarity matching, is not enough for KYC; it must be backed by a strong anti-spoof system to ensure customer safety. We have presented the types of spoof attacks most likely to be used against facial verification systems, using real spoof attack examples, which highlights how easily unprotected KYC systems can be attacked and bypassed.

We used facial verification as a use case, but the same scenarios apply to other types of biometric verification. Systems without a proper anti-spoof process are likely to accumulate massive fraud, which can lead to criminal activity, damage a company’s reputation, and cost significant resources to clean up at a later stage.