One rite of passage for baby boomers was faking their way into a bar while underage. If you couldn’t borrow somebody else’s license that kinda looked like you (and would fool the bouncer), then the alternative was getting a fake ID. That typically meant modifying an existing ID with an X-Acto knife and a fine-tip pen. And again, there was still a bouncer to fool.
With the advent of laminating machines available on Amazon, impostors could create their own fake IDs. Between the hardware and materials, a decent ID that would get you into a bar ran somewhere around $400. But it might not get you a bank account.
Now that anything you could possibly want is a credit card and a web click away, fake IDs are easier to get than ever. And with generative artificial intelligence, the basis for a fake ID can be of the highest possible quality. They’re scary good.
In the old days, meaning twenty years ago, we sweated the hoodie-wearing hackers in their basements figuring out how to get past the firewall. They gave way to “script kiddies” who could download rootkits that let them breach even the best defenses with minimal skill. Now, with AI, any moron can do it. “Hey ChatBotGuy, tell me how to break into the bank’s site.”
Or, if you wanted to go through the front door instead of the back … “Hey ChatBotGuy, create an image of a California driver’s license using this name and address, with my high school yearbook photo as the basis.” And bingo, a perfect fake ID. Your face, but a phony identity. That can be enough to get you verified for a fintech account. So cheap, so phony.
The underground site OnlyFakes (which has since gone under) was selling very cheap, very realistic AI-generated fake IDs for 26 different countries for only $15 apiece. That’s all it cost. Select a document type, upload your headshot, and you could get yourself through the compliance checks at major crypto exchanges. Present the fake ID, then present your face, which matches the picture on that ID.
These aren’t just fake IDs. They are deeeeep fake IDs. Whether purchased or homegrown, they can look very authentic and pass standard modes of detection. Creating an identity that isn’t stolen from a real person (i.e., pretending to be someone who doesn’t actually exist) is called synthetic fraud; it’s a synthetic persona. Having a physical ID, such as a license or passport, to back up that synthetic identity is a powerful tool for fraud, all at a discount price.
It seems like we’d barely started getting excited over all the cool things AI could do when we started freaking out over the potential damage. It’s ironic that AI is being put to work fighting the very fraud it is helping to propagate. It’s like the old W.C. Fields joke: “I always keep a bottle of whiskey handy in case I’m bitten by a snake, which I also keep handy.”
Synthetic fraud, cheap and easy.
An additional wrinkle: maybe the fraudster doesn’t want to use their own face. OnlyFakes allowed customers to have a face generated for them. For that matter, generative AI lets anybody create very realistic faces that don’t belong to anyone in particular and put them on fake IDs. Then they can present that face as the match for the ID when they apply for online accounts. How can the good guys stay ahead when the bad guys have access to such cheap and evil tools?
When someone applies for an account on any site that uses document verification (a process that validates a physical ID and matches the image on it against a selfie), they supply just those two things: an ID and their face. The presumption is that if the ID is rejected, if the face is rejected, or if the two don’t match, the fraudster doesn’t get through. But with the power of AI in their grubby paws, plenty of those bad guys do make it through. And they’re doing it in a couple of ways.
First, presentation attacks, in which they present false images or fake IDs to the camera. Second, injection attacks, in which they insert fraudulent images or data directly into the validation process, bypassing the camera capture altogether.
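To make those two attack surfaces concrete, here is a minimal sketch, assuming a generic document-verification flow, of the decision logic involved. The field and function names are hypothetical illustrations, not authID’s actual API; the point is simply which check each attack type tries to subvert.

```python
from dataclasses import dataclass

@dataclass
class ChecksResult:
    document_authentic: bool  # the ID's fonts, layout, and security features check out
    selfie_live: bool         # the selfie comes from a live person, not a photo, mask, or replay
    faces_match: bool         # the portrait on the ID matches the selfie
    capture_trusted: bool     # the images came from a genuine camera capture, not an injected feed

def applicant_passes(r: ChecksResult) -> bool:
    # Presentation attacks try to fool document_authentic and selfie_live with
    # convincing fakes shown to the camera; injection attacks try to bypass the
    # camera entirely, which is what capture_trusted guards against.
    return r.document_authentic and r.selfie_live and r.faces_match and r.capture_trusted
```

If any one of those conjuncts can be faked cheaply, the whole gate fails, which is why each one needs its own defense.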
That’s where authID comes in.
Our solution looks at every aspect of an identity transaction as an opportunity. We have several chances to let a legitimate applicant through to their digital assets: validate the physical ID document, validate the live person, then match the two together. Of course, these are also opportunities to uncover fraud. And we do all of this through a multi-layered approach, sketched in code after the list below:
- authID prompts for photos of the physical document and the user
- We perform both client-side and back-end analysis of the images
- We perform this analysis within 700 milliseconds, the fastest and most accurate process on the market
- We detect deepfake images via digital signing of captured images, and we pull apart physical documents in multiple ways to determine their authenticity
- We employ NIST PAD Level 2 liveness detection (presentation attack detection) to determine whether the selfie comes from a live human
- We stop injection attacks, preventing fraudsters from inserting fake images into the process through hardware, software, or network attacks
- And all this happens with a simplified, friendly user experience that accommodates even the least tech-savvy individual
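As a rough illustration of how those layers compose, here is a sketch in the same hypothetical style as above. The stub functions and their checks are assumptions made for the example, not authID’s actual implementation; in a real deployment each stub would call a dedicated model or service.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical stubs, for illustration only.
def capture_is_trusted(metadata: Dict) -> bool:
    # Verify integrity signals gathered at capture time, so images injected via
    # virtual cameras, emulators, or network tampering are rejected.
    return bool(metadata.get("signature_valid", False))

def document_is_authentic(doc_image: bytes) -> bool:
    # Inspect fonts, layout, security features, and signs of tampering.
    return len(doc_image) > 0  # placeholder

def selfie_is_live(selfie_image: bytes) -> bool:
    # Presentation attack detection (liveness) on the selfie.
    return len(selfie_image) > 0  # placeholder

def faces_match(doc_image: bytes, selfie_image: bytes) -> bool:
    # Compare the portrait on the ID against the selfie with a face matcher.
    return bool(doc_image and selfie_image)  # placeholder

def verify_transaction(doc_image: bytes, selfie_image: bytes,
                       metadata: Dict) -> Tuple[bool, List[str]]:
    """Run the layers in order; every layer must pass for the applicant to get through."""
    layers: List[Tuple[str, Callable[[], bool]]] = [
        ("capture integrity (anti-injection)", lambda: capture_is_trusted(metadata)),
        ("document authenticity", lambda: document_is_authentic(doc_image)),
        ("selfie liveness (anti-presentation)", lambda: selfie_is_live(selfie_image)),
        ("face match", lambda: faces_match(doc_image, selfie_image)),
    ]
    failures = [name for name, check in layers if not check()]
    return (len(failures) == 0, failures)
```

The design point is that no single layer is decisive on its own: a deepfake that sails past the document check still has to beat liveness, face matching, and capture integrity.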
The techniques these bad actors use may be cheap, but the damage they do is anything but. Identity fraud losses topped $43 billion in 2022. It’s easier than ever to deepfake your way into being somebody else, and easier than ever to deepfake your way into being a person who never existed at all.
We used to need to be smarter than the bad guys. Now that even dummies can be bad guys, we have to be smarter than the tools they use. Combating cheap deepfakes requires intelligent defenses that can spot the fakes while still letting legitimate applicants through without friction. Any dummy can afford to commit fraud. The rest of us can’t afford to let them get away with it.
Give us a call at authID and find out why we are a leader in biometric authentication that stops deepfakes in their tracks to prevent fraud and account takeover, while offering the fastest, most seamless user experience.