The Threat Of AI Is Now, Not Later, And How To Fight It
Many years ago, Steve Martin did a comedy bit on Saturday Night Live in which he joked that people thought he was crazy for believing robots were stealing his luggage. All these years later, he’s almost right. Robots are stealing a whole lot of stuff. Maybe not luggage, but enough money to buy all new luggage.
Robots, you say? Not stainless steel ones, but digital. Generative Artificial Intelligence is the most incredible productivity tool in the world – for criminals. Before we could even enjoy the idea of commonly available AI for five minutes, we were already panicking over all the damage it can do. Fake videos showing politicians and celebrities saying things they didn’t really say. Fake porn of famous singers. Large numbers of phony entities delivering misinformation in an attempt to influence our elections. And of course fraudsters.
AI is too smart for our own good. We already had a massive fraud problem perpetrated by script kiddies and more sophisticated bad actors, and AI super-powers them. AI can generate and act on lists of the most common passwords, exploiting people’s inherent laziness in creating strong credentials. AI can inundate citizens with prompt-bombing, phishing, and smishing attacks. AI can even hold convincing conversations over text. Fraud was already an industrial-scale business, but now a single hoodie-wearing crook can achieve industrial scale all by himself.
We log into our banks, social apps, and other services with passwords and devices. That means leveraging stuff we know and stuff we have. But bad guys can discover what we know, and they can steal what we have. Impersonating consumers this way, not breaking into networks, accounts for 75% of all fraud. US companies are on track to suffer $66 billion in losses by 2028, and we’re already two-thirds of the way there. This is all because criminals get past common defenses with fake IDs and fake selves.
So how do we present our identities and credentials in a way that cannot be repudiated, duplicated, or otherwise compromised? With ourselves. Biometrics: a personal signal based on your very carbon self. Leading identity consulting firms are insisting that biometrics, in conjunction with physical ID documents, be part of any verification scheme going forward. But AI is already doing its worst to muck that up as well. The same deepfakes meant to embarrass, misinform, or entertain people with poor taste can also be used to break into people’s accounts. Deepfake faces and voices can impersonate people convincingly enough to steal their stuff, while AI-generated identity documents, in the form of phony licenses and other docs, are increasingly difficult to detect.
So let’s ask again: how do we present ourselves so that the system on the other side knows it’s really us? The document alone isn’t enough; a laminated fake ID can fool even the most experienced club bouncer. Then comes the biometric signal. Voice is a tough one: it’s just anonymous, faceless sound on the other end of a microphone. And Apple’s decision to deprecate Touch ID takes fingerprints off the table in many cases.
But a facial biometric requires two things: a legit face and a delivery mechanism. As a bad robot, I can deliver a voice just by making noise. But I deliver a face by… oh. That fake face has to be injected, and this is where a solid biometric verification and authentication platform combats evil AI.
This is where authID crushes fraud. First, that physical ID. No matter how good a fake ID is, there are any number of security markers that can be examined for authenticity. We can tell whether the barcode is legit even before it’s decoded, and once it is, it provides us even more evidence to evaluate. We determine, in automated fashion and in milliseconds, whether the document was actually issued by the DMV or other authority it claims to be from. And we apply liveness detection to ensure it’s a live ID being presented by a live person, not a picture, a printout, a screen, or a robot. Once we know the ID is good, we can also validate the data we pull from it.
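To make that concrete, here is a minimal sketch of what a multi-signal document check might look like. The signal names, fields, and decision logic are illustrative assumptions for this post, not authID’s actual pipeline; the design point is simply that every independent signal must pass before the decoded data is trusted.

    from dataclasses import dataclass

    @dataclass
    class DocumentEvidence:
        """Hypothetical signals extracted from a submitted ID document."""
        barcode_structurally_valid: bool  # barcode layout matches the issuer's spec
        decoded_fields: dict              # data pulled from the decoded barcode
        issuer_confirmed: bool            # traced to the DMV/authority it claims
        presentation_live: bool           # a physical ID, not a photo, printout, or screen

    def verify_document(evidence: DocumentEvidence) -> tuple[bool, list[str]]:
        """Accept the document only when every independent signal passes."""
        failures = []
        if not evidence.barcode_structurally_valid:
            failures.append("barcode fails structural checks")
        if not evidence.issuer_confirmed:
            failures.append("claimed issuer could not be confirmed")
        if not evidence.presentation_live:
            failures.append("presentation attack suspected (photo/printout/screen)")
        # Decoded data is only trusted once the carrier itself checks out.
        if not failures and not evidence.decoded_fields.get("document_number"):
            failures.append("decoded data is incomplete")
        return (len(failures) == 0, failures)

    # A printout of a genuine license decodes fine but fails document liveness.
    ok, reasons = verify_document(DocumentEvidence(
        barcode_structurally_valid=True,
        decoded_fields={"document_number": "D1234567"},
        issuer_confirmed=True,
        presentation_live=False,
    ))
    print(ok, reasons)  # False ['presentation attack suspected (photo/printout/screen)']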
Then the selfie. That face needs to be verified, and the way it shows up needs to be verified as well. We again employ industry-leading liveness detection to ensure that the face being delivered comes from a real, live person. Not a picture, not a screen, not some shmoe wearing a mask. AI faces aren’t 3D, so they can only show up digitally. We catch the fact that a face isn’t a face at all but a flat, fake digital representation, and give the fraudster the boot in favor of all those legitimate consumers who just want to bank, interact, vote, communicate, or do whatever business they need to do. That’s what true liveness is all about.
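For illustration, a liveness decision can be thought of as fusing independent signals, any one of which can veto. The signal names and thresholds below are hypothetical assumptions, not authID’s model.

    def face_is_live(depth_score: float,
                     screen_replay_score: float,
                     feed_injection_detected: bool,
                     depth_threshold: float = 0.9) -> bool:
        """Hypothetical fusion of liveness signals; any single red flag vetoes.

        depth_score:             3D structure in the face (flat fakes score low)
        screen_replay_score:     moire/refresh artifacts typical of a replayed screen
        feed_injection_detected: evidence of a virtual camera or tampered video feed
        """
        if feed_injection_detected:   # the deepfake's only way in is injection
            return False
        if screen_replay_score > 0.1:  # a phone or monitor held up to the camera
            return False
        return depth_score >= depth_threshold  # real faces have real depth

    # A flat, digitally injected face is vetoed immediately.
    print(face_is_live(depth_score=0.2, screen_replay_score=0.0,
                       feed_injection_detected=True))  # False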
Finally, we match the face from that ID to the live person we verified through liveness. Only when our platform determines that the ID is good, the person is good, and they’re good together do we give the transaction a gold star. We’re providing our customers the assurance that their users are the real deal, not AI-generated fakes. Because in the end, you deflect the fraud so you can build your business with the good ones: the users you can confidently onboard because they’ve been verified as bona fide.
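That final gate is a simple conjunction, sketched below with a made-up match threshold; real systems tune these numbers carefully.

    def approve_onboarding(document_ok: bool,
                           face_live: bool,
                           match_score: float,
                           match_threshold: float = 0.98) -> bool:
        """Gold star only when all three hold: the ID is good, the person is
        live, and the live face matches the face on the ID."""
        return document_ok and face_live and match_score >= match_threshold

    print(approve_onboarding(True, True, 0.994))   # True: verified as bona fide
    print(approve_onboarding(True, False, 0.994))  # False: liveness veto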
AI robots don’t pass liveness (or the other checks we perform) because robots aren’t live. They’re not us. So they can’t steal our stuff.
Give us a ring, so our friendly faces can help you use your own friendly face to protect your most precious digital assets. authID has the right stuff to make sure you keep your stuff.