It’s said that Helen of Troy was so beautiful that her husband dispatched an entire fleet to retrieve her when she was spirited away by Paris, thus starting the Trojan War. That’s a pretty powerful image: a human face that launched an entire armada of proxies to accomplish a task.
Non-human accounts have been around for a long time. They drive printers, execute database queries, and move data around. I used to design solutions that created and enforced restrictions on these kinds of processes to keep them from being compromised by bad actors.
Now AI agents are becoming ubiquitous, launched by human users under whose authority they operate. But when human identities are compromised, the agents they launch can be compromised as well. Somebody impersonating Joe can take advantage of Joe’s AI bots. These virtual agents act as proxies for their human initiators, performing tasks, boosting productivity, and operating in the background. And while they’re phenomenal productivity accelerators, they’re clearly also risks.
In an exercise that’s being called “vibe hacking,” a bad actor used an AI chatbot to research, infiltrate, and extort 17 different organizations. The agent was used to find paths into victims’ systems, then further leveraged to create malware that identified and stole sensitive data, calculate ransom demands relative to each organization’s financial posture, and generate extortion demands specific to each company.
These agents are bound to their human masters. But what binds the user to an absolute definition of identity, ensuring that they are who they say they are, and that only they can kick off these agents?
Usernames and passwords are not technically “bound” to a user’s identity. They are simply strings that can be stolen, and often are. But what is bound to that identity is a proof of life, in the form of a facial biometric. It’s vital to ensure that only a user’s face can speak for them: that no spoof, replay, or deepfake can claim that identity and then launch those agents for nefarious purposes.
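Conceptually, this kind of gate requires two signals before anything launches: the presented face must match the enrolled identity, and it must pass a liveness (presentation-attack) check. The sketch below is a hypothetical illustration of that logic only; the names, fields, and thresholds are assumptions, not authID’s actual API or values.

```python
from dataclasses import dataclass

@dataclass
class FaceCheck:
    """Hypothetical verification result; real biometric platforms
    return richer signals than these two scores."""
    match_score: float     # similarity between presented face and enrolled template
    liveness_score: float  # confidence the face is live, not a photo, replay, or deepfake

# Illustrative thresholds, not vendor-specified values.
MATCH_THRESHOLD = 0.95
LIVENESS_THRESHOLD = 0.98

def may_launch_agent(check: FaceCheck) -> bool:
    """Allow an agent launch only when the face both matches the enrolled
    identity and passes the liveness check; a stolen password supplies neither."""
    return (check.match_score >= MATCH_THRESHOLD
            and check.liveness_score >= LIVENESS_THRESHOLD)
```

A high match score alone is not enough: a replayed video or deepfake of the right face still fails the liveness half of the check.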
authID’s IdX solution empowers organizations to establish user accounts through facial biometrics, place them in a central registry along with their AI agents, and bind the two together with immutable keys. IdX already locks down accounts for employees, contractors, vendors, and third parties across the enterprise and the many apps they need to do their jobs as they navigate the larger organization. Now it can do the same for AI proxies, which are really just extensions of those human identities.
Just like the human users themselves, AI agents must be issued verifiable credentials within the IdX registry. When it’s time for them to be used, they must be invoked by a biometrically verified user who also exists in the registry. This invocation not only invests them with the authority to act, but also provides an audit trail of who launched them. There is no true out-of-the-box (OOTB) way to do this. Until now.
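The flow described above, verified users, agents bound to them, and an invocation record, can be sketched as a minimal registry. Everything here (class name, methods, the `face_verified` flag) is a hypothetical illustration of the pattern, not IdX’s actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentRegistry:
    """Hypothetical central registry: verified users, the agents bound
    to them, and an append-only audit trail of invocations."""
    users: set = field(default_factory=set)     # biometrically verified user IDs
    agents: dict = field(default_factory=dict)  # agent_id -> owning user_id
    audit_log: list = field(default_factory=list)

    def enroll_user(self, user_id: str) -> None:
        # In a real system this would follow facial-biometric enrollment.
        self.users.add(user_id)

    def bind_agent(self, agent_id: str, user_id: str) -> None:
        # An agent credential can only be issued to a verified human owner.
        if user_id not in self.users:
            raise PermissionError("owner is not a verified user in the registry")
        self.agents[agent_id] = user_id

    def invoke(self, agent_id: str, user_id: str, face_verified: bool) -> None:
        # The agent acts only when its bound owner passes a live face check,
        # and every invocation leaves a record tracing the agent to its human.
        if not face_verified or self.agents.get(agent_id) != user_id:
            raise PermissionError("agent may only be launched by its verified owner")
        self.audit_log.append({"agent": agent_id, "user": user_id, "ts": time.time()})
```

A spoofed session (`face_verified=False`) or a non-owner caller is refused, while every legitimate launch appends an audit record that traces the agent’s actions back to the human who invoked it.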
These agents can’t be compromised, because their humans can’t be compromised. We bind these agents to the humans we verify. And because of that binding, we also have an audit trail, and can trace any AI actions back to their original owners. Total AI agent security and accountability.
With authID’s biometric platform, your face can launch a ton of productive work, including through your digital stand-ins. What we do is make sure that only your face, your live, present face, is the one doing it.
We’re authid.ai. We are the authentication of id for AI. Talk to us about the future of your AI efforts, and keeping them safe through biometrics.