
Ethical Biometrics High Road Should Be Easy, Yet Many Identity Service Providers Struggle

By Tripp Smith, President & CTO, authID

When combined, artificial intelligence and biometric authentication (“biometric AI”) create a powerful tool for establishing and protecting identity. Biometric data, analyzed and processed using AI, holds great promise for creating a more secure digital ecosystem. In contrast to passwords, biometrics are unique and non-transferable, and biometric identity authentication is more convenient, making it ideal for securing accounts, protecting companies, and promoting privacy and cybersecurity.

There are ethical considerations regarding biometric AI, however. Using individuals’ biometric data fairly and ethically should be straightforward, yet several companies are in the spotlight for questionable practices.

ID.me, Clearview AI, and Onfido have run into legal and political trouble for their approaches to using biometric data. As these companies’ actions draw the attention of policymakers and the courts, the scrutiny will hopefully lead to better policies for the ethical deployment of biometrics, so that the technology protects rather than exploits consumers.

Potential ID.me Investigation

In May 2022, Senators Ron Wyden, Cory Booker, Edward Markey, and Alex Padilla urged the Federal Trade Commission to investigate ID.me [1] for allegedly misleading the government about its facial recognition service. The senators said that ID.me appeared to mislead consumers about how the company used their biometric data, and that ID.me’s statements regarding the use and storage of consumer biometric data may have influenced officials at state and federal agencies in choosing an ID verification provider for government services.

Since early 2021, ID.me had claimed in blog posts and white papers that it used only “one-to-one” facial recognition technology, according to the senators. In January 2022, after the backlash against the IRS’s plans to require facial recognition through ID.me, the company’s CEO published a statement emphasizing again that ID.me “does not use 1:many [one-to-many] facial recognition,” calling it “problematic” and “tied to surveillance applications” [2]. Two days later, ID.me’s CEO admitted in a LinkedIn post that ID.me uses one-to-many facial recognition as part of its identity verification process [3].
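The technical distinction the CEO was drawing matters: in one-to-one (1:1) verification, a newly captured face is compared only against the single template enrolled for the identity the user claims, while one-to-many (1:N) identification searches that capture against an entire gallery of people. As a minimal sketch of the difference, assuming a hypothetical embedding model, cosine-similarity matching, and an arbitrary threshold (none of which describe any particular vendor’s system), the two modes might look like this:

    import numpy as np
    from typing import Dict, Optional

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face embeddings; higher means more alike."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_1_to_1(probe: np.ndarray, enrolled: np.ndarray,
                      threshold: float = 0.8) -> bool:
        """1:1 verification: compare the capture only against the single
        template enrolled for the claimed identity."""
        return cosine_similarity(probe, enrolled) >= threshold

    def identify_1_to_n(probe: np.ndarray, gallery: Dict[str, np.ndarray],
                        threshold: float = 0.8) -> Optional[str]:
        """1:N identification: search the capture against every identity in
        a gallery -- the surveillance-style use discussed above."""
        best_id, best_score = None, threshold
        for identity, template in gallery.items():
            score = cosine_similarity(probe, template)
            if score >= best_score:
                best_id, best_score = identity, score
        return best_id

    # Toy example: random vectors stand in for embeddings from a real face model.
    rng = np.random.default_rng(0)
    alice = rng.normal(size=128)
    probe = alice + rng.normal(scale=0.05, size=128)   # a fresh capture of "alice"
    gallery = {"alice": alice, "bob": rng.normal(size=128)}

    print(verify_1_to_1(probe, alice))       # True: matches the one claimed identity
    print(identify_1_to_n(probe, gallery))   # "alice": found by searching everyone

The practical difference is that 1:1 verification needs no shared gallery at all, which is why the 1:many mode is the one ID.me’s own statement described as “tied to surveillance applications.”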

Reports have since emerged that ID.me struggled with security lapses, lax verification practices, and staffing issues as it scaled operations after securing dozens of contracts with the IRS, Social Security Administration, and state unemployment agencies [4].

Clearview’s Lack of Consent

The ACLU has described the data-capture methods of biometric surveillance company Clearview AI as a “Privacy Nightmare Scenario” [5]. The New York Times reported that the company amassed a database of billions of photos by scraping the internet, including Facebook, YouTube, Twitter, and Venmo. According to the ACLU, it did so without users’ knowledge and offered paid access to the database to private and government entities worldwide.

The company’s primary business model was “capturing untold quantities of biometric data for purposes of surveillance and tracking without notice to the individuals affected, much less their consent” [6], according to the ACLU. Clearview AI settled with the ACLU and agreed not to sell its database of 20 billion facial photos to private entities, but it may still do business with federal and state agencies [7]. In May 2022, Clearview announced a new product called “Clearview Consent” [4], presumably as an alternative to its products built on biometric data collected without consent.

A study by the National Institute of Standards and Technology found that one-to-many facial recognition technology like Clearview AI’s also has trouble distinguishing Black and Asian faces, leading to biased results [8]. In 2019, a New Jersey man spent 10 days in jail after Clearview AI facial recognition software wrongly identified him as the suspect in an aggravated shoplifting incident [9].

Onfido on the Hook

Onfido, an identity verification provider that uses facial biometrics derived from photos to match users against their government-issued IDs, is another company that has struggled to navigate the law around ethical biometric AI. In April 2022, a federal court denied Onfido’s motion to dismiss a complaint under the Illinois Biometric Information Privacy Act (“BIPA”), in which the plaintiff alleged he was not informed that Onfido would keep his photograph in a database [10].

Passed in 2008, BIPA requires biometric providers to inform the user what information is collected and how it will be used, and to obtain consent. Onfido claimed that “neither photographs nor information derived from photographs is covered” by BIPA and that the data it collected could not be a “scan of face geometry” because it involved only “a scan of a photograph of [the plaintiff’s] face.” The court found that facial scans derived from photographs can qualify as biometric identifiers under BIPA, and the litigation is still pending.

Illinois’ BIPA provides a template for lawmakers seeking to set guidelines for how biometric technology should be developed, deployed, and used. BIPA originated after a company that let customers pay in stores with their fingerprints went bankrupt and considered selling its fingerprint database. The law was a response to the use of biometrics without informed consent, and it allows individuals to take action and hold companies accountable when their privacy rights are violated.

What Does Ethical AI Look Like?

Rapid digital transformation has led to rampant identity fraud and account takeovers. It is imperative that we give employers and customer-focused enterprises the ability to secure their users’ identities and accounts using the latest, most advanced technologies like biometric AI. But this should not come at the expense of privacy, nor should it unfairly disadvantage portions of the population.

Ethical biometric AI is a core pillar of authID’s values and standards of practice. We built our foundation for ethical AI on a few guiding principles:

– Privacy-First Authentication. We obtain and use biometrics only with explicit, informed consent and only in the context in which that consent was provided.

– Your Biometrics Belong to You. We see biometrics as a powerful tool for the user to protect and safeguard their identity and only their identity with a specific organization or enterprise. To use a simplified metaphor, you allow us to use your car keys to unlock your car, when you ask us to; we don’t find a set of keys and try to unlock every car in the parking lot.

– Biometrics Should Be Inclusive and Equitable. We use precise, quantifiable controls to provide the same excellent experience to all users regardless of their age, gender, skin tone, or other characteristics.

– Opt-in, Not Opt-out. Biometrics are a powerful tool for frictionlessly securing your identity. Opt-in flows within our product educate the user on the benefits and limits of our biometrics. In addition, our Dynamic Enrollment feature enables users to select whether they want to use biometrics or an alternate authentication method, as sketched below.
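To make the opt-in principle concrete, here is a minimal, hypothetical sketch (not authID’s actual product code or API) of a consent-gated choice between biometric and alternate authentication; the names and fields are assumptions for illustration only:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConsentRecord:
        granted: bool   # explicit, informed opt-in from the user
        purpose: str    # the specific context the consent was given for

    def choose_auth_method(consent: Optional[ConsentRecord], purpose: str) -> str:
        """Hypothetical opt-in gate: biometrics are used only when consent
        exists, was granted, and matches the current purpose; otherwise the
        user keeps a non-biometric alternative."""
        if consent and consent.granted and consent.purpose == purpose:
            return "biometric_authentication"
        return "alternate_authentication"   # e.g. a passkey or one-time code

    # A user who opted in for account login, and one who never opted in.
    print(choose_auth_method(ConsentRecord(True, "account_login"), "account_login"))
    print(choose_auth_method(None, "account_login"))

The point of the sketch is simply that the default path is the non-biometric one; biometric authentication is reached only through an explicit, purpose-bound opt-in.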

Biometric AI is an extremely effective way to establish and protect consumers’ identities and foster a more secure digital landscape. This is why it is important to put legal and policy guidelines around using biometric AI. Companies looking to enhance their digital security should choose a biometric provider that designs its products to ensure that all users are treated ethically and fairly.

_________________________________________________________________________________________________________

[1] https://www.wyden.senate.gov/news/press-releases/wyden-colleagues-urge-ftc-to-investigate-idme-for-deceptive-statements-about-facial-recognition

[2] https://finance.yahoo.com/news/id-comments-adherence-federal-rules-120000401.html

[3] https://www.linkedin.com/feed/update/urn:li:activity:6892131524746326016/

[4] https://www.businessinsider.com/id-me-customer-service-workers-hiring-secuirty-privacy-stress-data-2022-6

[5] https://www.nytimes.com/2020/05/28/technology/clearview-ai-privacy-lawsuit.html

[6] https://www.nytimes.com/2020/05/28/technology/clearview-ai-privacy-lawsuit.html

[7] https://www.nytimes.com/2022/05/09/technology/clearview-ai-suit.html

[8] https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html

[9] https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html

[10] https://www.jdsupra.com/legalnews/picture-this-illinois-federal-court-5185653/
