
Clearview AI scraped over 30 billion photos from social media without users’ knowledge

Kyle Chua


The scraped images are part of the firm’s growing facial recognition database that’s used by law enforcement agencies across the US

Clearview AI, a controversial firm providing law enforcement in the United States with facial recognition software, scraped more than 30 billion photos from social media to beef up its database, and it did so without users’ knowledge.

CEO Hoan Ton-That disclosed the statistic in an interview with BBC in late March, adding that US police had run nearly a million searches.

Scraping describes the practice of automatically extracting data online, typically using scraper bots. In Clearview AI’s case, the firm scraped photos from Facebook, among other social media sites, to further grow the data available to its facial recognition software, which it claims can help identify suspects of a crime or exonerate those who’ve been wrongfully accused.
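To make the mechanics concrete, here is a minimal, hypothetical sketch of what automated extraction looks like. This is not Clearview AI's code; it only illustrates the general technique of a scraper bot parsing a fetched page for image URLs, using an example page made up for this sketch. At scale, logic like this is paired with a crawler that fetches millions of pages.

```python
# Illustrative sketch of image scraping: a parser that walks an HTML
# document and collects every <img> source URL it finds.
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(value)

# A stand-in for a fetched profile page (hypothetical markup);
# a real bot would download this over the network.
page = """
<html><body>
  <img src="https://example.com/photos/profile.jpg" alt="profile">
  <p>Wedding album</p>
  <img src="https://example.com/photos/group.jpg" alt="group shot">
</body></html>
"""

scraper = ImageScraper()
scraper.feed(page)
print(scraper.image_urls)
```

Each collected URL would then be downloaded and the face in the photo indexed, which is why a person need only appear in someone else's public photo to end up in such a database.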

Ton-That defended the practice, saying in a statement to Insider, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

“Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public,” he added. “Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Critics, however, think Clearview AI is committing data privacy violations.

“Clearview is a total affront to peoples’ rights, full stop, and police should not be able to use this tool,” Caitlin Seeley George, director of campaigns and operations for Fight for the Future, a nonprofit digital rights advocacy group, told Insider.

Matthew Guariglia of the Electronic Frontier Foundation, another nonprofit defending digital privacy rights, meanwhile, warns the public that the technology essentially puts people in a “perpetual police line-up.”

“Whenever they have a photo of a suspect, they will compare it to your face,” he told BBC. “It’s far too invasive.”

Guariglia further warns that even social media users who are aware of Clearview AI's practices and don't make their photos public aren't safe from the firm's reach. The software supposedly recognizes people anywhere on the web. That means someone can simply appear in the background of a friend's wedding photo and still have their face added to the software's database.

Facebook has policies against data scraping, and violating them can result in being banned on the platform.

“Clearview AI’s actions invade people’s privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services,” wrote a Meta spokesperson in an email to Insider.

Even with Facebook's policies against scraping, though, Guariglia says users can still find themselves in Clearview AI's database, which is why he calls privacy "a team sport."

Clearview AI reportedly said it works with more than 3,100 US agencies, including the FBI and the Department of Homeland Security. BBC's report said the Miami Police, for example, admitted to using the technology in all kinds of cases, from shoplifting to murder. – Rappler.com
