
Secretive face-matching startup has customer list stolen



oops —

Losing data to an intruder is not a great look for a law enforcement partner.

A video surveillance camera hangs from the side of a building on May 14, 2019, in San Francisco, California.

Clearview, a secretive facial-recognition startup that claims to scrape the Internet for images to use, has itself now had data unexpectedly scraped, in a manner of speaking. Someone apparently popped into the company's system and stole its entire client list, which Clearview to date has refused to share.


Clearview notified its customers about the leak today, according to The Daily Beast, which obtained a copy of the notification. The memo says an intruder accessed the list of customers, as well as the number of user accounts those customers set up and the number of searches those accounts have conducted.


"Unfortunately, data breaches are part of life in the 21st century," Tor Ekeland, an attorney for Clearview, told The Daily Beast. "Our servers were never accessed. We patched the flaw and continue to work to strengthen our security."


Clearview vaulted from obscurity to the front page following a report by The New York Times in January. The paper described Clearview as a "groundbreaking" service that could end privacy as we know it.


The company at the time claimed to have in place 600 agreements with law enforcement agencies to use its software, which allegedly aggregated more than 3 billion facial images from other apps, platforms, and services. Those other platforms and their parent companies—including Twitter, Google (YouTube), Facebook (and Instagram), Microsoft (LinkedIn), and Venmo—all sent Clearview cease and desist letters, claiming its aggregation of images from their services violates their policies.


Clearview, which stresses its service is "available only to law enforcement agencies and select security professionals," refused repeatedly to share client lists with reporters from several outlets. Reporters from The New York Times and BuzzFeed both dove into several of the company's marketing claims and found some strong exaggerations. Clearview boasts that its technology helped lead to the arrest of a would-be terrorist in New York City, for example, but the NYPD told BuzzFeed Clearview had nothing to do with the case.


In the face of public criticism, the company made exactly two blog posts, each precisely two paragraphs long. The first, under the subject line "Clearview is not a consumer application," insists, "Clearview is NOT available to the public," emphasis theirs. It adds, "While many people have advised us that a public version would be more profitable, we have rejected the idea."


Four days later, the company added another post, stressing that its code of conduct "mandates that investigators use our technology in a safe and ethical manner." While "powerful tools always have the potential to be abused," the company wrote, its app "has built-in safeguards to ensure these trained professionals only use it for its intended purpose."


Clearview did not at any point say what these safeguards might be, however, nor has it explained who qualifies as "select security professionals."


Other companies that partner with law enforcement on surveillance technologies have not always succeeded in keeping their client lists on the down-low, either. Amazon, for example, attempted just that with its Ring line of products. After repeated media reports drew out the details, however, Ring finally went public with a list of 405 agencies last August and has kept updating it since; as of at least February 13, the list stood at 967 deals.



Source: Secretive face-matching startup has customer list stolen (Ars Technica)  


Clearview AI, The Company Whose Database Has Amassed 3 Billion Photos, Hacked


By  Kate O'Flaherty




Clearview AI, the company whose database has amassed over 3 billion photos, has suffered a data breach, it has emerged. The data stolen in the hack included the firm's entire customer list, which will include multiple law enforcement agencies, along with information such as the number of searches each customer had made and how many accounts they'd set up.


Clearview AI said the huge database of images was not part of the breach. The firm's attorney, Tor Ekeland, said in a statement to the Daily Beast that security "is Clearview's top priority."


"Unfortunately, data breaches are part of life in the 21st century,” Ekeland said. “Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security."

Clearview AI: Already a privacy concern

Clearview AI rose to fame last month, when the New York Times detailed how the company’s facial recognition program had scraped sources including Facebook and Twitter to build its massive database.


Used by law enforcement agencies including the FBI, Clearview AI is now coming under increasing scrutiny. A lawsuit was filed against the firm in Illinois, alleging that Clearview’s actions are a threat to civil liberties.


Meanwhile, reports are emerging that police departments have been stopped from using Clearview AI, and Twitter and others have sent cease-and-desist letters to the company.


I also reported last month that if you want your image removed from Clearview’s database, you have to prove who you are by providing a headshot and a photo of your government issued ID.


Although Clearview says it helps prevent crime, its technology is reportedly accurate only around 75% of the time, and even that claim cannot be proven. Adding to this, because facial recognition matching is all about probabilities, the sheer breadth of Clearview AI's database makes accurate matches even harder to achieve.

The Clearview AI data breach: Does it matter?

Some people will be relieved that the breach did not expose Clearview's facial recognition database, but others, such as the firm's customers, won't be happy at all.


As Jake Moore, cybersecurity specialist at ESET says: “Data breaches might be part of life in the 21st century but we need to make sure the severity is kept to a minimum and the data exposed is heavily encrypted. Any data breach is serious and should not be taken lightly. If the data exposed had included faces, it would have taken this to the next level.”


He adds: “Companies which hold extremely sensitive data such as facial identities need to understand they are a higher profile risk and need even more layers of protection to thwart these inevitable attacks.”


Following the New York Times story and associated fallout, Clearview AI’s reputation was already tarnished. This breach, which some might see as a warning signal to all that store facial recognition data, could make things a lot worse.



