by Vittorio Bollo
If, as the saying goes, the eyes are the windows to the soul, then the face is the window into the modern surveillance of people. Today, facial recognition is one of the leading methods of biometric recognition and a rapidly growing biometric technology, projected to be a $7.7 billion industry by 2022.
One of the leading companies in the facial recognition technology game, Clearview AI, has come under immense scrutiny in recent months. What is this technology that is being aggressively promoted by Clearview AI, particularly in the law enforcement and counter-terrorism fields? To what extent are the concerns about this fast-emerging technology valid? What is the consensus, both for and against, regarding facial recognition generally, specifically in the legal/legislative domain?
What Is Clearview AI?
Clearview AI is the brainchild of Hoan Ton-That, an Australian-born techie of Vietnamese descent who moved to San Francisco at the age of 19. He launched the Manhattan-based start-up Clearview AI in 2017, together with Richard Schwartz, a former adviser to Rudy Giuliani when the latter was the Republican mayor of New York City. Financial backing was provided by conservative venture capitalist and early Facebook investor, Peter Thiel.
Clearview is indeed groundbreaking, as explained in a recent exposé about the company by The New York Times: a picture of a person, say, walking in the street, can be taken and uploaded via the software, and all the publicly available photos of that person, including links to where those photos appeared online, will be made instantly available. No other facial recognition technology can claim to do the same – not for now, at least. Little wonder that the article was titled, “The Secretive Company That Might End Privacy as We Know It.”
How does Clearview achieve this? Via its database of more than three billion images that it’s said to have scraped from Facebook, YouTube, Instagram, Twitter, and other websites. It should be noted that many of these companies prohibit scraping from their sites. The New York Times reported that Twitter had in fact informed Clearview (in the form of a cease-and-desist letter) that its facial recognition app was in violation of Twitter policies. Clearview also received a cease-and-desist letter from the attorney general of New Jersey.
Not surprisingly, various federal and state authorities, particularly those in law enforcement, have expressed great interest in what Clearview can achieve in the investigation and prosecution of everything from identity theft and credit card fraud to murder and child sexual abuse. The New York Times has estimated that more than 600 law enforcement agencies have been secretly using the technology since early 2019, though Clearview declined to list them when asked.
Clearview proclaims a simple yet bold mission statement on its website: “Clearview is a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes.” Its claims are even bolder: “Clearview’s technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.”
The Clearview website includes a glowing testimonial by an (unnamed) “Detective Constable in Canada’s Sex Crimes Unit,” who states, “Clearview is hands-down the best thing that has happened to victim identification in the last 10 years. Within a week and a half of using Clearview, [we] made eight identifications of either victims or offenders through the use of this new tool”.
What Are the Concerns Regarding Clearview’s Technology?
The app’s ability to undermine privacy is potentially immense. For example, it’s believed that the app’s underlying code could be paired with augmented reality (AR) glasses, thereby allowing the wearer (for example, an anti-riot police officer at a political rally) to potentially identify every person they saw. The information revealed would be highly personal, including what a person’s interests might be, what they have seen and bought online, and even where they live. Many critics, including the San Francisco Chronicle, the American Civil Liberties Union (ACLU), and noted privacy academics, argue that this technology is highly invasive.
On the positive side, there are concrete cases of law enforcement securing convictions with the help of the Clearview app. For example, the Indiana State Police solved a case in which two men had gotten into a fight in a park and one shot the other in the stomach. The shooter had been recorded on someone’s phone, but he couldn’t be identified using existing police databases: he’d never been arrested, nor did he have a driver’s license. The shooter was identified within 20 minutes of Indiana police using the app.
The app made a match based on a video posted on social media in which the shooter appeared, and his name was included in a caption for the video. An Indiana State Police captain, Chuck Cohen, believed the shooter probably wouldn’t have been identified had the Clearview app not been used. Detective Sgt. Nick Ferrara of Gainesville, FL, had similar praise for the app, which he found far more accurate than the state-provided facial recognition tool (called FACES) he had previously used. Ferrara noted how, “With Clearview, you can use photos that aren’t perfect. A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.” Notably, Florida’s FACES tool has been at the center of a lawsuit in that state over the tool’s problematic (and often incorrect) algorithm.
It could also be argued that the sheer size of the Clearview database is hugely beneficial to law enforcement and anti-terrorism agencies: its haul of three billion images makes all other databases pale in comparison, including those of the LAPD (8 million images), the Florida State Police (47 million images), and even the FBI (411 million images).
On the negative side, besides the privacy issues, another concern is Clearview itself. The incisive reportage by The New York Times gave an almost creepy insight into what its reporters went through during their investigation of the hyper-secretive company. For instance, the article’s lead writer discovered that a number of police officers who had run his photo through the Clearview app had received inquiries from Clearview’s representatives about any ‘media interest’ in the company, something the writer found especially chilling.
It also turned out that the company’s Manhattan address was bogus and that Ton-That himself had used a fake name on the company’s LinkedIn profile. Why so much secrecy, even subterfuge? Why go to those lengths? What may seem minor points may reveal more about the company’s ethos than it cares to admit.
Ton-That himself has a somewhat troubling background. He had previously created software such as ViddyHo, a phishing website that tricked users into sharing access to their Gmail accounts, as well as fastforwarded.com, a similar phishing site that tried to steal users’ passwords.
What of Facial Recognition Technology Generally?
Facial recognition has been problematic for some time now. It has even been considered taboo by many tech companies, including Google, itself hardly a paragon of privacy protection. As early as 2011, Google’s chairman stated that facial recognition was the one technology the company had not ventured into because it could be used “in a very bad way.”
One of the leading concerns with facial recognition technology is that, however accurate it might be, it inevitably delivers false matches. What then? Even technology tested by an independent party such as the National Institute of Standards and Technology (NIST) can be problematic when it comes to false positives. These can occur due to what Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, calls “the doppelgänger effect”: similar-looking people are falsely flagged, which becomes more probable the larger a database is.
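The scale effect behind this can be illustrated with some simple probability. The sketch below is purely illustrative: the per-comparison false-match rate and the database sizes are hypothetical round numbers, and it assumes each comparison is independent, which real matchers do not guarantee.

```python
def prob_at_least_one_false_match(per_comparison_rate: float, db_size: int) -> float:
    """Chance that one probe face falsely matches at least one of
    db_size unrelated faces, assuming independent comparisons."""
    return 1 - (1 - per_comparison_rate) ** db_size

# Hypothetical matcher with a 1-in-100-million false-match rate:
# against ~8 million images (roughly LAPD scale), a false hit is unlikely;
# against 3 billion images (Clearview scale), it is a near certainty.
print(prob_at_least_one_false_match(1e-8, 8_000_000))      # ~0.08
print(prob_at_least_one_false_match(1e-8, 3_000_000_000))  # ~1.0
```

The point of the toy model is simply that accuracy per comparison is not the whole story: the same matcher that rarely errs against a police mugshot database can be expected to produce doppelgänger hits routinely when run against billions of scraped images.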
The Legal Implications of Clearview, et al.
There is no denying that push-back against facial recognition technology is afoot and, as is so often the case when disruptive technologies come along, that push-back is primarily legal. Clearview has undoubtedly been the lightning rod for this recent surge. Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, views Clearview as final proof that facial recognition should be banned in the United States. He is emphatic about what he believes needs to be done: “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
There have been various legal initiatives against facial recognition technology throughout the US. In May 2019, San Francisco became the first city in the country to ban government use of face-based surveillance, followed by neighboring Oakland in June 2019. The following month, Somerville, MA became the first city on the East Coast to ban the technology, and Cambridge, MA voted in a similar ban in January 2020.
States are following suit. In October 2019, California’s Governor Gavin Newsom signed A.B. 1215, a state bill that prohibits the use of face recognition by law enforcement for three years, including face recognition software on body cameras worn by police officers. New Hampshire and Oregon have similar body camera laws in place, with further legislation being introduced in the legislatures of Massachusetts, Michigan, New York, and Washington. Illinois has passed a law that permits individuals to sue any entity over the non-consensual collection and use of a range of their biometric data, including data derived from facial recognition technology.
America is not the only country grappling with the legal consequences of facial recognition technology. In the UK, civil liberties groups have condemned the decision by London’s Metropolitan Police to use facial recognition technology – specifically, live facial recognition (LFR) cameras – as nothing less than “a breathtaking assault on our rights.” The Metropolitan Police declared the surveillance system was 70% effective at spotting wanted suspects, but Professor Pete Fussey, a surveillance expert at the University of Essex, disagreed. His independent study found that the technology was verifiably accurate in only 19% of cases. Fussey bluntly stated, “I stand by our findings. I don’t know how [the London police] get to 70%.”
In the US, a landmark class action lawsuit against Clearview has been launched by David Mutnick, who argues that the company has violated the Illinois Biometric Information Privacy Act (BIPA), a law that requires companies to obtain explicit consent before collecting any biometric data. The lawsuit calls for an injunction requiring Clearview to delete any stored (read: scraped) biometric data it holds on Mutnick and any other residents of Illinois. In America, the class action lawsuit has historically been the leading edge of challenges to contentious, socially divisive practices, and so it should be taken seriously, even if the plaintiff fails to win this time around.
Final Thoughts on Clearview AI and Facial Recognition Technology
There is no denying that the company known as Clearview AI is highly contentious and legally problematic. The same can be said of all facial recognition technology. However, the issue is multi-faceted and, more importantly, will not simply disappear. On the one hand, there is no denying the privacy and civil rights quagmire that this technology represents. On the other hand, to wish it away would be fundamentally naive because it is already a reality. Even banning it could prove near impossible.
For all its unsettling secrecy, Clearview will likely be supplanted by companies that are bigger, better financed, and a whole lot more powerful. For now, the status quo is not fully Orwellian…yet.
What to suggest? Perhaps the best strategy would be to proceed with this technology with much wariness and extreme caution. Blithely steaming ahead with its use without any checks and balances would be foolish and reckless. This is one emerging technology that surely requires stringent regulation and, perhaps for now, even a moratorium until it is better understood and can be properly regulated. Ultimately, nothing less than our individual and collective privacy rights are at stake.