The FBI isn’t saying whether it is using facial recognition technology to identify suspects involved in the Capitol attack on Wednesday. Neither are any of the other seven agencies CNET reached out to over the weekend. Even so, facial recognition app Clearview AI has confirmed a spike in searches of its database used by law enforcement.
Asked whether the bureau is using either Clearview’s services or its own facial recognition tools, the FBI skirted the question.
“While the FBI does not comment on investigations, as a matter of course, we will utilize a number of different investigative tools to pursue any lead that will further our investigations,” the bureau told CNET in an email.
The bureau also continues to encourage anyone with information, including photos and videos from the events at the Capitol, to submit them via its online portal. That includes local police forces, some of which have reportedly submitted leads based on Clearview searches.
Debate continues over the broader use of facial recognition by authorities, a controversy spurred in part by lawsuits over the misidentification of criminal suspects, and new municipal prohibitions aimed at protecting the privacy of those participating in First Amendment activity such as the Black Lives Matter protests of 2020.
Clearview has been the subject of several lawsuits over its data collection, and has previously been hit with cease-and-desist orders from Facebook, Twitter and YouTube for scraping user images. As CNET’s Queenie Wong wrote, though, Facebook, Twitter and Instagram have also been discovered feeding user data to a law enforcement monitoring tool called Geofeedia. All three social networks shut off access to the data after the American Civil Liberties Union raised alarms that the tool could be used to target activist hashtags and the neighborhoods of people of color.
The two facial recognition programs under the FBI’s Criminal Justice Information Services Division, per the bureau’s 2019 testimony, are the Facial Analysis, Comparison, and Evaluation (FACE) Services Unit and the Next Generation Identification (NGI) System. The FBI’s database contains at least 641 million images of US citizens and is one of the largest, but — with more than 3 billion photos used by over 2,400 law enforcement agencies — Clearview’s database dwarfs it.
The bureau’s comments differ from those in a Jan. 7 story from NBC, which reported that FBI intelligence analysts were sorting through more than 4,000 online tips, including photos and videos of suspects rioting, and that investigators were also employing facial recognition software to identify suspects.
In an email to CNET, Clearview confirmed CEO Hoan Ton-That’s remarks that database searches spiked 26% over usual weekday search volume on the day of the attacks, as originally reported by The New York Times.
Read more: Facial recognition’s fate could be decided in 2021
Facial recognition used on protesters
When Black Lives Matter protests spread across US cities in the summer of 2020, the FBI’s use of facial recognition to surveil peaceful protesters became a flashpoint in public safety debates. Even as cities across the US led legislative efforts to ban facial recognition, civil rights advocacy groups and privacy-minded lawmakers were confronted with pushback from federal agencies that argued the technology was crucial to preserving public safety.
The Drug Enforcement Administration was temporarily authorized to surveil Black Lives Matter protests last summer and has been known to use facial recognition databases; in 2019, the ACLU sued the agency in a bid to uncover the extent of that use. When CNET asked whether it was using facial recognition to investigate the Capitol attack, the DEA again declined to comment.
The Department of Homeland Security has likewise been monitoring social media use by members of the movement since protests began in Ferguson, Missouri, and Immigration and Customs Enforcement has used facial recognition to search driver’s license databases. Neither agency has so far responded to CNET’s request for comment.
As reported by The Washington Post, US Capitol Police are now facing a lawsuit over the use of their new National Capital Region Facial Recognition Investigative Leads System during the Lafayette Square protests that took place in June of 2020. Court documents revealed that 14 agencies have access to the system’s database of 1.4 million people, and that the system has been used more than 12,000 times since 2019.
When CNET asked US Capitol Police whether the new system is being used to investigate the Capitol attack, the agency didn’t immediately respond. In a Jan. 7 statement, though, Capitol Police Chief Steven Sund said “the USCP is continuing to review surveillance video and open source material to identify others who may be subject to criminal charges.”
Last summer, when some members of Congress demanded the agencies cease the use of facial recognition to surveil Black Lives Matter protests, the FBI defended its surveillance activity.
“Our efforts are focused on identifying, investigating, and disrupting individuals that are inciting violence and engaging in criminal activity,” the FBI said in a June 2020 email to CNET. “The FBI does not conduct surveillance based solely on First Amendment protected activity.”
Read more: Facial recognition has always troubled people of color. Everyone should listen
Ineffective identification
While some have questioned whether facial recognition concerns are made moot by the widespread use of facemasks, CNET sister publication ZDNet reports that recent DHS pilot technology has been successful in seeing through masks. The DHS claimed it was able to use AI systems to correctly identify 93% of unmasked individuals and 77% of masked individuals on average.
Even with the ability to see through masks, however, facial recognition often doesn’t accurately identify its subjects. The DHS’ results varied greatly from one system to another, with the best-performing technology reaching 96% accuracy even on masked subjects and the worst-performing systems reaching only 4%.
The results echo those offered by the National Institute of Standards and Technology in a 2019 report, which found facial recognition algorithms consistently misidentified people of color more often than white people. It used federal data sets containing roughly 18 million images of over 8 million people to evaluate a majority of the facial recognition industry — 189 software algorithms from 99 developers. The NIST report followed on the heels of a 2018 research paper that brought algorithmic biases to light, titled Gender Shades.
Despite the technology’s flaws, some have still managed to use facial recognition to target people based on race. Microsoft’s facial recognition tech was linked to the Chinese government’s tracking of ethnic Muslim groups. Microsoft then proffered the tech to the DEA, though it stopped after IBM and Amazon withdrew from the facial recognition space.
Arrests arising from the use of facial recognition have likewise become targets for civil rights lawsuits against federal agencies, and many cities across the US have banned facial recognition. Even so, it’s still in play.
Clearview has denied that its facial recognition technology contributes to racial misidentification.
“As a person of mixed race, this is especially important to me,” Ton-That said in a June 2020 statement. “We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of color.”
Legislative efforts opposing the use of facial recognition technology have also risen in the past year, including the Democrat-backed Facial Recognition and Biometric Technology Moratorium Act, which sought to bar law enforcement use of the technology until Congress passed a law lifting the moratorium.
The ACLU opposes the use of Clearview’s facial recognition to identify suspects in the Capitol attacks.
“A company that threatens to destroy privacy as we know it can’t restore its reputation this easily. Face recognition technology is unregulated by federal law, but its contribution to racist false arrests of Black people, its use to identify protesters demanding racial justice, and its potential for mass surveillance of communities of color have rightly led state and local governments across the country to stop its use by law enforcement,” the ACLU said in an emailed statement to CNET.
“If law enforcement use of face recognition technology is allowed to be normalized, we know who it will be used against most: members of Black and Brown communities who already suffer under a racist criminal enforcement system.”