Though the federal government makes wide use of facial recognition for purposes ranging from criminal investigations to collecting traveler data, that use is largely unmonitored and unmanaged, a situation that must change to protect people’s privacy and avoid misidentifying suspects, a government watchdog report has found.
A recent report (PDF) by the United States Government Accountability Office (GAO) surveyed 42 federal agencies that employ law-enforcement officers and found that 20 of them, many not typically involved in criminal investigations, either operate their own facial-recognition systems or use systems owned and maintained by other agencies or by companies such as Clearview AI and Amazon’s Rekognition.
Of the 20 agencies, only the Department of Veterans Affairs Police Service, the FBI and NASA’s Office of Protective Services have their own facial-recognition systems; the rest either piggyback on those systems or use ones owned by outside entities, according to the GAO.
However, a number of these agencies don’t always know how facial-recognition technology is being used, which third parties are collecting the data or how it’s managed afterward, the report found. This is especially concerning given that the technology has faced numerous challenges, some from federal lawmakers themselves, aimed at limiting its use even for legitimate purposes.
“Thirteen federal agencies do not have awareness of what non-federal systems with facial-recognition technology are used by employees,” according to the GAO. “These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy.”
Targeted Uses for Facial Recognition
Most of the government’s use of facial-recognition technology serves a specific purpose; U.S. Customs and Border Protection’s scanning of travelers’ faces as they enter the United States at airports is one example. Other applications are more targeted, tied to what the feds consider criminal investigations.
The U.S. Capitol Police, for example, are using Clearview AI in their investigation of rioters involved in the Jan. 6 breach of the Capitol, the GAO found. The Capitol Police also used facial-recognition technology belonging to the Montgomery County Department of Police in Maryland to investigate protesters who gathered outside the White House.
Six agencies also reported using the technology on images of the Black Lives Matter protests and the ensuing riots that followed the death of George Floyd in May 2020, according to the report. In fact, police departments across the country came under fire for using facial recognition to arrest allegedly violent BLM protesters long after the protests had ended. Those actions ultimately led to a proposed Senate bill that aimed to extend nationwide some of the restrictions states have already placed on the collection of facial-recognition data.
Other federal use of facial recognition is aimed at convenience, as in responses to COVID-19 public-health recommendations, the report found. The Administrative Office of the U.S. Courts’ Probation and Pretrial Services used the technology in a voluntary program that let people ordered by a court to stay at home verify their identity via a mobile app rather than through physical contact with a probation or pretrial officer.
While uses like these are not in and of themselves a violation of people’s privacy, problems can arise because agencies aren’t always clear on how facial-recognition technology is being used, or don’t even know who is collecting and managing the data, according to the report.
“All 14 agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities,” the report found. “However, only one has awareness of what non-federal systems are used by employees. By having a mechanism to track what non-federal systems are used by employees and assessing related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public.”
Controversy Continues over Biometric Privacy
Indeed, facial recognition has long been a controversial technology and an especially sore subject with privacy experts, prompting government regulation and court challenges to the collection of this type of biometric data.
In fact, one of the companies cited in the report as a partner of federal agencies, Clearview AI, has drawn particular ire as well as legal challenges. The New York-based company’s collection of facial-recognition data, which it calls “faceprints,” was found to be illegal earlier this year by a joint investigation of Canadian privacy authorities, led by the Office of the Privacy Commissioner of Canada, for violating federal and provincial privacy laws. The company was also sued last year by the ACLU, and last November the Los Angeles Police Department banned its use.
The Canadian ruling could set a precedent for other legal proceedings against Clearview AI, as well as shape how the use of facial-recognition technology is legislated and monitored in general, underscoring the need for U.S. government agencies to get their own use of the technology in order.
For its part, the GAO made two recommendations to each of the 13 federal agencies that lack monitoring of their facial-recognition use: “to implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems.” Twelve of the 13 agencies agreed with the recommendations; the U.S. Postal Service agreed with one and partially agreed with the other, according to the report.