Microsoft Joins Ban on Sale of Facial Recognition Tech to Police

Microsoft has joined Amazon and IBM in banning the sale of facial recognition technology to police departments and pushing for federal laws to regulate the technology.

Microsoft is joining Amazon and IBM in halting the sale of facial recognition technology to police departments. Microsoft President Brad Smith said Thursday that the ban will remain in place until federal laws regulating the technology’s use are enacted.

“We will not sell facial recognition tech to police in the U.S. until there is a national law in place… We must pursue a national law to govern facial recognition grounded in the protection of human rights,” Smith said during a virtual event hosted by the Washington Post.

On Wednesday, Amazon announced a one-year ban on police use of its facial recognition technology. In a short statement, the company said it would push for “stronger regulations to govern the ethical use of facial recognition technology.”

The actions by both tech behemoths dovetail with a move by IBM earlier this week. IBM’s new CEO, Arvind Krishna, said the company will no longer offer general-purpose facial recognition or analysis software “for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”

Krishna’s statements were part of a letter to Congress in which he advocated policy reviews around “police reform, responsible use of technology, and broadening skills and educational opportunities.”

The moves align with broader demands by social justice activists for law enforcement reform and racial justice in the wake of the killing of George Floyd by Minneapolis police and the weeks of protests that followed.

“It should not have taken the police killings of George Floyd, Breonna Taylor, and far too many other black people, hundreds of thousands of people taking to the streets, brutal law enforcement attacks against protesters and journalists, and the deployment of military-grade surveillance equipment on protests led by black activists for these companies to wake up to the everyday realities of police surveillance for black and brown communities,” said Matt Cagle, technology and civil liberties attorney with the American Civil Liberties Union of Northern California, in a statement to NBC News this week.

Boom in Technology Prompts Privacy Alarms

The debate over the use of facial recognition has been simmering for years. Big questions about privacy, civil rights and civil liberties have been raised by the American Civil Liberties Union (ACLU), Surveillance Technology Oversight Project and the Electronic Frontier Foundation (EFF).

Objections to police use of facial recognition include a lack of consent by citizens to have their biometric profiles captured by law enforcement agencies. Civil liberties activists argue the technology is imperfect and could lead to mistaken detainments or arrests. The EFF cites a 2012 FBI study (.pdf) that found the accuracy rates of facial recognition for African Americans were lower than for other demographics.

“Face recognition can be used to target people engaging in protected speech. For example, during protests surrounding the death of Freddie Gray, the Baltimore Police Department ran social media photos through face recognition to identify protesters and arrest them,” the EFF wrote.

In March, the ACLU filed a suit against the Department of Homeland Security (DHS) over its use of facial recognition technology in airports, decrying the government’s “extraordinarily dangerous path” toward normalizing facial surveillance as well as its secrecy around specific details of the plan.

Currently, 22 airports are using what is called the Traveler Verification Service (TVS), which as of June 2019 had scanned the faces of more than 20 million travelers entering and exiting the country, the ACLU said. Several major airlines, including Delta, JetBlue and United Airlines, have already partnered with U.S. Customs and Border Protection to build this surveillance infrastructure, while more than 20 other airlines and airports have committed to using CBP’s face-matching technology.

Facial recognition has also come under fire over its use globally to track the spread of the coronavirus. The technology is seen as a zero-contact solution for identifying and tracking individuals exposed to someone infected with COVID-19.

Hawaii’s KHON2 News reported Thursday that the U.S. Department of Transportation is behind a test of facial recognition technology at Honolulu’s international airport. It reported that “facial recognition will be tested along with thermal temperature scanning… in the next couple of weeks.”

Political Prospects of Change

Staunch privacy advocate U.S. Sen. Ron Wyden on Thursday urged the Trump administration to stop “weaponizing” facial recognition technology against protesters. In a letter to Attorney General William Barr and the Department of Homeland Security, co-signed by U.S. Sens. Cory Booker and Sherrod Brown, Wyden chided federal law enforcement for using facial recognition technology on peaceful protesters marching against the police killing of George Floyd.

“Advances in facial recognition technologies should not be weaponized to victimize Americans across the nation who are standing up for change,” Wyden wrote. “It is no secret that Clearview AI’s controversial facial recognition tool is used by law enforcement throughout your departments despite the numerous legal challenges it faces. However, scientific studies have repeatedly shown that facial recognition algorithms are significantly less accurate for people with non-white skin tones.”

One legal victory came last September, when California lawmakers passed a bill banning law enforcement use of facial recognition-equipped cameras. Meanwhile, a number of legal challenges seek to slow the widespread use of the technology.

Last month the ACLU sued New York-based facial-recognition startup Clearview AI for amassing a database of biometric face-identification data of billions of people and selling it to third parties without their consent or knowledge. The complaint, filed in Circuit Court of Cook County in Illinois, accused the company of violating an Illinois law that protects people “against the surreptitious and nonconsensual capture of their biometric identifiers.”

Clearview AI founder Hoan Ton-That has defended his company’s practices and intentions. He said he welcomes the privacy debate, stating in various published reports that the technology is meant to be used by law enforcement to help solve crimes, not to violate people’s privacy.

Whether or not Microsoft, Amazon and IBM have the market might and political capital to force new regulations is unclear. Meanwhile, the EFF notes that the list of firms selling facial recognition technology is long, including 3M, Cognitec, DataWorks Plus, Dynamic Imaging Systems, FaceFirst and NEC Global.
