Racial bias in facial recognition technology landed Nijeer Parks ten days in jail in 2019 after the software falsely identified him as a shoplifting suspect, a new lawsuit says.
It didn’t matter that he hadn’t been to the location of the crime, a Hampton Inn hotel in Woodbridge, New Jersey, according to Parks. The tech fingered him and that was enough for police, he said. A warrant was issued, and Parks had his cousin drive him to the station to explain they had the wrong guy.
“I had no idea what this was about,” Parks told NJ Advance Media. “I’d never been to Woodbridge before, didn’t even know for sure where it was.”
That didn’t matter, according to Parks, who said once he got to the station he was handcuffed and thrown in jail for ten days. The charges against him were later dismissed and prosecutors admitted the only evidence against Parks was from the department’s Clearview AI facial recognition technology.
“I did have a background, but I’d been home since 2016 and I had been in no trouble,” Parks said. “The whole thing scared the heck out of me. I’ve been trying to do the right thing with my life.”
Now Parks is suing the police department for locking him up based on nothing more than this faulty technology — and he is not alone.
Parks joins Robert Julian-Borchak Williams, who was wrongly arrested in Detroit earlier this year for allegedly stealing watches in 2018, after likewise being misidentified by facial recognition technology. In his case, it was DataWorks Plus facial recognition software used by the Michigan State Police that failed.
Williams was arrested in front of his wife and children and held overnight in the Detroit Detention Center until he was led to an interrogation room and shown video of the crime, Business Insider reported. Only then did investigators compare the suspect in the footage with Williams, and it was obvious police had the wrong man.
Racial Bias in Facial Recognition Software
About half of American adults, without their knowledge or consent, are included in law enforcement facial recognition databases, according to research from Harvard. This presents basic privacy concerns for all Americans.
But it’s Black Americans who face the greatest threat of injustice, according to Harvard’s Alex Najibi, who explained in a report from October that while overall accuracy rates for facial recognition tech hover around 90 percent, error rates vary widely across demographics, with the “poorest accuracy consistently found in subjects who are female, Black and 18-30 years old,” he wrote.
The idea that facial recognition software is racially biased isn’t anything new. The 2018 Gender Shades project and an independent assessment by the National Institute of Standards and Technology (NIST) have both come to the same conclusion: facial recognition technology is least accurate on Black Americans, and on Black women in particular.
Phasing Out Faulty Facial Recognition
Last November, citing privacy concerns and under pressure from watchdog groups like the American Civil Liberties Union (ACLU), the Los Angeles Police Department banned the Clearview AI facial recognition platform after personnel were revealed to have been using the database.
“[Clearview AI] has captured these faceprints in secret, without our knowledge, much less our consent, using everything from casual selfies to photos of birthday parties, college graduations, weddings and so much more,” ACLU staff attorney Nathan Freed Wessler wrote about the lawsuit last May.
“Unbeknownst to the public, this company has offered up this massive faceprint database to private companies, police, federal agencies and wealthy individuals, allowing them to secretly track and target whomever they wished using face-recognition technology.”
Last summer, the National Biometric Information Privacy Act was introduced in the Senate to put privacy protections in place, but until the law catches up, tech giants like Microsoft, Amazon and IBM have pledged to stop selling facial recognition to police departments.
“We will not sell facial-recognition tech to police in the U.S. until there is a national law in place…We must pursue a national law to govern facial recognition grounded in the protection of human rights,” Microsoft president Brad Smith said about the announcement.
Clearview CEO Hoan Ton-That defended his company’s product in a statement provided to Threatpost last September and pointed to its use by more than 2,000 law enforcement agencies to solve crimes and keep communities safe.
“Clearview AI is proud to be the leader in facial-recognition technology, with new features like our intake form — whereby each search is annotated with a case number and a crime type to ensure responsible use, facial-recognition training programs and strong auditing features.”