Reports about the potential for biometric scanning to usher in a new age of super snooping have prompted a retort from the industry itself, as one CEO used an interview with InfoSecurity.com this week to throw cold water on research from Carnegie Mellon University and argue that people – not face scanners – are the source of privacy breaches.
In an interview with InfoSecurity.com, Stewart Hefferman, the CEO of facial scanning firm OmniPerception, said that face scanning was getting a bad rap. Taking a page out of the National Rifle Association’s handbook, Hefferman made the argument that ‘face scanners don’t snoop on people…people snoop on people.’
“Properly used, [facial recognition] is absolutely non-threatening; and delivers huge benefits – improving safety and security in many areas of modern life,” Hefferman told InfoSecurity.com. Rather than blame the technology, researchers should “be focusing on how (it) is being used – and on the people who are using or abusing it,” he said.
Hefferman was reacting to a recent study by researchers at Carnegie Mellon University. As reported by Threatpost, CMU researcher Alessandro Acquisti and fellow researchers presented a paper at the Black Hat Briefings in Las Vegas last week that warned of a tipping point in the development of facial recognition and data mining technology that could enable companies or governments to pick faces out of a crowd and build real-time profiles of those individuals by matching their face scans with online images and other publicly available data on social media sites.
“There’s a blending of online and offline data, and your face is the conduit – the veritable link between these two worlds,” Acquisti told Threatpost. “I think the lesson is a rather gloomy one. We have to face the reality that our very notion of privacy is being eroded. You’re no longer private in the street or in a crowd. The mashup of all these technologies challenges our biological expectation of privacy.”
Nonsense, says Hefferman, who likens real-time face scanning and profiling of strangers to the kind of interpersonal, face-to-face interactions humans have been engaging in since the dawn of civilization.
“It’s not usually when we recognize people that we feel most vulnerable. It’s when we see balaclavas, hoodies or masks that we know we probably need to watch out. This modern obsession with face recognition as the enemy of privacy is a spurious and thoroughly unhelpful phenomenon”, he told InfoSecurity.com.
Of course, Hefferman is conflating two very different things. The human act of recognizing someone and associating a face with what you, personally, know about them is not what Acquisti is talking about: the automated, computer-enabled association of an unknown face with previously inaccessible troves of online data from the databases of marketers, retailers, for-profit firms and social networks.
Never mind that Acquisti wasn’t talking about divesting from research in facial recognition technology. He was merely suggesting that the rapid evolution of the technology is moving well ahead of policy protecting privacy and civil rights.
But that distinction is lost on Hefferman, who doesn’t fail to note that facial recognition technology can be used to fight terrorism and fraud, too (despite trials showing it’s less than perfect on either score). “We need to go on investing in the benefits of modern face recognition technology, if we are to bring it ever more effectively to bear in the global battle against terrorism, in fraud prevention and in protecting the safety and security of the public at large”, he noted.