NYC – With more transactions occurring online – and, consequently, the number of data breaches increasing – biometrics are moving to the forefront of discussions as a leading way to securely authenticate users.
However, challenges remain. Biometrics are not yet widely adopted by consumers or enterprises – and for those who do authenticate via fingerprint, voice, eye scan or facial recognition, security risks still exist.
Maja Pantic, computing and research director at the Samsung AI Research Center, said at the WSJ Cyber Security Forum in NYC on Tuesday that the same emerging models that make biometrics possible can also be used to undermine the security model behind them.
“These models are powerful tools we can use to generate new data… but also fake data,” she said.
The Rise of Biometrics
Ellen Richey, vice chairman and head of risk and public policy at Visa, said that the company, which processes half a billion transactions a day, has seen a marked shift in how consumers buy products.
In today’s digitized world, customers have moved to purchasing goods online – meaning their data is exposed to large-scale breaches. Security experts and credit-card companies have looked to biometrics as a potential solution to this issue.
“Transactions have moved online, where ‘something you have’ doesn’t work – now it’s ‘something you know,’ also known as passwords,” Richey said. “Then what happened was the mass proliferation of data breaches. We have to solve for online authentication with something different than ‘what you have’ and ‘what you know,’ that is ‘what you are’ – or biometrics.”
Making matters easier, companies previously needed large amounts of data to recognize individuals – but advances in machine learning have boosted facial recognition and other biometric applications, allowing accurate matching from far fewer samples.
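To make that concrete: modern recognition systems typically map a face image to a compact embedding vector and compare vectors, so a person can be enrolled from only a handful of photos. The sketch below is purely illustrative – the toy embed_face function stands in for a pretrained deep model, and the similarity threshold is an assumption made for this example.

    import numpy as np

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Toy stand-in for a pretrained face-embedding network.
        A real system would run the image through a deep model trained on large
        face datasets; here we just downsample and normalize the pixels."""
        small = image[::8, ::8].astype(float).ravel()
        return small / (np.linalg.norm(small) + 1e-9)

    def enroll(photos: list) -> np.ndarray:
        """Build a template for a person from just a few photos by averaging embeddings."""
        return np.mean([embed_face(p) for p in photos], axis=0)

    def verify(template: np.ndarray, probe: np.ndarray, threshold: float = 0.8) -> bool:
        """Accept the probe image if its embedding is close enough to the enrolled template."""
        candidate = embed_face(probe)
        similarity = float(np.dot(template, candidate) /
                           (np.linalg.norm(template) * np.linalg.norm(candidate) + 1e-9))
        return similarity >= threshold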
Security Challenges Remain
Despite its promise, issues still exist when it comes to biometrics, panelists noted.
One such threat, dubbed “Deepfake,” is an artificial intelligence-based technique for synthesizing human images. Typically used to create fake pornographic videos or fake news, Deepfake also poses a risk to biometrics, as it can potentially be used to create fake profiles, said Pantic.
“People can use this for new voice profiles to trick the system for ID,” said Pantic. “One issue is that when people use the data for generating new data, they base it on this data they already have. That means you can produce profiles very close to existing profiles – and can create profiles.”
At Black Hat 2018, researchers released a slew of proof-of-concepts (PoCs) showing how voice authentication can be bypassed. One of these involved identifying a target, harvesting about 10 minutes of high-quality audio of the victim from public sources such as YouTube, and using those samples to create a fake voice profile.
Making matters worse, Pantic said that the industry is still very far from having the technology and knowledge needed to defeat Deepfake techniques.
Biometric Best Practices
How can companies implement biometrics while still protecting themselves? The answer is to remember that “there is no silver bullet,” said Richey.
For instance, Visa uses various forms of biometrics that take into account behavior, such as the way a user holds a mouse or phone, and the platform also inspects the data around the transactions themselves – for example, whether a customer has purchased from a website before, or where they live (based on opt-in data collection).
“In our security strategy we never rely on just one thing,” she said. “You can use biometrics, but also use other methods like behavioral characteristics.”
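Conceptually, that layered approach amounts to combining several weak signals – a biometric match score, behavioral cues and transaction history – into a single risk decision. The sketch below is a hypothetical illustration only; the signal names, weights and threshold are invented for this example and do not describe Visa’s actual system.

    from dataclasses import dataclass

    @dataclass
    class TransactionSignals:
        biometric_score: float      # 0..1, e.g. fingerprint or face-match confidence
        behavior_score: float       # 0..1, e.g. how the user holds or moves the device
        seen_merchant_before: bool  # has this customer bought from this site before?
        location_matches: bool      # does the transaction origin match opt-in location data?

    def risk_score(s: TransactionSignals) -> float:
        """Combine layered signals into one risk estimate (higher = riskier).
        The weights are illustrative only."""
        score = 0.4 * (1.0 - s.biometric_score)
        score += 0.3 * (1.0 - s.behavior_score)
        score += 0.2 * (0.0 if s.seen_merchant_before else 1.0)
        score += 0.1 * (0.0 if s.location_matches else 1.0)
        return score

    def approve(s: TransactionSignals, threshold: float = 0.5) -> bool:
        """No single factor decides; the transaction passes only if the combined risk stays low."""
        return risk_score(s) < threshold

    # A strong biometric match at a familiar merchant from a matching location scores as low risk;
    # weak matches in an unfamiliar context do not.
    print(approve(TransactionSignals(0.95, 0.8, True, True)))    # True
    print(approve(TransactionSignals(0.20, 0.3, False, False)))  # False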