Deepfake Attacks Are About to Surge, Experts Warn

New deepfake products and services are cropping up across the Dark Web.

Artificial intelligence and the rise of deepfake technology are something cybersecurity researchers have cautioned about for years, and now the threat has officially arrived. Cybercriminals are increasingly sharing, developing and deploying deepfake technologies to bypass biometric security protections and to commit crimes including blackmail, identity theft, social-engineering attacks and more, experts warn.

Time to get those cybersecurity defenses ready.

A drastic uptick in deepfake product and service offerings across the Dark Web is the first sign that a new wave of fraud is about to crash in, according to a new report from Recorded Future, which ominously predicted that deepfakes are catching on among threat actors with an enormous range of goals and interests.

“Within the next few years, both criminal and nation-state threat actors involved in disinformation and influence operations will likely gravitate towards deepfakes, as online media consumption shifts more into ‘seeing is believing’ and the bet that a proportion of the online community will continue to be susceptible to false or misleading information,” the Recorded Future report said.

Like most novel technologies, deepfakes found their first incubator in pornography, the report pointed out. But now that the technology is ricocheting around the crime-ridden corners of the internet, its development is getting supercharged by seasoned cybercriminals.

Deepfake Tech on the Dark Web

Right now, the researchers said, discussions among threat actors about deepfake products and technologies are largely concentrated in English- and Russian-language criminal forums, but related topics were also observed on Turkish-, Spanish- and Chinese-language forums.

Much of the chatter in these underground forums is focused on how-tos and best practices, according to Recorded Future, which suggests a widespread effort across the cybercrime underground to sharpen deepfake tools.

“The most common deepfake-related topics on dark web forums included services (editing videos and pictures), how-to methods and lessons, requests for best practices, sharing free software downloads and photo generators, general interests in deepfakes, and announcements on advancements in deepfake technologies,” the report added. “There is a strong clearnet presence and interest in deepfake technology, consisting of open-source deepfake tools, dedicated forums, and discussions on popular messenger applications such as Telegram and Discord.”

Deepfakes & Malicious Synthetic Media

Last summer, FireEye used the Black Hat USA 2020 event to warn audiences about how widely available open-source deepfake tools have become, complete with pre-trained natural-language processing, computer-vision and speech-recognition models: just about everything a threat actor might need to develop what the firm called malicious "synthetic media." FireEye staff scientist Philip Tully said at the time that the world was in the "calm before the storm."

The storm seems to be brewing just over the horizon.

Experian likewise released a report recently that called synthetic identity fraud the fastest-growing type of financial cybercrime.

“The progressive uptick in synthetic identity fraud is likely due to multiple factors, including data breaches, dark web data access and the competitive lending landscape,” the Experian “Future of Fraud Forecast” said. “As methods for fraud detection continue to mature, Experian expects fraudsters to use fake faces for biometric verification. These ‘Frankenstein faces’ will use AI to combine facial characteristics from different people to form a new identity, creating a challenge for businesses relying on facial recognition technology as a significant part of their fraud prevention strategy.”

The rising threat of deepfake technology has been discussed for years. Back in 2019, deepfake artist Hao Li sounded an alarm that AI in the hands of cybercriminals would be a formidable security threat.

“I believe it will soon be a point where it isn’t possible to detect if videos are fake or not,” Li told Threatpost in the fall of 2019. “We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences.”

There have already been a few successful deepfake cybercrimes. In September 2019, cybercriminals used faked audio mimicking a CEO's voice to convince an employee to transfer $243,000 to the attackers' bank account.

Protecting Against Deepfakes, Synthetic Media

Cyber-expert Brian Foster (now a strategic advisor to Awingu) recently explained that protecting against deepfakes is going to require a drastic rethink of the traditional approach. Foster envisioned an automated, zero-trust system that itself leverages AI and machine learning to analyze multiple security parameters.

"Overall, the more we can automate and use intelligence to accomplish verification processes, the better," Foster advised. "This approach relies less on humans, who, let's face it, make lots of mistakes, and more on innovative best practices and tools that can be implemented far faster and more successfully than any static corporate policy."
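
To make that idea concrete, here is a minimal sketch of the kind of multi-signal, zero-trust check Foster describes. Everything in it (the signal names, weights and thresholds) is a hypothetical illustration, not Foster's design or any vendor's API; a real system would learn its weights from data and draw on far more signals.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical per-request signals; a real system would gather many more."""
    liveness_score: float      # 0.0-1.0 from a biometric liveness check
    device_reputation: float   # 0.0-1.0 from device fingerprinting history
    behavior_score: float      # 0.0-1.0 from typing/mouse behavioral analytics
    geo_consistency: float     # 0.0-1.0, how well the location fits past patterns

# Illustrative weights only; a production system would tune these from data.
WEIGHTS = {
    "liveness_score": 0.40,
    "device_reputation": 0.20,
    "behavior_score": 0.25,
    "geo_consistency": 0.15,
}

def risk_decision(signals: VerificationSignals, threshold: float = 0.7) -> str:
    """Combine independent signals so that no single spoofed check (e.g. a
    deepfaked face passing liveness) is enough to grant access on its own."""
    score = sum(WEIGHTS[name] * getattr(signals, name) for name in WEIGHTS)
    if score >= threshold:
        return "allow"
    if score >= threshold - 0.2:
        return "step-up"  # require an extra factor instead of trusting biometrics alone
    return "deny"

if __name__ == "__main__":
    # A deepfake may fool the camera (high liveness) but rarely the other signals.
    suspicious = VerificationSignals(liveness_score=0.95, device_reputation=0.1,
                                     behavior_score=0.2, geo_consistency=0.3)
    print(risk_decision(suspicious))  # -> "deny"
```

The design point is that biometric verification becomes one weighted input among several, so a deepfake that fools the camera still fails the overall check unless the surrounding signals also line up.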
