Black Hat 2020: Open-Source AI to Spur Wave of ‘Synthetic Media’ Attacks


The explosion of open-source AI models is lowering the barrier to entry for bad actors to create fake video, audio and images – and Facebook, Twitter and other platforms aren’t ready.

An abundance of deep-learning and open-source technologies is making it easy for cybercriminals to generate fake images, text and audio called “synthetic media.” This type of media can be easily leveraged on Facebook, Twitter and other social-media platforms to launch disinformation campaigns with hijacked identities.

At a Wednesday session at Black Hat USA 2020, researchers with FireEye demonstrated how freely available, open-source tools – which offer pre-trained natural-language processing, computer-vision and speech-recognition models – can be used to create malicious synthetic media.

Synthetic media includes fake videos, voices and images that can be put to a range of malicious uses. For instance, cybercriminals can use generative text to forge legitimate-looking spearphishing emails. At a bigger scale, the same techniques can produce more damaging content, such as “fake porn” videos weaponized to harass targeted women. In other cases, synthetic media can be used to sway public opinion, as in wide-scale disinformation campaigns run through phony, but recognizable, personas.

“Fine tuning for generative impersonation in the text, image, and audio domains can be performed by nonexperts… [and] can be weaponized for offensive social media-driven information operations,” said Philip Tully, staff data scientist at FireEye, and Lee Foster, senior manager of information operations analysis at FireEye, during a Wednesday session.

Low Barrier to Entry

The world is currently facing the “calm before the storm” when it comes to the malicious use of synthetic media, Tully warned.

For one, social media has “greased the wheels” for this type of synthetic content to have a real malicious impact, said Tully. Social-media companies set a low bar for credibility and offer a platform for content to go viral, allowing almost anyone to spread believable fake media.

Second, the technology for creating synthetic media is becoming cheaper, easier, more pervasive and more credible – “drastically reducing the amount of time that it takes to make this happen,” he said. One concept lowering the barrier to entry is “transfer learning.” Previously, researchers using deep-learning models to create fake content had to train a separate model for each task. With transfer learning, a neural network first learns one task, and that learning is then fine-tuned for a second task.
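To make the idea concrete, the snippet below is a minimal transfer-learning sketch (an illustration of the concept, not FireEye’s code; it assumes the PyTorch and torchvision packages are installed): a classifier pretrained on ImageNet is frozen, and only a small new output head is trained for a second, two-class task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet (the "first task").
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new two-class head (the "second task").
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)   # stand-in for real training images
labels = torch.randint(0, 2, (8,))     # stand-in for real labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small new head is trained, the fine-tuning step needs far less data and compute than training a model from scratch – which is precisely what lowers the barrier to entry.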

This concept has paved the way to a “rich open-source model ecosystem,” said Tully. While these open-source models have many advantages – including for research and for detecting malicious AI-generated content – they are also enabling real-world, malicious fake content on social-media platforms.


For instance, FireEye researchers uncovered a widespread influence campaign, aligned with pro-Iranian interests, that impersonated and fabricated U.S. politicians and journalists on social media (the campaign led Facebook, Instagram and Twitter to take action against more than 40 accounts). In another instance, Foster said, networks of inauthentic social-media accounts were discovered amplifying political narratives, such as pro-China networks targeting protesters in Hong Kong and pushing COVID-19 pandemic narratives.

Black Hat Demonstration

Researchers demonstrated various open-source models that give both good and bad actors the means to create synthetic media. For the creation of fake images, they pointed to the style-based GAN architecture (StyleGAN), which allows data-driven, unconditional generative image modeling. The architecture consists of a “mapper,” which embeds inputs as intermediate visual features; a “generator,” which synthesizes images from scratch; and a “discriminator,” which predicts whether a given image is real or generated.
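To make those three roles concrete, here is a deliberately tiny GAN sketch in PyTorch (a toy illustration of the concept, not StyleGAN itself; all layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

latent_dim, feature_dim, img_dim = 64, 128, 784  # arbitrary toy sizes

# "Mapper": embeds a random latent code into an intermediate feature space.
mapper = nn.Sequential(nn.Linear(latent_dim, feature_dim), nn.ReLU())

# "Generator": synthesizes an image from the mapped features.
generator = nn.Sequential(nn.Linear(feature_dim, img_dim), nn.Tanh())

# "Discriminator": predicts whether an image is real or generated.
discriminator = nn.Sequential(nn.Linear(img_dim, 1), nn.Sigmoid())

# One adversarial round on dummy data.
real = torch.randn(16, img_dim)                      # stand-in for real images
fake = generator(mapper(torch.randn(16, latent_dim)))
bce = nn.BCELoss()

# The discriminator learns to separate real from fake...
d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(16, 1))

# ...while the generator learns to fool the discriminator.
g_loss = bce(discriminator(fake), torch.ones(16, 1))
```

Training alternates between the two losses until the generator’s output becomes hard to distinguish from real data.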

[Slide from the FireEye Black Hat USA session. Credit: FireEye]

Researchers also demonstrated voice cloning with SV2TTS, a three-stage deep-learning framework that let them create a numerical representation of a voice from just a few seconds of audio and use it to generate fake speech. At a technical level, the process starts by feeding a dataset into a “speaker encoder,” which embeds a speaker’s utterances. In the second stage, a text-to-speech platform called Tacotron2 generates a spectrogram from text, conditioned on those utterances; finally, a model called WaveRNN infers the audio waveform from the spectrograms. Voice impersonation is a top threat that cybercriminals are already focusing on – a voice “deepfake” last year swindled one company out of $243,000.
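Structurally, the three stages compose like the pipeline below (a sketch of the data flow only; the functions are hypothetical stand-ins for the trained networks, not a real library’s API):

```python
import numpy as np

def speaker_encoder(utterance: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: embed a few seconds of audio as a fixed-size
    numerical representation of the speaker's voice."""
    return np.zeros(256)  # placeholder for a learned speaker embedding

def tacotron2_synthesize(text: str, embedding: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: generate a spectrogram from text, conditioned
    on the speaker embedding."""
    return np.zeros((80, 200))  # placeholder mel spectrogram

def wavernn_vocode(spectrogram: np.ndarray) -> np.ndarray:
    """Stage 3 stand-in: infer an audio waveform from the spectrogram."""
    return np.zeros(16000)  # placeholder one second of audio

# A few seconds of the target speaker's audio is enough to condition output.
reference_audio = np.zeros(48000)             # stand-in for ~3 s of recording
embedding = speaker_encoder(reference_audio)
spectrogram = tacotron2_synthesize("Please wire the funds today.", embedding)
waveform = wavernn_vocode(spectrogram)
```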

Finally, researchers demonstrated the generation of “synthetic text,” achieved by fine-tuning the open-source language model GPT-2. GPT-2 is a deep neural network trained in an unsupervised manner on the causal language-modeling task: it learns to predict the next word in a sentence accurately, and so can ultimately compose full sentences.
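Out of the box, the pretrained model can already be sampled. Here is a minimal sketch using the Hugging Face transformers library (assumed installed; the prompt is invented for illustration):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt and let the model predict the words that follow.
inputs = tokenizer("The election was", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_length=40,
    do_sample=True,          # sample rather than take the single best word
    top_k=50,                # restrict sampling to the 50 likeliest words
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```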

A bad actor, the researchers said, could feed the model an “input” of open-source social-media posts from Russia’s Internet Research Agency (IRA), which they describe as a social-media “troll factory.” Fine-tuning on that input produces text generations in the same voice, which troll accounts can then post as part of online disinformation campaigns – lines such as “It’s disgraceful that our military has to be in Iraq and Syria.”
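A hedged sketch of that fine-tuning step, again using the transformers library (the file ira_posts.txt is a hypothetical stand-in for a scraped corpus):

```python
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical corpus: one scraped social-media post per line.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="ira_posts.txt",  # placeholder file name
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # after this, sampled text mimics the corpus's style
```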

The Future of ‘Synthetic Media’

There are various technical mitigations that can protect against deepfakes. These include machine learning-based forgery detection, which may look for tells such as misaligned eyes, teeth abnormalities, ear asymmetry, a lack of blinking and other artifacts in multimedia content.
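As a rough sketch of how such a detector might be applied (assumptions: a binary real/fake frame classifier has already been trained and saved to the hypothetical path detector.pt, and OpenCV and PyTorch are installed), per-frame fake scores can be averaged over a whole video:

```python
import cv2
import torch
from torchvision import transforms

# Hypothetical: a real/fake classifier saved whole after fine-tuning.
model = torch.load("detector.pt")
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

scores = []
video = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
        scores.append(torch.softmax(logits, dim=1)[0, 1].item())  # P(fake)
video.release()

print(f"mean fake score: {sum(scores) / max(len(scores), 1):.3f}")
```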

Social-media platforms can also adopt content-authentication measures, such as verifying accounts or moderating content for fact-checking. Facebook, Microsoft and a number of universities have meanwhile joined forces to sponsor a contest promoting research and development to combat deepfakes. And Google and other tech firms have released a dataset containing thousands of deepfake videos to aid researchers working on detection techniques.

However, “detection, attribution, and response is challenging in scenarios where actors can anonymously generate and distribute credible fake content using proprietary training datasets,” said the researchers. “We as a community can and should help AI researchers, policy makers, and other stakeholders mitigate the harmful use of open source models.”

Check out Threatpost’s live Black Hat USA 2020 coverage, including news interviews, threat research updates and more, here.  
