Facebook is banning deepfake videos: fabricated clips produced with artificial intelligence (AI)-based human-image synthesis that make people appear to say or do things they never did.
Over the past year, security experts and lawmakers have voiced concerns about malicious deepfake applications, particularly as a vessel for disinformation on social-media platforms ahead of the 2020 elections. Facebook on Monday said it will remove misleading manipulated videos from its platform. However, it will not crack down on all doctored content, such as "satire" videos, as the company attempts to walk the fine line between free speech and misinformation.
"Today we want to describe how we are addressing both deepfakes and all types of manipulated media," said Monika Bickert, Facebook's vice president of global policy management, in a Monday evening post. "Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts."
Facebook said it will remove videos that meet the following criteria:
- Videos that have been edited or synthesized in ways that aren't apparent to an average person, and that would likely mislead viewers into thinking a subject of the video said words they did not actually say.
- Videos that are the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear authentic.
The policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words, Bickert said. Facebook also did not clarify what the “ways” are that wouldn’t be obvious to “an average person.”
These loopholes point to the difficult process of weeding out deepfakes. Top deepfake artist Hao Li has warned that deepfake videos will become completely undetectable in 2020, presenting challenges for Facebook and other social-media platforms.
In an October interview with Threatpost, Li pointed out that already, fake pictures and news have spread out of control on social-media platforms like Twitter and Facebook, and deepfakes are just more of the same.
“The question is not really detecting the deepfake, it is detecting the intention,” Li said. “I think that the right way to solve this problem is to detect the intention of the videos rather than if they have been manipulated or not. There are a lot of positive uses of the underlying technology, so it’s a question of whether the use case or intention of the deepfake are bad intentions. If it’s to spread disinformation that could cause harm, that’s something that needs to be looked into.”
Despite the difficulties of identifying deepfakes, social-media sites are recognizing the need to crack down on the manipulated, misleading videos.
Facebook, Microsoft and a number of universities joined forces in 2019 to sponsor a contest promoting research and development to combat deepfakes. Google and other tech firms, meanwhile, have released a dataset containing thousands of deepfake videos to aid researchers working on detection techniques.
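Detection research of this kind typically starts at the frame level: sample frames from a clip, score each one with a trained classifier, and average the scores. Below is a minimal, illustrative sketch of the sort of classifier these datasets are meant to train, assuming Python with PyTorch, torchvision and OpenCV. The ResNet-18 backbone, the sampling rate and the scoring threshold are assumptions for illustration only, not any platform's actual pipeline, and the model must be fine-tuned on labeled real/fake frames before its scores mean anything.

```python
# Illustrative frame-level deepfake classifier sketch.
# Assumes PyTorch, torchvision and OpenCV are installed; the backbone,
# sampling rate and threshold are hypothetical choices, not a real product.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained ResNet-18 with a fresh one-logit head. This head is
# untrained here; it would need fine-tuning on a labeled deepfake dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fake_probability(video_path: str, every_nth: int = 30) -> float:
    """Average per-frame fake probability, sampling every Nth frame."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical usage: flag a clip if the mean frame score crosses 0.5.
# print(fake_probability("clip.mp4"))
```

As Li's comments above suggest, even a well-trained version of such a classifier addresses only manipulation, not intent, which is part of why platforms are pairing detection with content policies.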
Reddit and Twitter have their own approaches to deepfakes; both directed Threatpost to their existing policies against spreading misinformation.
Twitter said that its policies work toward “governing election integrity, targeted attempts to harass or abuse, or any other Twitter Rules.”
On Reddit’s end, “Reddit’s site-wide policies prohibit content that impersonates someone in a misleading or deceptive manner, with exceptions for satire and parody pertaining to public figures,” a Reddit spokesperson told Threatpost. “We are always evaluating and evolving our policies and the tools we have in place to keep pace with technological realities.”
Facebook on Monday also said it has partnered with the news agency Reuters to create an online training course intended to help newsrooms better identify deepfakes and manipulated media.
“As these partnerships and our own insights evolve, so too will our policies toward manipulated media,” said Bickert. “In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.”