California Bans Deepfakes in Elections, Porn

A pair of laws provides recourse for victims of deepfake technology.

California has passed a pair of laws that ban the use of deepfake technology in election-related political speech and in non-consensual adult content.

Deepfakes are manipulated images and recordings, created with artificial-intelligence technology, that make it appear as if an individual is doing or saying something that they never actually did or said. A person’s head can be inserted onto someone else’s body, whole images can be composited into existing photos, a person’s voice can be convincingly replicated, or a person’s mouth can be morphed in a video to make it appear as though they’re saying words of the deepfake creator’s choosing. It’s similar to “Photoshopping” in that the content is realistic enough that it’s hard to tell it has been doctored.

“What these deep fakes really do is attack the trust that we have in our perceptions. And so the old adage of ‘seeing is believing’ is really no longer something that we can abide by now,” said Sam Small, chief security officer at ZeroFOX, during a recent Threatpost webinar.

To combat the malicious use of deepfake technology, Governor Gavin Newsom has signed Assembly Bill 730, a measure aimed at shielding voters from misinformation about candidates ahead of the 2020 election; and Assembly Bill 602, which enacts civil penalties for creating and disclosing nonconsensual, digitally produced sexually explicit material.

On the political front, the law prohibits the use of deepfakes in influence campaigns. It bans “a person, committee or entity, within 60 days of an election, from distributing, with actual malice, materially deceptive audio or visual media of a candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

Deepfakes are exempt from the law if they include a disclosure stating that the content has been manipulated.

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality,” said assembly member Marc Berman, chair of the California Assembly Elections and Redistricting Committee, in a statement. “In the context of elections, the ability to attribute speech or conduct to a candidate that is false – that never happened – makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters….A.B. 730 will help deter nefarious deepfakes by holding accountable the bad actors who intentionally attempt to injure a candidate’s reputation or deceive voters into believing a candidate said or did something they never said or did.”

When it comes to adult content, A.B. 602 requires that consent be obtained before depicting a person in digitally produced sexually explicit material, and allows that consent to be rescinded within three business days unless certain conditions are met. The bill also provides victims with a set of remedies in civil court.

“When deepfake technology is abused to create sexually explicit material without someone’s permission, it can cause irreparable harm to a victim’s reputation, job prospects, personal relationships and mental health,” said Berman. “Women are disproportionately being harassed and humiliated when their photos are scraped from the internet and seamlessly grafted into pornographic content. A.B. 602 will ensure that Californians are protected from this abuse.”

The laws follow the passage of a similar measure in Texas, which makes it a misdemeanor, punishable by up to a year in jail and/or a fine of up to $4,000, to create and distribute a deepfake video within 30 days of an election with the intent to injure a candidate or to influence the result of an election.

It should be noted that deepfake attacks are not theoretical. Hackers recently used an AI-powered voice spoof to trick a CEO into thinking he was speaking with his boss. Complete with a German accent, the faked voice requested that $243,000 be sent to a Hungarian supplier; the request was carried out.

“Deepfakes are a greater threat than campaigns targeting our election systems,” Chris Olson, CEO of The Media Trust, told Threatpost. “They are designed to undermine some key pillars of our democracy—justice and the press—as well as public health, by spreading false information about vaccination, and social cohesion, by amplifying any differences. The way to overcome these threats is not to apply more censorship, but to investigate and address the mechanism that spreads the messages.”

He added, “By addressing the vulnerabilities of the mechanism instead of the accuracy of the message, we can more effectively contain and, ultimately, cut misinformation campaigns at the knees. That mechanism is the digital supply chain that we get our news from and that nation states and adversaries are exploiting. Every player along the supply chain must scrutinize and gather intelligence on where those messages are coming from and what’s enabling it, then share and use that intelligence with trusted partners and third parties to cut off the problem’s source. Their business depends on the integrity of that supply chain, and so does our democracy.”

Not everyone backed the passage of the California laws. Citing free-speech concerns, the American Civil Liberties Union, for instance, urged Newsom to veto the measures. “Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation and repression of free speech,” Kevin Baker, the organization’s legislative director, said in a letter to the governor.

Mark Powers, vice president of the California Broadcasters Association, meanwhile, said the bill would be difficult for radio and television broadcasters to comply with. “By passing this bill, you put your broadcasters in extreme jeopardy,” he told the Senate Elections and Constitutional Amendments Committee. “Broadcasters may choose not to accept any ads about candidates rather than risk liability for disseminating deepfakes, if verifying each ad’s authenticity proves too costly.”
