Google Updates Ad Policies to Counter Influence Campaigns, Extortion


Starting Sept. 1, Google will crack down on misinformation, opaque advertiser identities, and ads used to amplify or circulate politically influential hacked material.

Google is making two changes to its advertising policies as the U.S. moves into the fall election season ahead of the presidential contest in November, in an attempt to thwart disinformation campaigns.

For one, Google is updating its Google Ads Misrepresentation Policy to prevent coordinated activity around politics, social issues or “matters of public concern,” by requiring advertisers to provide transparency about who they are. As of Sept. 1, this will mean big penalties for “concealing or misrepresenting your identity or other material details about yourself,” the internet giant said in a recent post, adding that violations will be considered “egregious.”

“If we find violations of this policy, we will suspend your Google Ads accounts upon detection and without prior warning, and you will not be allowed to advertise with us again,” according to the announcement.

Coordinated activity (i.e., the use of ads in cooperation with other sites or accounts to create viral content and an artificial echo chamber) has been seen as a hallmark of disinformation and fake-news influence campaigns. Social media platforms have cracked down on fake accounts ever since such operations were discovered to be widespread during the 2016 presidential election.

For instance, in June, Twitter took down three separate nation-sponsored influence operations, attributed to the People’s Republic of China (PRC), Russia and Turkey. Collectively the operations consisted of 32,242 bogus or bot accounts generating fake content, and the various amplifier accounts that retweeted it.

Advertising, however, hasn’t been in the content-policing crosshairs in the same way as content accounts on social media platforms – something that Google is now correcting.

“The changes Google is implementing around misrepresentation are timely as we come up to an election period,” Brandon Hoffman, CISO at Netenrich, told Threatpost. “Certainly nobody can deny the power of the advertising machine for getting an agenda out there. The manipulation that can be achieved with such advertising systems can be considered tantamount to a cybersecurity issue. Putting policy measures in place and making them known well in advance is a positive gesture in the attempt to stem the tide of misinformation that is almost certain to come our way over the coming months.”

He added a caveat, however: “Unfortunately policy and the enforcement of policy is subject to the effectiveness of the controls put in place to identify the abuse. This draws a parallel to other cybersecurity issues we see, where controls are constantly being updated and enhanced yet the volume of security issues remains unabated.”

The second change, also taking effect Sept. 1, involves the launch of the Google Ads Hacked Political Materials Policy. The aim is to prevent hacked materials from circulating by prohibiting ads that market them – specifically within the context of politics. This can make politically motivated extortion or influence attempts less effective.

“Ads that directly facilitate or advertise access to hacked material related to political entities within scope of Google’s elections ads policies [are not allowed],” according to Google. “This applies to all protected material that was obtained through the unauthorized intrusion or access of a computer, computer network, or personal electronic device, even if distributed by a third party.”

Violations will draw a warning, followed by account suspension seven days later if the warning isn’t heeded.

“Note that discussion of or commentary on hacked political materials is allowed, provided that the ad or landing page does not provide or facilitate direct access to those materials,” according to Google.

“I speculate that Google is trying to prevent the use of political materials obtained by hacking with a strong takedown policy so that episodes such as the DNC hack and subsequent reporting are treated in a more fair and legitimate manner,” Fausto Oliveira, principal security architect at Acceptto, told Threatpost. “That, combined with a policy that attempts to dissuade third parties from misrepresenting their identity, is in my opinion a pre-emptive move ahead of the U.S. presidential elections. I believe that other media organizations should adopt the same standard, not only for the U.S. elections, so that they can help avoid the spread of misinformation, stolen information, fake news and trolling, with strong takedown policies to ensure that information is factual, legitimate and protected from internet trolls.”

