U.S. Elections True Test for Facebook’s Disinformation Crackdown

Facebook continues to address the challenges faced during the 2016 election.

As the U.S. midterm elections commence on Tuesday, all eyes are on Facebook and other social-media companies to see how they continue to crack down on misinformation and other political meddling efforts on their platforms.

Facebook, for its part, said on Monday evening that it had barred an additional 115 accounts (30 Facebook accounts and 85 Instagram accounts) that may have been engaging in what it called “coordinated inauthentic behavior” — that is, making moves to mislead users about who they are and what they are doing.

“We immediately blocked these accounts and are now investigating them in more detail,” said Nathaniel Gleicher, head of cybersecurity policy at Facebook, in a post. “Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate.”

Gleicher said that, with the elections this week, Facebook chose to go public with the action immediately after the company was contacted by U.S. law enforcement on Sunday.

Facebook and other social-media companies have spent the past two years – since the 2016 elections – grappling with how to combat political disinformation and other issues on their platforms.

Earlier in October, Facebook said it would expand content policing on the site by cracking down on accounts aimed at voter suppression and penalizing pages spreading political disinformation.

The social-media giant has held true to its word, removing hundreds of pages and accounts that it said had spread spam or disinformation.

In October, it said it had removed more than 800 pages and accounts collectively for inauthentic behavior. In July, Facebook said it had removed 32 pages from its platform that were involved in “coordinated” inauthentic behavior. And in August, it made a 652-page dent in a sizable alleged Iran-backed influence campaign that stretched back to 2017, with some pages in operation since 2013.

Twitter, for its part, said it has also been taking steps to increase the “integrity” of posts related to political issues.

“Over the last several months, we’ve taken significant steps to safeguard the integrity of conversations surrounding the U.S. elections by reducing the spread of disinformation, strengthening outreach to government stakeholders, and streamlining our enforcement processes,” Bridget Coyne, senior manager of public policy at Twitter, said in a post last week.

But some remain wary.

In a recently released blog post, Jonathan Albright of the Tow Center for Digital Journalism outlined a research project tracking the biggest issues that Facebook is currently facing – and found that the social-media giant’s interference challenges have grown since the 2016 presidential election.

Part of the problem, said Albright, is that “Facebook’s political ad transparency tools – and I mean all of them – offer no real basis for evaluation.”

He added, “After this undertaking, one impression that’s been gleaned is that it’s the scale of the problems, not the sum of the problems, that represents the greatest threat. The issues I’ve found on Facebook the past few months… involve patterns that have been on the radar of the company’s leadership and American politicians since the last election.”
