Facebook plans to expand content-policing on its site, aiming to crack down on profiles and pages that it deems aimed at voter suppression ahead of the 2018 U.S. midterm elections.
Specifically, the social-media giant will penalize those that spread disinformation about voting requirements, either banning them outright or burying their posts, with a warning label, deep in the News Feed. It also said it will dedicate a team to fact-checking reports of violence or long lines at polling stations.
According to Jessica Leinwand, public policy manager at Facebook, this latest effort to reduce voter manipulation is designed to address new types of abuse, including claims that citizens can vote by text message, and statements about whether a vote will be counted (as an example, “If you voted in the primary, your vote in the general election won’t count”).
“We already prohibit offers to buy or sell votes as well as misrepresentations about the dates, locations, times and qualifications for casting a ballot. We have been removing this type of content since 2016,” she said in a posting Monday afternoon. “[We] are now banning misrepresentations about how to vote.”
To that end, Facebook has introduced a new reporting option for “incorrect voting info,” so members can flag voting information that seems to be misleading; it also has set up a dedicated reporting channel specifically for state election authorities who see misinformation efforts about their own infrastructure and policies.
It has also set up teams of third-party fact-checkers to review local reports of polling station issues (the aforementioned violence and long lines, but also events such as flooding or fire).
“Content that is rated false will be ranked lower in News Feed, and accompanied by additional information written by our fact-checkers (what we call, Related Articles) on the same subject,” Leinwand said.
Some content will be banned, as was the case in the Iranian influence operations that Facebook uncovered recently. However, “we don’t believe we should remove things from Facebook that are shared by authentic people if they don’t violate those community standards, even if they are false,” said Tessa Lyons, product manager for Facebook’s News Feed, speaking to Reuters.
Facebook, the world’s largest online social network with 1.5 billion daily users, has ramped up its efforts against political influence campaigns and misinformation since coming under fire for, as some say, letting its platform become a playground for fake news and for those looking to sway voter sentiment during the 2016 presidential election.
Last week, the company said it had removed more than 800 pages and accounts exhibiting “inauthentic behavior,” that is, misleading users about who they are and what they are doing.
During September’s Senate hearings on the subject, Sen. Ron Wyden (D-Ore.) asked company COO Sheryl Sandberg how Facebook would deal with efforts to suppress votes, such as the text-message voting hoax.
“There is a long history in this country of trying to suppress civil rights and voting rights, and that activity has no place on Facebook. Discriminatory advertising has no place on Facebook,” said Sandberg. She said at the time that the company uses a mix of automated systems and human moderators reviewing content – so clearly this expansion of site-policing marks a redoubling of efforts for the company.