Disinformation Spurs a Thriving Industry as U.S. Election Looms

Threat actors are becoming increasingly sophisticated in launching disinformation campaigns – and staying under the radar to avoid detection by Facebook, Twitter and other platforms.

In the years since the 2016 U.S. Presidential Election, threat actors have pieced together a new playbook for sowing confusion and doubt within the American electorate. On Wednesday, researchers with Cisco Talos released a report [PDF] that details how a number of these new sophisticated campaigns work.

There are now several different groups associated with influence campaigns, said Nick Biasini with Cisco Talos. Instead of directly launching their own campaigns, many state-sponsored threat groups are working with independent, third-party entities – legitimate, private digital marketing companies – to engage in global influence operations, he said.


“Unfortunately, what we’re starting to see is there are companies that are popping up that are offering this as a service and we’re already starting to see it spread and become more widely used than it was in the past,” said Biasini.

The prospect of state-sponsored actors working with these third-party companies is dangerous because it allows them to avoid detection, said Biasini.

“In most of these campaigns the true actors behind the disinformation don’t want to be exposed. If they can use a company to do it, it allows them to abstract themselves further,” he said.

Biasini talked more about how threat actors are changing the game in disinformation campaigns – and how social media companies are stepping up to the challenge of defending against misinformation – during this week’s Threatpost podcast.

The 2020 Presidential Election is the topic of a recent Threatpost feature, “Shoring Up the 2020 Election: Secure Vote Tallies Aren’t the Problem,” and the focus of a Black Hat 2020 keynote address earlier this month by Renée DiResta, research manager at the Stanford Internet Observatory.

Listen to the full podcast below or download direct here.

Also, check out our podcast microsite, where we go beyond the headlines on the latest news.

Lindsey O’Donnell-Welch: Welcome back to the Threatpost podcast. This is your host, Lindsey O’Donnell Welch. I am a senior editor with Threatpost. And I’m joined today by Nick Biasini with Cisco Talos. Nick, thank you so much for joining us today. How are you doing?

Nick Biasini: I’m doing well. Thank you for having me. Looking forward to the discussion today.

LO: Should be a good one, Nick. We’re going to be talking about election security: Cisco Talos released a new report doing a deep dive into the topic. This has been a very hot topic lately, and it’s definitely been warranted with the 2020 U.S. presidential election coming up in November. Over the past month, at Black Hat and elsewhere, there’s been a ton of talk and worry about everything from the integrity of voting machines to the expansion of mail-in voting due to COVID. And on top of that, we have these historical worries around the attack on the Democratic National Committee in 2016, and misinformation and influence campaigns in the 2016 election. I think that’s all really coming to a head this month and next month. So amidst all this, Cisco Talos on Wednesday came out with the results of, I believe, a four-year investigation into election security. Is that right, Nick?

NB: So there are a couple of components to this. We’ve been looking at election security now for about four years, and it’s been a long process with a lot of moving parts. We released the initial paper from that a few weeks ago, and what we’re releasing on Wednesday is more of a deep dive specifically into disinformation: looking at how disinformation was used in the past – the techniques, the tools, that type of stuff – and looking forward a little bit to what we can expect in the upcoming election and in the near term.

LO: Right, and you guys did a really deep dive, focusing in on the infrastructure behind these disinformation campaigns, how bad actors are evolving their tool sets and techniques, and what this means for the upcoming election. So can you tell us, just from a broad level, what you specifically looked at in the report and what the top takeaways were as they apply to the upcoming 2020 election?

NB: So what we did is we went back and looked retrospectively at the campaigns that have been exposed in the past, the ones that have been talked about, and analyzed how those campaigns operated: what tools were being used, what the infrastructure footprint looked like, what the use of shell companies was, and various other aspects of the campaigns that they were running. We learned a lot. One of the most interesting things for me was that there is just a lot of open-source tooling out there to do social media interaction, things that could be leveraged by disinformation actors. I was really blown away, not only by the ability to create bots and send and amplify content, but even more by the efficacy tracking, where people are doing things like evaluating campaign effectiveness and trying to determine where they want to go in the future. That was really interesting to me. And one of the other big takeaways is that this is becoming an industry. Unfortunately, what we’re starting to see is there are companies popping up that are offering this as a service, and we’re already starting to see it spread and become more widely used than it was in the past. And that’s probably one of the more concerning things. You know, our focus has been primarily around big elections like the U.S. presidential election that’s coming up. But what happens when disinformation begins to make its way into more everyday occurrences, and you start seeing other entities participating in disinformation? That ability, and the number of eyes looking at the content, will make for interesting challenges that we’ll have to address in the future.
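[To make the efficacy-tracking idea concrete, here is a minimal, hypothetical Python sketch of how a campaign operator might rank posts by engagement. None of this comes from the Talos report; every name and number is illustrative.]

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    impressions: int
    shares: int
    replies: int
    likes: int

def engagement_rate(post: Post) -> float:
    """Interactions per impression: a crude measure of what 'worked'."""
    if post.impressions == 0:
        return 0.0
    return (post.shares + post.replies + post.likes) / post.impressions

def rank_campaign(posts: List[Post]) -> List[Post]:
    """Sort posts so an operator can see which content gained traction."""
    return sorted(posts, key=engagement_rate, reverse=True)

posts = [
    Post("a1", impressions=12_000, shares=40, replies=15, likes=310),
    Post("a2", impressions=900, shares=85, replies=60, likes=200),
]
for p in rank_campaign(posts):
    print(p.post_id, f"{engagement_rate(p):.3f}")
```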

LO: Right. Now, kind of on the heels of that, can you talk a little bit more about the different types of actors and organizations that were involved here, with some specific examples? Because I do think that is a really big takeaway, in terms of this becoming more of a broad-level offering in the threat landscape and what that really means for elections going forward, not just in the U.S. but around the world. So what were you seeing specifically there?

NB: So there are several different groups associated with this. You have groups that are associated with states doing the work that we’ve seen, like with the IRA investigation with Russia in 2016, and some more recent things. We’ve actually seen more of a shift away from direct interaction. So instead of a state-sponsored group working directly to try and influence a campaign, they’re starting to leverage these other agencies as well. Again, it allows them to abstract a layer, right? In most of these campaigns, the true actors behind the disinformation don’t really want to be exposed. So if they can use a company to do it, it allows them to abstract themselves further. That is really one of the big things that we’ve seen, and we have seen both independent groups and those that are state-linked operate. One of the key differentiators for me is that I was actually kind of surprised at how easy it is to launch a disinformation campaign. The challenge comes in actually making it effective at a large scale, right? You can spread disinformation, but being able to get it into mainstream media, to change people’s minds and sow discord, is a completely different process. And, to their credit, social media companies have done some things to introduce roadblocks for those groups now in 2020, as opposed to what we were dealing with in 2016.

LO: Yeah, I’m curious, from a threat actor’s perspective here. To that point, what are the main challenges that have been introduced by social media companies? What has happened to make this more difficult? Or would you say that it’s about the same, or easier? What are you seeing there?

NB: So the big thing is around the detection that they’re doing. They’ve gotten really good at detecting bot behavior. And some of that is common sense, right? If an account responded to, let’s say, a Facebook post or a tweet in less than two seconds, it’s unlikely that that’s a person: a person has to read and process the content, but a program doesn’t. And then some of the other things that they’ve done that have really made this more difficult involve the use of cell phones. So if you take Twitter, for instance, they’ve taken the step of requiring each account to have a phone number linked to it, and each phone can only have 10 accounts linked to it. And they’ve done a good job at limiting things like the online phone services where you can get text messages. So now you’re talking about having to physically have SIM cards, and for these campaigns, you’re talking about managing thousands of accounts. One of the things that we really found interesting was how you now need something like $10,000 in hardware to be able to manage all these SIM cards for this type of campaign, which is something that most average people who want to do this type of stuff are not going to be able to do. They’ve really raised the bar and forced people to invest in this type of attack if they want to launch these types of campaigns now.
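[The arithmetic behind the SIM-card hurdle is straightforward: at 10 accounts per phone number, a campaign of 5,000 accounts needs roughly 500 physical SIMs, plus the hardware to manage them. The timing heuristic is just as simple to sketch. The Python below is a hypothetical illustration, not any platform’s actual detection logic; the two-second threshold comes straight from Biasini’s example.]

```python
from datetime import datetime, timedelta

# A reply that lands faster than a human could plausibly read the post
# suggests automation. The threshold is Biasini's example figure; real
# platform detection combines many more signals than this.
HUMAN_MIN_RESPONSE = timedelta(seconds=2)

def looks_automated(post_time: datetime, reply_time: datetime) -> bool:
    return (reply_time - post_time) < HUMAN_MIN_RESPONSE

post = datetime(2020, 9, 16, 14, 0, 0)
reply = post + timedelta(seconds=1)        # replied one second later
print(looks_automated(post, reply))        # True: flag the account for review
```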

LO: Right, right. Well, one thing I also noticed in the report, though, is that while it might be harder now, based on these protections, for threat actors to launch these campaigns, or at least get them off the ground, it still – based on what you guys were looking at – seems incredibly easy to go onto Facebook and find groups of people with similar interests and similar political or ideological viewpoints. And for bad actors, I mean, this still is a pretty easy way to get that content in front of people from the get-go, right? I mean, you guys had conducted a case study where you looked for, I believe it was, Texas-based Facebook groups for Democratic presidential candidate Joe Biden. And it was pretty easy to conduct that search and find those people with that viewpoint. So do you think that that is a problem? Or is that just something that we’re going to have to deal with on social media platforms?

NB: Yeah, part of the problem is this concept of information silos that has kind of popped up. You mentioned Facebook groups; it is increasingly easy to find people to communicate with who have similar interests to you. So you can go onto Facebook and type in whatever thing you’re interested in and find a plethora of groups – not only groups in general, but groups local to you. And that’s doubly true for things like political beliefs, because there are a lot of strong opinions on both sides. So you’re right, it is still relatively easy to find those types of groups. And even beyond that, the bigger issue is that that was only searching what’s available publicly. There’s a whole other side to that: private groups that are far more difficult to identify. And we talked a little bit about the Facebook groups and the usage that was found in the Roger Stone takedown that happened on Facebook a while ago, where they showed how these Facebook groups can be used pretty effectively to focus a message and to be an initial springboard to start your disinformation. So you create your content, and then you go to a place like a Facebook group or some other small group of like-minded people with similar beliefs and use that as your initial point to start spreading, because it gives you a friendly environment to start spreading potential disinformation in.

LO: Right. And I believe that was one of the takeaways from the report too, right, that threat actors will increasingly use private social media groups to kind of leverage the policy exceptions that are built into these platforms for elected officials.

NB: Yeah, that’s one of the other things: this politician exemption is a little bit of a concern, because there are ways for elected officials to have a little more leeway in what they post and what they say than average citizens or regular people do. So there’s a potential there for abuse that could be leveraged in the future as well.

LO: Right.

NB: And then you have the whole concept of things like deepfakes, which is coming into the discussion more and more, and how that’s going to add another layer on top of this in the near future as well.

LO: Right, yeah, deepfakes. That’s going to be huge, probably more from a content side, though, for what it means in bringing in these new levels of misinformation and how to be able to block out that misinformation. So I’m very curious where that’s going to go over the next few years, for sure.

NB: Yeah, I mean, in a lot of ways it’s a study of human psychology, right? Understanding that, increasingly in today’s world, you have to be skeptical of what you read and make sure that you’re fact-checking the things that you’re reading. But there’s also this whole aspect with deepfakes where you may not be able to trust your eyes, and what you can see, as much as you used to. So it really is going to be an interesting thing, especially for people who aren’t familiar with this type of technology as it becomes more and more widely used.

LO: Right, right. Well, beyond deepfakes and some other tactics that we foresee being used increasingly, what are some of the other techniques and tactics that you saw over the past few years that have been increasingly adopted by the actors behind these campaigns? Whether it’s using paid or stolen accounts or other techniques to avoid detection, what are you seeing there?

NB: So it’s kind of interesting. As an information security professional, I go back to what I see on the threat landscape every day. And what I’m seeing is evolution from actors, which I see all the time: they’ve started to realize the ways that they get detected and are figuring out ways to do things outside of those typical methods. Take, for instance, when they’re creating a lot of these sock-puppet accounts, they have to use photos. One of the common things they would do initially is just grab photos off the internet as their stock photo. Well, we’ve gotten much better at doing reverse image searches and identifying that, so now they take the additional step of doing things like mirroring the image, or cropping it, or making subtle changes, so that those image matches don’t happen. Likewise, we’re seeing evolution in the ways that they’re doing their disinformation. In the past, we had them standing up their own websites, funneling the content into those and using them as a springboard to push outward. Now we’re seeing other publications being used, where fake reporters are building enough of a reputation to get published in various different types of publications directly. And then, more recently, there’s evidence that they may be starting to actively exploit news websites and add content directly onto the site. So as you can see, it again is this evolution that we see in the threat landscape all the time, just happening in a different space than we’re used to. This type of behavior is going to continue: as defenders develop ways to detect the behavior, the actors are going to figure out ways to bypass that detection. And that’s largely what we’re seeing.
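[To illustrate why a simple mirror defeats naive reverse-image matching, here is a short, hypothetical Python sketch using the open-source Pillow and imagehash libraries. The file path is a placeholder; identical images produce a hash distance of 0, while a flipped copy typically lands far enough away that a plain hash lookup misses it.]

```python
from PIL import Image, ImageOps
import imagehash

original = Image.open("stock_photo.jpg")   # placeholder path
mirrored = ImageOps.mirror(original)       # horizontal flip, as Biasini describes

h_orig = imagehash.phash(original)         # perceptual hash of each version
h_mirr = imagehash.phash(mirrored)

# Hamming distance between the two hashes: 0 means identical,
# a large value means a naive hash lookup will not match.
print(f"hash distance: {h_orig - h_mirr}")
```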

LO: Right. And I’ve seen a couple of recent reports of threat actors, even beyond the U.S., doing these types of malicious behaviors in other countries as well. I believe I’ve seen a couple of instances where there were watering hole attacks and the like used against news organizations, which would then be used to spread disinformation. So I definitely agree that we’re seeing these threat actors really expand their, I guess, sophistication around these types of attacks. I was also curious: with everything going on with the pandemic and COVID, does that make any sort of impact on the threat landscape around misinformation and what it means for election security? Was that something you guys looked at at all?

NB: We did; we looked more specifically at disinformation around COVID, and we mentioned that briefly in the write-up that we’re releasing on Wednesday. There’s a more thorough blog coming in the near future that will delve into that a little deeper. But that was primarily around the disinformation work that China has been doing around COVID-19. So we are definitely seeing disinformation from all sorts of different countries around COVID, but I haven’t really seen a whole lot of overlap between that and, specifically, the election stuff that we’ve been working on.

LO: Right. Interesting. Well, something to keep an eye on. And I guess, before we wrap up, I wanted to ask you a little bit more about what we should be anticipating in terms of up-and-coming threats and how election security threats will continue to change.

NB: Well, in all honesty, we are in a much, much better place today than we were four years ago. The secretaries of state and the various organizations are far more aware of and prepared for what potentially will be coming, and are doing a much better job of trying to be proactive about it. It’s a very difficult problem, and it’s not one that will have an easy solution. And in all honesty, the thing that we should be expecting is for the tactics to completely change and evolve. So the things that we’ve seen in the past aren’t necessarily what we’re going to see in the future, and these adversaries are going to try new things to figure out ways to get disinformation out there. You need to be prepared and willing to, you know, take the time and do the work to think about what you’re reading, and validate what you’re reading, especially before sharing it. Because that’s where this problem comes from: people’s initial reaction, seeing something that they like or agree with and immediately deciding to disseminate it without actually fact-checking.

LO: Right. Yeah. And I know you had mentioned before what social media companies are doing to try to stop this, and the protections being put in place there. But I do agree that public awareness has definitely increased about fake content, or fake news, being spread on social media platforms. And in the research, you had mentioned, I think it was, the campaign called Secondary Infektion, a disinformation attempt that failed, in that the operation’s fake stories never gained traction. So I think, as you had mentioned, this is a good example of how there is increased public awareness about disinformation out there, especially after 2016.

NB: Yeah, Secondary Infektion is an interesting campaign. It’s unlike most of the other ones that we studied: they primarily used the online forum Reddit, and used single-use accounts instead of the aged, paid or stolen accounts. And it wasn’t nearly as effective. So it’s an interesting case study to look at, to see how they’re continuing to try different avenues, and not all of them are successful.

LO: Right, right. Well, Nick, thank you so much for coming on to the Threatpost podcast today to talk about some of these challenges with election security as it relates to misinformation. And I know that there is a lot to be looking out for in the next few months leading up to November.

NB: Absolutely. Thank you so much for having me.

LO: Great. And once again, this is Lindsey O’Donnell Welch with Threatpost talking to Nick Biasini with Cisco Talos. Be sure to catch us next week on the Threatpost podcast.

On Wed., Sept. 16 at 2 p.m. ET: Learn the secrets to running a successful bug bounty program. Register today for this FREE Threatpost webinar, “Five Essentials for Running a Successful Bug Bounty Program.” Hear from top bug bounty program experts how to juggle public versus private programs and how to navigate the tricky terrain of managing bug hunters, disclosure policies and budgets. Join us Wednesday, Sept. 16, 2-3 p.m. ET for this LIVE webinar.
