News Wrap: Deepfake CEO Voice Scam, Facebook Phone Data Exposed

From deepfakes to data exposures, the Threatpost team talks about the top security trends driving this week’s biggest news stories.

In this week’s news wrap for the week ended Sept. 6, the Threatpost team breaks down the biggest news, including:

  • Cybercrooks successfully fooling a company into a large wire transfer using an AI-powered deepfake of a chief executive’s voice (and Facebook, Microsoft and a number of universities joining forces to sponsor a contest promoting research and development to combat deepfakes).
  • A leaky server exposing phone numbers linked to the Facebook accounts of hundreds of millions of users, in the latest privacy gaffe for the social-media giant.
  • Facebook allowing its users to opt out of the Tag Suggestions feature, while at the same time attempting to help users better understand what the feature does.
  • The challenges behind patch management, and why 80 percent of enterprise applications have at least one unpatched vulnerability.

For the full podcast, see below. For direct download, click here.

Below find a full transcript of the Threatpost podcast news wrap for the week ended Sept. 6.

Lindsey O’Donnell: Hi, welcome back to the Threatpost news wrap podcast. You’ve got Lindsey O’Donnell here with Tom Spring and Tara Seals to talk about the biggest news of this past week. Thank you for joining us today to chat about the top security stories of the week.

Tom Spring: Hey Lindsey, happy to be here.

Tara Seals: Hey Lindsey, thanks for having us.

Lindsey: Thanks for coming on. We’ve had a bit of a short week with Labor Day on Monday here in the U.S., but that hasn’t really stopped the stream of news coming through this week. One story that really caught my eye was an incident that was disclosed earlier this week, where a cybercrook used deepfake audio to swindle a company out of hundreds of thousands of dollars. Did you guys see that?

Tara: Yeah, that was just crazy. You know, we hear a lot about deepfakes, which obviously (or maybe not so obviously) are when you create fake audio or fake video that’s extremely convincing, of a public figure or someone like a company CEO. And you’ve heard a lot about that type of thing being used in [fake] news, like influence campaigns and things like that. But in this case, it was actually used to scam a ton of money out of this company, right?

Lindsey: What happened was, the cybercriminals were able to create an almost perfect impersonation of a chief executive’s voice, I assume using artificial intelligence; that wasn’t confirmed, but that’s the speculation at this point. They then used that audio to call a top-ranking executive within the company who was in a different office. I think that executive was located in London, whereas the CEO would have been located in Germany, and they used this voice impersonation to convince him to transfer $243,000 to their bank account. And they used all kinds of classic phishing techniques, such as “this is urgent, you need to do this right away, you’ll be reimbursed.” Of course, he wasn’t reimbursed, and after the incident happened, they made off with this large sum of money. So it really gives you food for thought in terms of the impact of deepfakes, and there’s a lot of speculation about the malicious implications of deepfakes at this point.

Tara: Well, it’s so interesting to me that these types of things are starting to be seen out there, actually used in campaigns. I always used to think it might be kind of simple to spot these; obviously there’s going to be something there that doesn’t quite jibe. But apparently they’re very, very close to the real thing, right?

Lindsey: Yeah, it’s a really good question. If you look back even five years ago, there was that classic instance where you could paste someone’s face on someone else’s body, and that was obviously something they could do five years ago. But now I think it’s so much harder, because they’re using artificial intelligence and machine learning and all these technologies to really better understand the images, and in this case the audio. So it’s so much harder to discover, and at this point, figuring out exactly how to recognize something as a deepfake is what a lot of companies are trying to work out. I think it was actually today that Facebook, together with Microsoft and a number of universities, announced a big public challenge; I think they’re awarding up to $10 million in grants and awards to try to make it easier to spot this type of fake content. So it’s something that companies are really trying to make a move on in terms of detection. At this point it’s a top concern for a lot of people. Tara, as you mentioned, there’s a lot of concern around how deepfakes will be used in the future for misinformation in politics, but even at a more mundane level, spam callers could use fake audio to impersonate family members and obtain personal information about people, or criminals could use it to gain entry to high-security areas. So I think it’s really a top concern and a top threat at this point that people are on the lookout for.

Tom: Last year, not this year, at Black Hat, I went to a session with two researchers from Salesforce, and they did a session on voice authentication being broken. Two things come to mind that are really interesting about their talk and this news. One is, we’ve heard a lot about theoretical types of attacks and abuse using artificial intelligence, and your article is a really salient, real example of how the theoretical is now a real attack vector. Now, thinking outside the context of the guy faking the CEO’s voice: the Salesforce researchers assert voice authentication is an extremely sensitive area, ripe for hackers to use digitally manipulated voices to gain access to personal information, especially when you consider all the smart speakers and a lot of artificial intelligence. These voice interfaces allow you to check email, check texts and call people on your private contact list. So it is easy to see how high the stakes have become here; they are certainly a lot higher now than they were even two years ago at Black Hat.

Lindsey: Right. Yeah, Tom, I remember that article you wrote about the drawbacks of voice authentication and how it’s scarily easy to manipulate voices in this manner. And I remember the researchers in that Black Hat session said they were able to merely grab audio off of YouTube, train a model on it and generate new speech, and with that text-to-speech algorithm they were able to, like you say, break into voice-authentication systems or smart speakers, or carry out other types of malicious action. So I think it’s becoming so easy at this point.
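To make the voice-authentication point concrete, here’s a minimal sketch of how threshold-based speaker verification typically works, and why a cloned voice defeats it: if synthetic audio produces an embedding close enough to the enrolled voiceprint, the system accepts it. The embedding size, threshold and random vectors below are assumptions for illustration, not any vendor’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, probe: np.ndarray,
                   threshold: float = 0.8) -> bool:
    # Accept the caller if the probe embedding is close enough to the
    # enrolled voiceprint. A voice cloned from public audio only has
    # to clear this same threshold to be accepted.
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical embeddings; a real system derives these from a
# speaker-encoder model run over enrollment and login audio.
rng = np.random.default_rng(0)
enrolled_voiceprint = rng.random(256)
cloned_probe = enrolled_voiceprint + rng.normal(0, 0.05, 256)
print(verify_speaker(enrolled_voiceprint, cloned_probe))  # True
```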

Tom: When you’re thinking about machine learning and how a computer understands and hears voices, it gets even more nutty. Google has done some research on this: they were able to create audible commands embedded in a song. I mean, you couldn’t hear anybody’s voice, but what they were able to do was use just the right pitch, and play just the right tones and signals, to actually get an (Android phone) to launch apps. And it was so creepy, because when they played the audio, all you heard was something like Beethoven’s Fifth, and what it actually did was open up this person’s phone and email their contact list to somebody. It is some crazy stuff that’s going on with machine learning and artificial intelligence. And the CEO story is pretty much the tip of the iceberg; we’re going to start seeing the theoretical become real-life attacks.
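Loosely, hidden-command attacks like the one Tom describes come from adversarial optimization: perturb a piece of carrier audio just enough that a speech model hears a target command while a human still hears the song. The sketch below uses a toy linear model as a stand-in for a real speech-recognition network, so everything here is illustrative rather than Google’s actual method.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a differentiable speech-recognition model; the
# published attacks target real ASR networks.
torch.manual_seed(0)
W = torch.randn(16000, 10)  # maps 1s of 16-kHz audio to 10 command logits

def asr_logits(audio: torch.Tensor) -> torch.Tensor:
    return audio @ W

song = torch.randn(16000)    # the carrier audio (the "song")
target = torch.tensor([3])   # hypothetical index of "email my contacts"
delta = torch.zeros_like(song, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    logits = asr_logits(song + delta).unsqueeze(0)
    # Push the model toward the target command while penalizing loud
    # perturbations, so a listener still mostly hears the song.
    loss = F.cross_entropy(logits, target) + 0.1 * delta.norm()
    loss.backward()
    opt.step()

print(asr_logits(song + delta).argmax().item())  # should print 3
```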

Lindsey: I know deepfakes are something we’ve been looking at, and we’ll continue to look at them in the coming months as more of these come into play in real-life situations. But some other really big news this week was this Facebook news, which actually broke late Wednesday night, early Thursday morning. What was it, a leaky server exposing millions of phone numbers from Facebook users?

Tom: I believe they figured out that it was an Amazon storage bucket that left 419 million Facebook user phone numbers open for anyone to grab. So yeah, that’s more egg on Facebook’s face.

Lindsey: Yeah. It just seems like every single week we’re dealing with more privacy and security issues with Facebook.

Tom: Well, it all goes back to Cambridge Analytica, right? It’s a huge rat’s nest of problems. I’ve got to say, these giant tech companies are facing more and more scrutiny; there are continued hearings going on in Washington, and antitrust investigations. When you own billions of names and you are that big, one tiny little mistake can have an enormous ripple effect across the globe. I’m not getting political, and I’m not trying to advocate anything here. But when you’re just as damn big as Facebook, the consequences are so much higher. It boggles the mind, whether it be Google, Microsoft or Facebook: what will we be reading about in two years about a leak that happened this week? I guess we’ll find out in two years what it was.

Tara: And actually, I think that can be a new Threatpost content area: “Leak of the week.”

Lindsey: Yeah, and I’d be willing to bet that it might be our biometrics, which was another issue Facebook came across this week. They’ve been part of an ongoing lawsuit over how they collect facial-recognition data and biometrics from their users, and that issue came to a head this week when they announced they would allow people to opt out of the Tag Suggestions feature on their platform. So that was an announcement they made this week, but it also goes to show that our facial-recognition data, biometrics and images are just another piece of data that they have.

Tara: Yeah. Lindsey, I had a quick question on that facial-recognition story, because I know you wrote it up. Where are they getting the facial-recognition scans from? Is it like when you unlock your iPhone or something and they somehow harvest that, or…?

Lindsey: So what they do is they have what’s called a face recognition template: each user gets a unique number that allows Facebook to analyze photos and videos of that person, so they’d need the photos and videos that are on your Facebook account. What they do is scan pictures of your face in your photos and in media like live video, analyze that and compare it to other photos and videos on Facebook. And what they were doing before is they would look at all your friends’ photos and videos, and if they were able to recognize your facial features in those photos or videos, they would suggest that you be tagged in them. So if you’ve ever posted a photo and seen “tag Tom Spring” or “tag Lindsey O’Donnell,” that was face recognition at work.

Tara: Got it. Yeah, that makes sense.
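Conceptually, the template matching Lindsey describes boils down to comparing face embeddings from new photos against stored per-user templates, and suggesting a tag when a face lands close enough to one. The embedding size, threshold and sample data below are assumptions for illustration; Facebook’s actual model and template format aren’t public.

```python
import numpy as np

# Hypothetical per-user templates: one embedding vector per enrolled
# user, standing in for Facebook's "face recognition template."
rng = np.random.default_rng(1)
templates = {
    "Tom Spring": rng.random(128),
    "Lindsey O'Donnell": rng.random(128),
}

def suggest_tags(detected_faces, threshold=0.6):
    """Match each detected face embedding against known templates and
    return tag suggestions for close-enough matches."""
    suggestions = []
    for face in detected_faces:
        for name, template in templates.items():
            if np.linalg.norm(face - template) < threshold:
                suggestions.append(name)
    return suggestions

# A face from a newly uploaded photo, near one stored template.
new_face = templates["Tom Spring"] + rng.normal(0, 0.01, 128)
print(suggest_tags([new_face]))  # ['Tom Spring']
```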

Lindsey: And what I’m interested in knowing is where they store that, and whether they’re utilizing it for any other purposes. One big part of facial recognition is consent, which is actually tied into the lawsuit against Facebook: how users can consent to this information being collected, stored and utilized as it is. That seems to be their objective here. By allowing users to opt out of the program and basically tell Facebook, “do not tag me anymore, or suggest tagging me in photos,” that appears to be their form of consent. But I’d be curious whether they’re looking into using the facial-recognition technology in other ways, and whether this is their way to bring consent to those uses as well. Time will tell.

Tom: Google Photos has a feature, and I was just in Google Photos earlier this week, and I noticed that at the top they have little buckets of faces. They’ll have a little folder of my face, and it’ll show me all the pictures where my face is. I thought that made sense. It had my friends’ (faces), it had images of my dog, and it also had some of my family members. But then I noticed all of these strangers’ faces started showing up in these buckets, and I couldn’t figure out what happened. When I looked at the pictures, I realized I had gone to a ball game at Fenway Park and taken some crowd shots. I was taking lots of photographs, and I had the same stranger in the background of four or five pictures that I took. It really freaked me out that I was tracking this one person through Fenway Park, based on facial-recognition data that I was generating through Google Photos. It was creepy.

Lindsey: Yeah, I know Apple has the People feature for your photos. So yeah, like you, Tom, it’ll collect all the photos of my dog or something and put them all together.

Tom: But it’s also super helpful. If I’m looking for pictures of my dog, I can just type in “dog” or something, and then, boom, you know?

Lindsey: Yeah. Oh, Tom, you had a really cool feature that went live on Tuesday about patch management, and this has been a big topic for you, with you leading a webinar on it last month. What were some of the main takeaways of your feature?

Tom: The feature was a great opportunity for me to synthesize a lot of the information I’ve been learning when it comes to patch management. Just wrapping your head around the enormous task of patch management is pretty daunting. I mean, the statistics alone: 80 percent of enterprise applications have at least one unpatched vulnerability, according to Veracode. Another good and exclusive data point I was able to report: the average time to patch a vulnerability was 63 days in 2017, and in 2018 it took 81 days on average. Those exclusive numbers come from Edgescan.

Tom: I think the bottom line here is that this problem is not necessarily getting any easier to solve. You’ve got a lot of bug-bounty programs now that are generating a whole lot more vulnerability reports; they’re not creating vulnerabilities, they’re identifying them, and they’re creating a lot more work for people to patch. And you’ve got that flood of new gear that’s hitting the market, IoT, etc. Add to this a more complicated infrastructure, and it creates a very heavy load on patching (teams) and on companies to patch everything that they’ve got.

I talked to a lot of experts about how to build a better patch process, how they make sense of CVSS scores, and how they gravitate toward the vulnerabilities with the highest ratings to make sure they patch those as quickly as possible. I thought one person from Flashpoint summed it up pretty well. They said: “The CVSS score assigned to a vulnerability reflects severity, not risk.” The point: patching is relative to what your company is doing and the risk associated with that.
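As a rough illustration of that severity-versus-risk distinction, here’s a minimal sketch that weights a CVSS base score by asset context. The weighting factors are invented for illustration; real programs define their own risk models.

```python
def patch_priority(cvss_base: float, internet_facing: bool,
                   handles_sensitive_data: bool,
                   exploit_in_the_wild: bool) -> float:
    """Weight raw CVSS severity by asset-specific context."""
    risk = cvss_base
    risk *= 1.5 if internet_facing else 0.8
    risk *= 1.3 if handles_sensitive_data else 1.0
    risk *= 2.0 if exploit_in_the_wild else 1.0
    return risk

# A critical bug on an isolated internal box can rank below a
# high-severity bug on an exposed server with a public exploit.
print(patch_priority(9.8, False, False, False))  # 7.84
print(patch_priority(7.5, True, True, True))     # 29.25
```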

A couple of other highlights: I discuss a lot about automating the patch process and how important that is, so you’re not constantly patching one-offs. You need to have a system that takes care of these vulnerabilities, and that takes patch management into something more like security management, which takes it to the next level of DevSecOps. That approach to addressing security lends itself to companies taking a more defense-in-depth approach to identifying vulnerabilities and fixing them. That speaks to getting the DevOps and security teams together, working together to constantly create better security within applications that are heavily dependent on different libraries and different code from a lot of different open-source projects.
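To ground the automation point, here’s a minimal sketch of the kind of dependency gate a DevSecOps pipeline might run on every build, failing when a pinned package matches a known advisory. The advisory table and package pins are hypothetical stand-ins; a real pipeline would query a live vulnerability database.

```python
# Hypothetical advisory table keyed by (package, version); a real
# pipeline would pull this from a vulnerability feed instead.
ADVISORIES = {
    ("requests", "2.19.0"): "CVE-2018-18074",
}

def check_dependencies(pinned: dict) -> list:
    """Return advisory IDs matching any pinned package/version pair."""
    return [cve for (name, version), cve in ADVISORIES.items()
            if pinned.get(name) == version]

findings = check_dependencies({"requests": "2.19.0", "flask": "1.1.2"})
if findings:
    raise SystemExit(f"Build failed; vulnerable dependencies: {findings}")
```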

Lindsey: Well, Tom and Tara, thanks for coming on to talk about the biggest stories of the week. There was a lot there this week, and I’m sure we’ll have even more next week with Patch Tuesday (speaking of patch management) and all these other events coming up.

Tara: Patch Tuesday is next week. That’s right. Ah, man.

Tom: It sends chills up my spine.

Tara: Thanks so much, Lindsey. It was great talking to you guys.

Lindsey: Yeah, have a great weekend. And catch us next week on the Threatpost podcast.
