News Wrap: PoC Exploits, Cable Haunt and Joker Malware

Are publicly released PoC exploits good or bad? Why is the Joker malware giving Google a headache? The Threatpost team discusses all this and more in this week's news wrap.

This week's news wrap podcast breaks down the biggest Threatpost security stories of the week, including the wave of publicly released proof-of-concept exploits for newly disclosed flaws, the Cable Haunt vulnerability in cable modems and the Joker malware plaguing Google Play.

Listen to the full podcast below or download direct here.

Below is a lightly-edited transcript of this week’s podcast.

Lindsey O'Donnell-Welch: Welcome back to another Threatpost news wrap for the week ended January 17. And with all the Patch Tuesday craziness, it's been quite the week. So you've got the Threatpost team here to break down the top security stories of this past week, including myself, Lindsey O'Donnell-Welch, as well as Tom Spring and Tara Seals. Tom and Tara, how are you doing today?

Tom Spring: Hi Lindsey.

Tara Seals: Hello Lindsey, thanks for having us.

Lindsey: So, this past week we had a ton of patches on Patch Tuesday, from Intel to Oracle to Microsoft. But beyond all that, there was also a ton of proof-of-concept exploit code published this week, in tandem with a lot of the vulnerabilities that were disclosed. And that included, I'm sure you guys have heard all about it by now, the recently patched Microsoft crypto spoofing vulnerability that had been found and reported to Microsoft by the NSA, which really made a lot of big waves this week. And part of that was that two proof-of-concept exploits were disclosed after the vulnerability and patches were all released. So that was a big one. And then there was also proof-of-concept exploit code released for a remote code execution vulnerability in certain Citrix products. And then there was one other PoC exploit released for critical flaws in the Cisco DCNM, or Data Center Network Manager, tool.

So there were just a ton that were released this past week. And I thought that was kind of interesting, just in terms of the vulnerability disclosure process and how ethical it is to release these proof-of-concepts. And I'm sure there are both pros and cons to it. It's always nice to have the research out there. But on the other side of the coin, you know, publishing this exploit code can make it easier for bad actors to exploit the flaws if they want to.

Tom: Yeah, no, for sure. I mean, there were so many proof-of-concept exploits, it's crazy. And you missed one from last week, which was a big one: the exploit that fully breaks SHA-1, which Tara wrote about, and which lowers the bar for attack. We were talking about this before we started the podcast, and it's such a hot-button issue. I can see the argument that it is important to publish these proof-of-concept attacks, even if we're sort of giving the bad guys a roadmap for how to attack the good guys. I know it's not something we're all on the same page on, but I do believe it shines a bright white light on a problem, and it shakes the security people who should be paying attention. It forces them to pay attention, and ideally it helps them better understand the vulnerabilities within their own platforms so they'll patch fast and not drag their feet.

Tara: For me, I can kind of see both sides a little bit. But, you know, when you have something like the Microsoft crypto bug, which, if I understand it correctly, can be exploited so that your code looks like native Microsoft Windows code, right, it can just slip past everything. And so you can see how that can be really dangerous. It was just disclosed this week, and then all of a sudden, before people have even had time to patch, you have this proof-of-concept out there.

So it just speeds up the amount of, or rather, it enables attackers to take advantage of that window between disclosure and patch time that, you know, everybody knows about. And this researcher that dropped that… it's a little bit of chest beating, you know, "look what I can do, I totally got this exploit out there before anybody else." And it does provide a blueprint for the bad guys, I feel.

On the other hand, the SHA-1 exploit that you were talking about: SHA-1 has been broken for years, and yet it still persists in legacy installations throughout the web. And even though, you know, Microsoft won't support machines that support it anymore, it still persists. And so the SHA-1 exploit wasn't necessarily pioneering any new approach to exploiting it; it was more that it was able to speed up the process and also lower the amount of money it would take, because collision attacks are really, really expensive, requiring so much processing power. So they were able to reduce the amount of processing power required, and therefore the cost, which puts it within reach of, you know, various types of cybercriminals. And so in that particular instance, I can kind of see it, because it's like, "Look, guys, these attackers are only going to keep getting better. So go ahead and patch the thing that you've had years to patch and haven't, or rather, update to SHA-2; you've had years to do that and you haven't." So, you know, I really do think it kind of depends on the circumstance. But yeah, dropping that exploit for the Microsoft bug feels really unethical and irresponsible.

Tom: Well, point, counter-point. I don't agree. I feel like the debate becomes a little bit more sharp and pointed when you're talking about zero-day exploits. But if you're a network manager, and you care about security, you're paying attention, and you may make some assumptions in terms of how your network is protected and can defend against attacks. And when you see a proof-of-concept attack, you can understand how you may have made assumptions that are incorrect about whether or not your network is secure. And again, I come back to what I always hear from the security people I speak to: The bad guys are always one step ahead of the good guys. And if we make the assumption that the bad guys are smarter, or more well-equipped, or are going to be faster to exploit these types of vulnerabilities, then why not better understand what these exploits are, how they work and how they're being exploited?

I take sort of the ACLU, free-speech view: More information is better. I do understand that there are situations, and there are times, in which that logic may fail me, but nonetheless that's how I feel. But it's not the exception that makes the rule.

Lindsey: I think part of the issue is that it really depends on the situation, and there's not really any sort of standard, like, you know, a 90-day disclosure policy; it's kind of the wild west out there right now for the security research community at a broad level. And for instance, look at the exploit code that was released for that Citrix remote code execution vulnerability. There's not even a patch for that yet; I think they're going to roll out patches later in January. So, in my opinion, the fact that someone decided to release a PoC exploit for that is going to make everyone's life a lot harder. But if there is a vulnerability that's been patched and out there for a while, then yes, I can see more of the side of, "here's the exploit, here's how it works, this is kind of interesting research that provides further research and education for the future." But it really does depend on the situation.

Tom: Well, yeah, I will say that in a situation when you have a zero day, or you have an unpatched vulnerability, I could make an argument that it is irresponsible, and, you know, the disclosure of a PoC might be better suited for a back channel as opposed to a chest-beating researcher who just wants some fame and maybe not so much fortune.

Lindsey: Yeah. Well, speaking of vulnerabilities, Tara, you had some really great reporting on a critical flaw that was disclosed this week. I think it was called Cable Haunt, and it was in multiple cable modems that are used by ISPs to provide broadband into homes. So what's going on there?

Tara: Yes, this was really interesting. So Cable Haunt is called that because it's haunting all of these modems across the world. Basically, most modems are built on reference architectures, right? So you have, basically, you know, reference software that's pioneered by a couple of different vendors. One of the big ones in the field is Broadcom, and so they have a reference architecture. It's akin to open-source code reuse, basically, in the cable world; however, it's not open source, but vendors can take it, license it and then use it as the basis of the firmware for these cable modems, right. So a lot of different cable modem vendors, like Arris and Technicolor and some of the other, you know, well-known names in that particular side of the tech industry, have taken this Broadcom reference architecture. And unfortunately, there's a big giant bug lurking around in there. And so every modem that has been built on that reference architecture, in the hundreds of millions worldwide, possibly topping a billion, is at risk from the vulnerability.

Lindsey: So how did like Charter and Comcast and some of the other ones respond? Did you reach out to them?

Tara: Yeah, I did. And, you know, it's interesting, because the original research was just focused on European ISPs. And so researchers were saying that now the ISPs are frantically trying to address this, and they're working with their cable modem vendors to see what they can do about patching. And I did go try to see what the exposure was to us in the U.S., and all of the cable companies that I reached out to, with the exception of Comcast, got back to me. Some of them did not want to go on the record, so I couldn't include their comments, but let's just say that all of them are aware of it. They're a little freaked out about it, from what I can tell, and they're working with their vendors to try to scramble and find a patch for it before attacks start to appear in the wild. And that's the point, too: So far there haven't been any attacks in the wild that anybody knows about. So there's still a window here to kind of get ahead of that.

Lindsey: What would an attacker be able to do if they could successfully exploit the vulnerabilities? Is it just being able to take full control over the modem, or what would they be able to do?

Tara: Yeah, so that's a good point, I probably should have mentioned that before. It's not a trivial attack; it definitely takes some sophistication to make this happen. But, you know, if successfully exploited, an attacker would be able to get full control over a modem inside the home, which, you know, is responsible for setting up the Wi-Fi network, right. So once you're in that modem, then you can see all of the traffic that flows back and forth. You can see all the traffic to the different devices that are attached to the Wi-Fi network. You can pivot and drop malware on laptops and other computers that are attached to the Wi-Fi network. You could get ahold of IoT devices, like Nest thermostats and Ring doorbells and things like that, that are attached to the Wi-Fi network and enslave them to a DDoS bot. I mean, you can wreak all kinds of havoc, basically. And then you can also replace firmware. So, for instance, and this is purely speculative, you could provide rogue video feeds into the home if you wanted to, you know, to the TV and things like that. So it can be, you know, pretty havoc-wreaking should anybody decide to mount an attack.

Lindsey: Well, that's really interesting. And I thought it was interesting, too, that this all kind of originated in the reference software that was written by Broadcom, but then that was copied by all these other cable modem manufacturers and used in their devices' firmware, so it's almost like it spiraled a little bit in terms of that.

Tara: Yeah, and we've seen that before with code reuse, particularly when it comes to open source projects. You know, you have a distribution, or you have an open source project, and then all of a sudden all of the distributions that use it are suddenly vulnerable. So this is akin to that; it's a similar situation.

Lindsey: Yeah. And you had another interesting story, too, about the Joker malware, which is a great name for a malware. But Joker was found in, like, a ton of Android apps that were on Google Play. What was kind of the research behind that? Were they focused more on how Joker was getting onto Google Play, or how it was proliferating across the Android platform?

Tara: Yeah, so this malware, it's basically a billing fraud malware. It pretends to be a legitimate Android application, right? So if you download it and put it on your phone, in the background it signs you up for premium SMS services, or it starts charging things to your phone bill, etc., so it's a fraud app, basically. But what's interesting about it is that its operators are submitting everything, kitchen-sink style, to Google Play to try to work their way onto that platform. And, you know, Google has really strong defenses in place to weed out, you know, fraudulent apps and things like that. The Joker authors are just trying to stay one step ahead. And so they're using every obfuscation technique that they can think of. They're innovating all the time. And Google was saying that, you know, to date, since Joker appeared, which has only been, I think, like 18 months or something, they've removed 17,000 Android apps that have been infested with Joker. A lot of these apps were actually removed before any downloads happened, so that's a good thing. But basically these guys are all about the volume. Sometimes it can be, you know, up to 23, 24 different Joker apps submitted to Google Play in a day, trying to skirt in there. And so it's just a full-frontal offensive against Google Play. It's kind of impressive, just the sheer scale of this operation.

Tom: It sounds a little bit like it could be an automated process, or something along those lines. I mean, with that kind of volume, being able to pepper, you know, the Google Play defenses looking for weaknesses on such a regular basis, I wonder how automated it is, or whether or not every app is handcrafted to attempt to sneak by.

Tara: Yeah, I would assume there has to be some kind of automation behind it. Either that, or else they're just making so much money that they can afford to employ, like, a bank of developers that are just constantly there to tinker. But who knows? It's crazy, though.

Lindsey: Yeah. In terms of, you know, Joker being kind of billing fraud and being based on that, how is that similar or different from the "fleeceware apps" we also wrote about this week? Those essentially trick users into subscribing to a service in an app that could, you know, also be used for free, and they end up racking up tons of money from victims. So, is Joker similar to or different from that?

Tom: Well, the fleeceware apps we wrote about came in, like, two forms. One form was apps that were similar in nature and function to a recognizable app. So you were tricked into thinking you were downloading, and this is just an example, WhatsApp, but you were downloading something that looked like WhatsApp but wasn't WhatsApp. And then you didn't realize it until it was too late, and they sort of loaded you up with ads or did other nasty stuff. And then the other fleeceware was the free-trial kind, where they say that you're going to be able to enroll in a premium service, and you're not going to be charged for a week, or you're not going to be charged for a month. And then after that trial period, they just fleece you: They basically make it impossible for you to unsubscribe from the service. And even when you can unsubscribe from the service, they don't actually unsubscribe you, and you pay and pay and pay, and sometimes it's through the nose, like exorbitant fees, like $20 a week or something along those lines, for a service that you kind of, sort of, never wanted to begin with. Yeah, I mean, Google Play, it's the trade-off between having an open ecosystem or a walled garden such as iOS, you know?

Tara: So what's the remediation there? If you find that you're a victim of one of these fly-by-night fleeceware apps, how do you fix it, or can you?

Tom: I can't speak to the research on what the immediate remediation is, but I would assume that there are financial institutions that aren't going to support that type of billing and that type of shady behavior, especially from shady ads. I mean, it's one thing if you sign up for something, and I don't want to pick on anybody, but if you sign up for something that shows up in your mail from a recognizable brand name, and you get a free trial. I mean, you could call up your credit card company, you could call up whoever, wherever the money's coming out of your bank, and say, "I was ripped off." And I think that's probably the best way to get your money back: by making the case that you were snookered into a free trial that was deceptive from the get-go. I mean, I feel like that's the consumer-level solution. But, you know, in the fleeceware story that we posted, Google once again did the right thing and removed those apps.

Lindsey: Yeah, I mean, I've got to say, it's so easy for apps, even legitimate apps, to kind of slip something in there if you accidentally click on something. At least on iOS, apps will basically ask, do you want to make this add-on purchase or something, and then, you know, they'll scan your face using facial recognition or whatever. And it's just so easy to accidentally purchase something, or, if the app is being malicious, to launch that type of attack.

Tara: That’s a good point, Lindsey, it’s kind of scary.

Tom: You know, it gives me pause. I have two teenagers, and they ask me if they can buy apps. And I'm super careful about what I do, because my money's tied to these ecosystems. But I really get worried about my kids, you know, and whether or not they're going to get themselves into a situation that's going to cost me money.

Lindsey: That could be bad. I mean, you hear the stories about parents going on iTunes and seeing, you know, millions of dollars in purchases. Not to freak you out, Tom.

Tom: I’m pretty paranoid to begin with.

Lindsey: Well, anyways, I think there is one final story that we reported on this week. Actually, Tom, you reported on it. And it was Google setting a two-year deadline for dropping support for third-party tracking cookies in Chrome, which is a little more of a positive story there. So what was that all about?

Tom: Oh, yeah, well, we like good news stories. And Google, while they did not shine in the app department, what we like is the fact that they're getting rid of tracking cookies. Tracking cookies are the way that advertisers can track you down and kind of create these little digital dossiers on what your online activities and your habits are. And what they've said, and we'll have to wait and see, I mean, this is a story that plays out over the next two years, is that they're going to phase them out. And what they're going to introduce is their own new technology that they're trying to create an industry standard around, which is this Privacy Sandbox. I'm a little hesitant about industry standards pushed predominantly by one vendor, but, you know, God bless them if everybody wants to play ball. But this would still allow advertisers to target users; they would just target users in aggregate. So there would be ways in which users were identified, but not at the cookie level, more at the behavior level, and then an advertiser would be able to say, "I'm interested in reaching this type of person," and Google would say, "No problem. Let's reach these types of people." It's actually much more complicated than that; that's an oversimplification. It is an effort that's going to take two years to come to fruition. We'll be watching it carefully.

Lindsey: I love kind of hearing all these new ideas about trying to balance the advertising side online, but then also having greater user privacy too. And that also gives a lot more kind of choice and control for users about how their data is being used. So I love to see that. Anyways, great news wrap and lots of security stories on Threatpost this week. For all our listeners, be sure to visit our website and view any stories that we may have missed. There were definitely a lot with Patch Tuesday and everything else happening. So Tom and Tara, thanks again for coming on to the news wrap today.

Tara: Thank you.

Tom: Thanks so much, Lindsey.

Lindsey: And for all of our listeners, catch us next week on the Threatpost podcast.

 
