Threatpost editors break down the top news stories for the week ended Oct. 25. The biggest stories include:
- An unsecured NFC tag opening a door to trivial exploitation of robots that are used inside Japanese hotels.
- The FTC banning the sale of three apps – marketed to monitor children and employees – unless the developers can prove that the apps will be used for legitimate purposes.
- Developer interfaces were used by Security Research Labs researchers to turn digital home assistants into “smart spies.”
For the full podcast, listen below or download it directly. Below find a lightly-edited transcript of the Threatpost news wrap podcast.
Lindsey O’Donnell: Hey everyone, we’re back with the Threatpost News Wrap podcast for the week ended October 25. You’ve got Lindsey O’Donnell Welch here today with Tara Seals. Hey, Tara, how are you doing today?
Tara Seals: I’m good. How are you?
LO: Good, good. And today we’re going to be talking about the biggest news stories of this week that we wrote about. And we wrote about a lot from fingerprint sensor glitches in the Samsung Galaxy S10 to malicious apps being removed from the Apple App Store. But the one that really stuck out to me just from the get-go was something you wrote Tara, which is the bedside hotel robot hacks that occurred – that was kind of funny to read.
TS: Yeah, this was a fun story to write. But, you know, also terrifying, I have to say, because obviously the potential for voyeurism and being spied upon inside your hotel room is not a good thing. But they have these hotels in Japan – of course in Japan – that are entirely staffed by robots. And so when you go to check in, they have a robot front desk that does a facial-recognition type of thing to make sure that you’re the person who’s supposed to be there, who’s listed on the reservation. And then once you check into your room, you’ve got an in-room bedside robot that actually takes care of a lot of the concierge services. The bedside bot was found to have a pretty trivial bug – super easy to exploit – having to do with an unprotected near-field communication (NFC) tag that it contains. Somebody who is physically there can basically tap into that tag and install a remote streaming-service URL on the bot. So when they left the hotel room, they would be able to tap into that stream and see everything that goes on in the room via the hacked bot.
LO: Yeah, that’s definitely a little creepy. For sure.
TS: Yeah, and it’s great to have these sort of future-forward, interesting hotel experiences. The hotel that the researcher was actually able to hack – this hotel chain actually has 10 different locations throughout Japan – the specific one where he was staying was close to Disneyland Japan. So you know, it’s kind of a big tourist thing, because it’s kind of cool, “Hey, let’s stay at the robot hotel,” and they get a lot of traffic being adjacent to Disneyland. And yeah, so it has the potential to hit a lot of different people.
LO: I think one thing that stuck out to me was that the researcher said that they’re dropping the zero-day and they had notified the vendor 90 days ago – and the vendor didn’t care. So you know, the vendor is separate from the hotel here, right? I mean, what did the hotel do, and is the vendor continuing just not to take any action?
TS: Right, so this is a classic Internet of Things, IoT, lack of security-by-design issue. The vendor is a company called Tapia. And they replicated the researcher’s efforts, apparently, but they said that they feel that the risk of unauthorized access is low, which is contrary to everything that the researcher himself found. And also I talked to a couple of outside people who confirmed the same thing, so it’s a real exploit even though the vendor says that it’s not. So they haven’t done anything about it, which is, you know, kind of disturbing. However, the hotel chain itself has [implemented a mitigation] and apologized for any “discomfort,” as they called it, to hotel guests.
LO: Yeah, you know, it’s interesting, the story made me think of Black Hat this year – there was a really interesting session on IoT smart locks in a high-end luxury hotel in Europe. The researchers who held the session said that they were able to remotely unlock the locks because of vulnerabilities in them, and because they were IoT devices with inherent security issues, there was a kind of messy disclosure process that they needed to go through that was similar to this. And what was interesting was that the hotel ended up getting stuck in the middle of all this… and it just ended up being really messy for the hotel. So I don’t know if the takeaway here for hotels is, you know, be careful with these types of tech products, but the hotel chains almost always seem to be the ones getting hurt in these types of situations.
TS: Yeah, that’s a really good point. And it brings up the fact that this is sort of a supply-chain issue, right? And so from a shared-responsibility perspective, where does liability fall? Are hotels responsible for vetting all of their IoT devices that they decide to put in for guest convenience? I think a lot of people would say yes, they are. But again, that’s not their core business. And they probably don’t have anyone tasked on staff to perform that action. So it makes it really difficult and just brings up the fact that there are organizational and institutional issues going on that make IoT security very difficult and consumers unfortunately end up bearing the brunt of a lot of these things.
LO: As a consumer, I’m already cautious about having these types of IoT devices in a personal space. We actually wrote an article earlier this week, too, about the whole Echo and Google Home eavesdropping hack. And one of the big takeaways from that was that these devices shouldn’t be in private places — you know, obviously not in your bedroom, or even in boardrooms of companies and things like that. But when you’re visiting a hotel, and there are these types of connected devices … well, now I’m just going to be paranoid.
TS: One of my favorite researchers, Chris Morales at Vectra, told me, “I would just throw a towel over the head of that thing.” Just, you know, cut off its eyes, blindfold it.
LO: No, no, I would definitely do the same thing.
TS: So speaking of spying, and voyeurism and other unsavory things like that, you had a really interesting story about the FTC trying to crack down on stalkerware.
LO: Yeah, the story was really interesting. The FTC this week announced that it was barring the sale of three “stalker” apps until their developer could prove that they will be used legally. And this case was really interesting because it’s the first crackdown by the FTC on stalkerware, which, as I’m sure you know, is software that can be installed on devices to essentially stalk or spy on their owners. These three apps came from a company called Retina-X Studios, and I went to their website and checked it out. They provide software that they market as monitoring software for either employees or children. And so there were kind of two parts to the FTC ban. First of all, Retina-X went through a data breach twice, in 2017 or 2018 – they’re not sure. So there’s the security aspect there. But then there’s also the notion that this is essentially stalkerware, and they really have no way to show that it can be used legally, which I think brings up really important issues around this type of spyware. Because, you know, at this point, a lot of these types of spyware solutions are being marketed in the same way, which is “oh, you can use it to monitor your children or your employees,” but who’s to say that it’s not going to be used illegally?
TS: Yeah, that’s what I find kind of fascinating about this. Because they are being marketed to be legitimate, helpful services. So, you know, “parents, you want to keep track of your kids.” Even though I would argue that there are other ways to do that. For example if you have an iPhone family, then you can do a Find My iPhone thing to know where your kids are. But, you know, from an employer standpoint, I don’t know how I would feel if our employer decided to put an app to track our whereabouts on our phones. I guess, conceivably, there are legitimate uses for this. But clearly, there are unintended consequences, because people will see an opportunity to use it in more nefarious ways. So I think that it’s appropriate to take a look at that.
LO: And I think too you bring up a good point, that even for these quote-unquote legitimate purposes that it can be used for, which is monitoring employees, monitoring kids — in my opinion, that’s still completely inappropriate. And I just think it’s kind of creepy. Even for your own children, in my opinion, at least. But I’m not a mom yet, so I don’t know.
TS: I mean, it depends on the age of the kids. But I think that individuals, as a basic human right, should have some expectation of privacy. I mean, at the very least, they should know that these apps are on their phone, you know, they shouldn’t be installed surreptitiously.
LO: Right. And we know that the FTC, in order to uphold that legitimacy…is going to require that people who purchase the apps state that they’ll only use them to monitor those two specific types of people, children and employees – or another adult who has provided written consent. And then in addition, the apps must also now include an icon with the name of the app on the mobile device, so at least you can see that it’s [installed]. And then, in terms of the security breaches, the company also needs to implement new security measures, like third-party assessments, etc. So, I mean, I’d be curious, first of all, if that’s going to be enough to uphold this as a legitimate monitoring service as opposed to being used as illegal stalkerware – it sounds like it might, but I’m not sure. And second of all, I’d be curious if there are going to be other measures taken against other similar types of monitoring solutions that can be seen as stalkerware, because I do think that it is important that these types of measures be taken. There is just such a potential privacy fumble there, and I really think it’s important that other people recognize that this is an important measure that needs to be taken.
TS: No, I totally agree. And you know, what’s kind of scary, just to add a little bit of context here: Kaspersky actually put out a report earlier this month talking about instances of stalkerware that they’ve seen in the Android ecosystem specifically, and there’s just been a sharp rise. I mean, some of these percentages are a little bit scary. I’m looking at it right now, and it says that the first eight months of 2019 have seen a 373 percent increase in stalkerware detections versus the same period in 2018, which is just kind of incredible. And then also, same thing, a 35 percent increase in people who have run into it at least once – who are aware that they ran into it.
LO: Right, and you think of all the malicious uses this can be put to – whether it’s at a high level, like governments installing this on journalists’ or protesters’ mobile devices, or at a more personal level, for domestic abuse, or actual stalkers being able to use this type of software. So definitely a lot of malicious purposes out there. And it’s very disturbing that it’s increasing in that way. Hopefully, there will be more similar FTC measures; we’ll have to see for sure.
TS: Yeah, absolutely.
LO: Great. Well, there were definitely a lot of big stories this week. And Tara, thanks for coming on today to chat about two of the biggest ones that we saw.
TS: Well thanks so much for having me, Lindsey. I always enjoy our discussions.
LO: Yes, for sure. And have a great weekend. Everyone else catch us next week on the Threatpost News Wrap.