Chris Vickery: AI Will Drive Tomorrow’s Data Breaches

Chris Vickery talks about his craziest data breach discoveries and why “vishing” is the next top threat no one’s ready for.

From malicious hacks to accidental misconfigurations, Chris Vickery has seen it all. But as cybercriminals continue to innovate, Vickery, the director of risk research with UpGuard, said one emerging security threat will “blindside” the world: “fakeable” voices. More bad actors will use artificial intelligence (AI) to create copycat voices of a trusted family member or executive, he said, and then call individuals – and even enterprises – and scam them out of money or valuable data.

Vickery also talks to Threatpost about fringe data breach discoveries he’s encountered over the last few years, as well as how the process of data breach disclosure is shifting and the best first steps companies can take once a data breach has been discovered.

Find the full video interview with Vickery below, or click here. 

Below is a lightly edited transcript of the interview.

Lindsey O’Donnell-Welch: Hi, everyone, this is Lindsey O’Donnell-Welch with Threatpost. And I am joined today by Chris Vickery, the director of risk research with UpGuard. Chris, thanks so much for joining me today.

Chris Vickery: Thank you for having me.

LO: Yeah. So just for all of our listeners, Chris works at UpGuard, and he has a great track record of discovering major data breaches and vulnerabilities across the digital landscape. So we’re going to have a great discussion today about data breach disclosure, the process of finding data breaches, and some of the biggest trends that Chris is seeing in the data breach landscape. So, Chris, just to start: The last time we talked to you, we were discussing the concept of what a data breach is, and you mentioned this notion of data breaches being solely hacks by malicious actors. But that’s not really the case anymore, is it? I feel like so many of these data breaches stem from exposures, from misconfigurations, from accidental types of situations. What are you seeing there?

CV: That is true. There is a common misconception in the world of network and cybersecurity that a data breach equals a hack – a malicious bad guy in a hoodie at a keyboard doing something wrong. That’s a misconception because, alongside plenty of malicious hacks, there is a much larger amount of non-malicious data breaches that are not done on purpose or caused by anything malicious – they’re just mistakes that people make, or somebody accepting a risk that they shouldn’t have accepted. And people are starting to get the difference. I’m seeing fewer articles that default to the term “hack” – this was a malicious, evil thing that resulted in a data breach, or company XYZ “was breached.” It doesn’t work like that. It should be framed more as: Company XYZ experienced a data breach. It isn’t necessarily that an outsider breached their defenses, although that does happen. It’s equally as bad if an insider decided, ‘Hey, I’m going to cut a corner,’ and did something that exposed information publicly, or made a mistake and didn’t set a password or a username because they didn’t understand the software as well as they could have. It’s still a data breach, because it exposed information to the public internet. And people are starting to get it more and more: There is a distinct difference between the two.
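The kind of non-malicious exposure Vickery describes often comes down to a single permissive setting. As a purely illustrative sketch – the function and the configuration field names here are hypothetical, not any real cloud provider’s API – an internal audit pass over storage configurations might look something like this:

```python
# Hypothetical audit sketch: flag storage configurations that appear exposed
# to the public internet. Field names ("public_access", "requires_auth") are
# illustrative, not taken from any real vendor's API.

def find_exposed(configs):
    """Return the names of storage resources that look publicly exposed."""
    exposed = []
    for cfg in configs:
        # Exposed if public access is switched on, or auth was never required.
        if cfg.get("public_access") or not cfg.get("requires_auth", True):
            exposed.append(cfg["name"])
    return exposed

buckets = [
    {"name": "billing-db-backup", "public_access": True, "requires_auth": False},
    {"name": "internal-wiki", "public_access": False, "requires_auth": True},
    {"name": "legacy-export", "requires_auth": False},  # auth never configured
]

print(find_exposed(buckets))  # → ['billing-db-backup', 'legacy-export']
```

Nothing here is “hacked” – the second and third entries are exactly the kind of accepted risk or honest mistake Vickery is talking about.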

LO: What’s kind of the craziest type of incident that you’ve seen?

CV: One of the more impactful and noteworthy exposures or breaches – or whatever you want to refer to them as, non-malicious findings – that I came across in my work with UpGuard is one that was covered originally by TechCrunch. And it didn’t really spread very far from there; I don’t know why. It was talked about a bit. But we came across the entire communications infrastructure – the telecommunications infrastructure – for the Russian Federation, the entire nation.

We’re talking about VPN passwords, every satellite and every antenna – pictures where people have physically walked up and taken photos, full audits of the entire telecommunications infrastructure. Because a Nokia employee had taken home a hard drive, plugged it into the internet, and apparently didn’t have any sort of firewall between it and the public internet. And I came across it and downloaded about 1.6 terabytes of information, going so far as to talk about the Russian Ministry of Defense. And the FSB – their version of, I believe, the FBI, their bureau of investigation – has a system called SORM that allows it access to all the communications data centers, and the ISPs are not allowed to access these special SORM boxes that sit in their own data centers. And the documents, the planning, the access credentials – all of that was here. So it was like, oh my God, this is a nation that’s fully compromised now due to this data exposure, and it didn’t get as much attention as I thought it would.

LO: That’s pretty insane. I’m curious when you discover those types of issues or incidents, what is the process from the beginning to the end that you need to go through? Are you looking specifically for those types of incidents? Or do you kind of stumble upon this or what’s the start there?

CV: It’s a stumble-upon thing, mostly. We are not trying to hunt down any particular entity or choose targets. There are plenty of enterprise-level clients that we have specific agreements with to watch over their stuff, but we don’t go looking for other entities based on what they want us to look at – we look over their stuff. On the research side of things, where we seek to raise public awareness and get the general public to be more knowledgeable about the prevalence of data breaches, we will just look randomly and see what we discover. And when we come across something really noteworthy, we’ll write up reports and find news media working in that space that will be conducive to raising public awareness. Generally I’ll download at least a representative sample, if not all of it. That’s the first step after something is noticed as being open to the public internet and downloadable. We used to download just a representative sample, out of time concerns and other things, but it has become more necessary to download as much as possible, because we’ve come into situations where, say, the CEO doesn’t believe his own tech staff. He wants a copy of what was exposed, and he comes to us afterwards saying, ‘Hey, you guys found this and notified us and did the right thing. You also have the ability to show me what really was exposed, because my team is not being honest with me.’ Then there are the regulators that get involved, with the same concern – but about the entire company not telling them the truth versus what really was the truth.
And then there’s the concern of, OK, if there’s any sort of litigation that comes out of this – I’m not an attorney, but there is a legal concern – if somebody says, ‘In your report you said XYZ, and that’s not true,’ then if we have it all, and we have our analysis and everything, it’s a lot easier to say, ‘Hey, guess what? What we said was true, and here’s the evidence.’

LO: That’s a lot of layers, there is a lot to be concerned about.

CV: Yes. And of course, after we have downloaded it and looked at it and realized, hey, this definitely is something that probably should not be exposed online but is right now due to whatever circumstance – we don’t know why – we decide who we believe the exposing entity is, whether that comes down to the contents of the data, where it’s being hosted, or a number of other factors. Sometimes there’s even a phone number within the data set that I can call to reach somebody I believe is responsible for it – that’s really the easiest way – and notify them. We’re very clear, and we have a boilerplate, attorney-approved email that is very explicit in stating, ‘Hey, this is not a sales pitch; we’re not asking for anything in return. This is just us saying this is exposed. If it is under your control, you should probably secure it.’ And usually that works out pretty well – it gets secured pretty fast. And if it’s something noteworthy and impactful, and we believe it will, to a sufficient degree in our minds, raise public awareness of the problem and prevalence of data breaches, we’ll generally find a media outlet – people we’ve worked with in the past, or who are interested in this space, or who are specifically or uniquely qualified in this area – and show them our analysis. And if enough time has passed that we believe the company has had a chance to, A, secure it – because we don’t want it to be a bigger problem than it needs to be – and we’ve given them a chance to notify affected entities and comply with any relevant laws, then we’ll release our report alongside whatever media outlet we’ve partnered with. Usually their article comes out at the same time as our report.

LO: Right, yeah, there are a lot of parts there. I’m curious if you’re seeing that process get even crazier with so many different pieces of the supply chain – like in the case of Target, where it stemmed from an HVAC third party, or even with IoT devices. If you have a database exposed, you’ve got different parties involved: the manufacturer, the software or the application, things like that. Are you seeing that make the entire data breach disclosure process a little more difficult, or is it still the same as before?

CV: It could complicate things quite a bit if you allow it to. We have done a lot of thinking, talking and planning around that type of situation. And our view on it is generally: If company XYZ is a vendor that hosts the inventory of a big tech company, and we come across a database hosted by company XYZ that contains data about company A, then we’ll generally try to notify company XYZ first, if at all feasible and reasonable – because there’s probably more than one company in that database. And we don’t want to go down the rabbit hole of making it seem like we have a duty to individually notify all of their customers that may be affected. That sprawls into a huge exponential problem that, A, we don’t have the people to handle – and I’m not sure there are enough people in the world to deal with that cascading problem. It’s really up to company XYZ, which is responsible for the exposure itself, to then notify their customers down the line, and we hope they do the right thing.

LO: Let me ask you this: Say you find a database, or there’s been a data breach within a company. What would be the best first steps for that company to take once they have found the data breach – in terms of what they can do from a security standpoint, from a risk standpoint, from a communication standpoint? In all the companies you work with, what have you seen as being the best practices?

CV: What is the scenario you’re talking about, where they received a report from an outside party or where their internal teams or systems have discovered that data has been exposed?

LO: I would say, third party like say, you know, a security researcher.

CV: Somebody like me makes a phone call or sends an email saying, ‘Hey, yeah, you know, it’s exposed right here.’ Well, in that situation, I think the best initial response would be to activate whatever protocols you have in place ahead of time. Hopefully you have an incident response protocol where everybody involved in the response knows their roles ahead of time and is not left asking, ‘OK, who’s doing what now?’ And hopefully, once you initiate that process, you can immediately mitigate further exposure – which means close off the exposure. If it’s something like a cloud file repository, that can be as simple as unchecking a box that says ‘make this publicly accessible.’ Hopefully that is as far as you need to go on that end of things to shut down the exposure. Because once you are notified, and you are officially aware as a company that you are exposing data, it’s not good from an ethical, moral or legal standpoint – as far as liability goes – to continue knowingly allowing public access to data that’s not supposed to be publicly accessible.

Once you shut down the initial problem, the exposure, I would reach out to whoever reported it and ask them any extended questions that weren’t covered in the notification, like: ‘Are you aware of anybody else coming across it? How did you find it? Are there any additional details you have to share about it that weren’t covered in the initial email or notification? Did you download any of it?’ Not that that would be wrong, but you’re assessing whether there are copies out there, because it’s completely possible that the third party is a malicious actor. And if they have copies of it, and their intent is to act badly, then you have to understand the situation.

Somebody like me would say, ‘Yeah, I did download and analyze a copy. That’s the truth of it. I didn’t do anything wrong in doing that, but yes, that is something to keep in mind here.’ And hopefully at that point the company can also have somebody on their team – and this is the best-case scenario – with logs of every single packet that has gone in and out of the network over the period of time during which they can show the data was exposed. The best-case scenario is they can show that absolutely nobody else outside the organization downloaded any piece of that exposed data. It’s still an event that needs to be reported and handled correctly, but as far as the reality and truth of the situation goes, if you can show, and you’re confident, that it did not get downloaded by anyone else, that really is a lot of assurance and helps people sleep better at night. It’s not enough to simply say, ‘We don’t have evidence of any other parties downloading it’ – because if you don’t log anything, you don’t have evidence. That’s not enough; when companies say that, they’re being deceitful, and they are not being good corporate members of the world. And this whole time, I would be following whatever relevant laws or regulations apply. If you are under GDPR, you have 72 hours to notify the appropriate regulatory authorities. Go ahead and get your legal team started on that, when appropriate, because sometimes there are requirements that you hold off on letting the public know about it. Maybe, if it is a bad actor, law enforcement wants you to not say something publicly due to an ongoing investigation.
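GDPR’s Article 33 does set the window Vickery mentions: 72 hours from becoming aware of a personal-data breach to notify the supervisory authority (absent an accepted reason for delay). A minimal sketch of tracking that deadline – the function name and timestamps are illustrative, not part of any incident-response product:

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33: notify the supervisory authority within 72 hours of
# becoming aware of a personal-data breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time by which the regulator must be notified."""
    return discovered_at + NOTIFICATION_WINDOW

# Example: breach awareness established Monday 09:30 UTC (hypothetical date).
discovered = datetime(2020, 3, 2, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(discovered)
print(deadline.isoformat())  # → 2020-03-05T09:30:00+00:00
```

The clock runs on calendar hours, not business days, which is why Vickery suggests getting the legal team started immediately.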

With that in mind, you should maintain the public’s trust in your company by stating, as soon as possible, what is known to be true and will never turn out to be untrue. What I’m saying there is: Don’t say ‘we were hacked’ if you do not know that it absolutely was a malicious, exploitative hack. If it was a simple data exposure, and your internal team was responsible for exposing the data, and you say it was a hack – and it’s proven later that it was not actually a malicious hack – you have totally destroyed your credibility. So in order to maintain that integrity and credibility, when you say something publicly about it, or notify the public, say only what you know is true at that time and cannot and will not turn out to be false in the future. If you say anything outside of that, you’re taking a huge risk. And as more facts are confirmed that cannot ever change, include them in updates.

LO: Seems like there’s a lot that would go into that from a company perspective or from an enterprise perspective.

CV: It depends on the size of the company. But yes, a data breach scenario is not a good thing. It’s very disruptive and entails a lot of problems. That’s part of the reason why people need to realize how serious the problem is, and why our whole cybersecurity and software development ecosystem needs to change to prevent these things from the get-go, rather than adding on security at the end.

LO: Right, that’s a really good point. Before we wrap up, I want to ask you: Are there any data breach trends that we should be on the lookout for? Looking forward, is there anything you’re seeing that is noteworthy from an accidental-exposure or data breach viewpoint – anything from your vantage point that sticks out to you?

CV: Something that I brought forward at first as a hypothetical, theoretical thing, but which is becoming more and more relevant – and is going to blindside the world – is advances in voice cloning. In the sense that, if I call up my mother, or my brother, or somebody that I’ve worked with in the past or work with now as a colleague, and they recognize my voice – even if I’m calling from a strange number – if it’s feasible that it’s me, and they recognize my voice, and I know a few details that apparently only I would know, they’re going to trust me. I could have a story that my phone died and I borrowed this phone from the office next door or something. They will tell me all sorts of private details – passwords, potentially other things – over that phone call, just because they recognize my voice. That’s the bottom-level trust thing.

And people need to realize that voices are becoming fakeable; they’re not infallible like they used to be. You should not simply spill the beans because you recognize somebody’s voice. I would assume that the big telcos have already mastered this in one way, shape or form in their research labs, but more and more, commonplace miscreants are going to gain access to this capability: They can input lots of snippets of somebody talking and then have the ability to alter voices in real time to imitate that voice. It’s like the deepfakes concept we’ve recently seen in the media – videos of people doing things they didn’t actually do, simply because computers can take a video of somebody, learn from it, and then recreate it on top of a different video. It’s going to be the same thing with voice.

And when bad actors start using this more and more – there have only been one or two articles I’ve seen of people actually using it to phish details – it’s going to blindside the world. A lot of people are not going to see it coming, and it’s going to do a lot of harm.

LO: Yeah, that’s a really good point. I remember, I think it was about a year ago, there was a BEC scam where someone called an employee, pretended to be the CEO or someone in the C-suite, and asked them to make a wire transfer. It wasn’t the CEO, but they made the transfer and lost a lot of money. I could totally see that even at a more macro level – like if your “relative” calls you and says, ‘Hey, look, I’m trying to log into our Netflix account, what’s my password again?’ I could see that happening for sure.

CV: And I remember the article you’re referring to, I believe, and they took it to another level: The scammers knew that the CEO had an accent – was originally from a country that the employee they were talking to was not from. They took advantage of the fact that if your CEO or your boss has a slight accent from being born in a country you were not born in, you’re less likely to call them out with, ‘Hey, is this really you? I mean, I think I recognize your voice, but the accent is throwing me off.’ You’re not going to do that – it would be kind of insulting. So it’s even more likely to succeed in that scenario. I’m not a researcher in the sociological aspects of things, and I don’t want to give that impression, but it seems like common sense that that would be a natural entry-level point.

LO: Right, yeah, very tricky. Well, Chris, thank you again for coming on and talking to us a little bit about what you’re seeing with data breaches.

CV: All right. Thank you for having me.

LO: Great. And to all of our viewers: Once again, this is Lindsey O’Donnell-Welch with Threatpost, talking with Chris Vickery of UpGuard. Be sure to subscribe to our channel and visit our website, Threatpost.com.

 
