Podcast: Departing Employees Could Mean Departing Data

Threatpost talks to Digital Guardian’s Tim Bandos about the top insider threats that enterprises are facing today.

With so many malicious adversaries trying to penetrate corporate networks from the outside, companies are forgetting to watch out for a dangerous threat from within their own ranks – insider threats.

Threatpost talks to Tim Bandos, vice president of cybersecurity at Digital Guardian, about the top types of insider threats that he’s seen – and how to prevent employees leaving the company from bringing important company secrets out.


Below is a lightly-edited transcript of the conversation.

Lindsey O’Donnell: Hi, everyone. Welcome back to the Threatpost podcast. You’ve got Lindsey O’Donnell with Threatpost here, and I’m here today with Tim Bandos, the vice president of cybersecurity at Digital Guardian. Tim, thanks so much for joining me today. How are you?

Tim Bandos: I’m doing great, Lindsey. Thanks for having me.

LO: Great. Well, we’re happy to have you. So today I wanted to delve into the topic of insider threats, which is a big issue for the security industry nowadays. But before we get started, can you tell us a little bit about yourself and your role at Digital Guardian?

TB: Sure, yeah. So I’ve been in the cybersecurity industry for about 15 years now, prior to joining Digital Guardian, I used to work on the other side of the fence for a global manufacturing company. I was in charge of data protection, insider threat, incident response, threat intelligence. And then after around 12 years, I moved over to Digital Guardian, and we’re a data loss prevention software company. And we focus on endpoint detection response and essentially data protection, so monitoring data movement, blocking malicious activity from a malware perspective, but really, the end goal is protecting data because ultimately, that’s what the bad guys or the adversaries are attempting to steal.

LO: Right, for sure. Switching gears and looking at insider threats: it’s a topic we’ve seen over the years, and it’s been a driver behind several serious breaches and privacy scares, from Anthem to Target. And what’s difficult is that the term “insider threat” really covers a broad range: it can stem from employees or third-party partners, and the motivations behind these threats can be malicious, or they can even be inadvertent disclosures. So, Tim, from your perspective, what are you seeing in terms of insider threats right now?

TB: Yeah, I absolutely think departing employees have always been a huge insider threat for organizations. They have the necessary access, they have the knowledge of where sensitive data resides, and at some point they may even have a motive to move on from the company. And a lot of times they don’t leave empty-handed. When you combine all those elements, you have an organization that’s at a major disadvantage going up against this type of threat. As you mentioned, it could be an inadvertent disclosure of information, but most times I think individuals just kind of pack up when they’re leaving a company and move on. They might not even think about taking that data to provide to a competitor, but oftentimes that’s what happens, unfortunately. So I think monitoring for that type of behavior and activity is critical today.

LO: Right, yeah, for sure. And speaking of behavior, I’m curious what you’re seeing in terms of the motivation behind insider threats. Is this something that employees come into companies intending to do, or are they persuaded once they’re part of a company? What kind of behaviors or actions, or even just personalities, do you look for when you’re looking for insider threats?

TB: Yeah, on the software tech side, a lot of times we don’t necessarily look for psychological behaviors or the intention of employees; a lot of it comes down to forensics. Is data egressing the organization? Is it classified information? Are those individuals leaving the company in a matter of a couple of weeks? Kind of quantifying all that, putting it together, and then putting that picture together for legal counsel or HR prior to that individual leaving the company. So really what we try to do is put together a profile of what we’re seeing and make sure it’s factual – showing them what the individual has been up to over the course of the last couple of months, or even the last couple of weeks, so that they’re armed with that data prior to the individual leaving. But I do think intention is important, right? You need to focus on: okay, why are they leaving with data? And a lot of times you might not be able to tell until they have already left and they’re working for a competitor, and then you ultimately find out they leveraged that trade secret or technology in the product they’re working on at the competitor. So unfortunately, a lot of times that happens after the fact, when companies identify there has been some sort of trade-secret theft. But being proactive, as I mentioned earlier, is very important, as is ensuring that data isn’t leaving the company from the very beginning.

LO: Well, I think you do bring up a great point there, which is the issue of departing employees – not even employees who are still working within the company. What kind of risk are you seeing in terms of departing employees as a threat to organizations? Can you talk a little bit about how big a threat that is compared to the different kinds of insider threats we’re seeing across the board?

TB: I personally think it’s a tremendous threat when an individual leaves with trade secrets in hand, or some sort of R&D information, and goes to a competitor, because if you think about it, it provides a leg up for that company without having to spend any investment dollars. So say you spend, for example, $20 million on an R&D project to develop a golf ball that you can hit further; a competitor with that stolen research can now produce the same product and sell it at a much lower cost, which would undercut the initial developer of that tech, because they had to invest all that money upfront. So theoretically, you could put that company out of business. I’ve been involved in cases – I’ve seen specific examples – where companies in multi-billion-dollar industries have had to spin off organizations because they’ve lost that advantage in the marketplace. They’ve lost research and development to trade-secret theft, and ultimately they had to sell off that business because they could no longer be competitive. So it’s hard to quantify the number. I know figures have come out recently saying a breach like this might average around $4 million in damages, including costs and fines, but it can be much, much higher than that – and there’s the impact of reputational loss as well.

LO: Right, that’s a really good point. And what’s interesting too is, you have the threat that you mentioned – the loss of research, or the creative aspect there – but then you also have these certain cases, which we’ve seen in the news the past few years, where disgruntled, fired employees come back to the firm and use their access in a malicious way as well. So you have multiple threats from different types of departing employees, and different case scenarios there. What can companies really do to mitigate this threat? Does it come down more to the offboarding process, or to other processes that companies need to take on?

TB: Yeah, ultimately, I think there needs to be an approach where you’re pairing technology with process. First and foremost, you need visibility on the endpoints and into where data is leaving or transferring across the company. So the technology, at a minimum, should be able to track all types of file movement and data egress, and at least provide an audit trail of what each employee has been up to prior to departure. So when an employee hands in a letter of resignation, as a part of the exit interview process you could show up with a detailed log of their behavior, along with serial numbers of USB devices they may have used to exfiltrate trade information.

I was a part of a case one time where someone from the legal department would literally go out and track down USB devices that we knew data was being exfiltrated to, after the fact. He would run these reports and say, okay, these are the media devices where the employee transferred information; then he would show up at their doors and literally ask for those devices back – otherwise, we would see them in court. So that’s kind of waiting a little too long, right? Ideally, you’d want to do it during the exit interview process, prior to them even leaving the company. But sometimes it is good to actually track down the information because, as I mentioned earlier, it can be critical trade secrets and R&D information getting into the hands of a competitor.
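
The kind of per-employee audit trail described here – file movement plus removable-media serial numbers – might be sketched as a simple structured log record. This is a minimal illustration; the field names, values and the `egress_event` helper are hypothetical, not any real DLP product’s schema:

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an endpoint audit-trail record for a file-egress event.
# All field names and sample values are illustrative, not a real product's schema.
def egress_event(user, path, destination, usb_serial=None):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "file": path,
        "destination": destination,  # e.g. "usb", "cloud", "email"
        "usb_serial": usb_serial,    # recorded when removable media is involved
    }

# Records like this, kept per employee, are what an exit interview
# could be armed with prior to departure.
event = egress_event("jdoe", r"\\fileserver\rnd\designs.zip", "usb",
                     usb_serial="4C530001")
print(json.dumps(event, indent=2))
```

In practice an agent on the endpoint would emit these records continuously, and the exit-interview report would simply be a query over them filtered by user.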

LO: Right. Other than employees potentially taking USBs or other types of equipment, are there other telltale signs or red flags that an employee has done something that might compromise data security that you guys look out for?

TB: There’s a few things, right. So typically, we look for spikes in data-movement volume: large data egress to USB-type devices, or egress to cloud storage sites like Dropbox and Google Drive. If an organization leverages the classification feature of a data loss prevention tool that tags files with a level of sensitivity, we look to identify whether any confidential information is egressing. So just as an example: a sensitive file is attached to an email and sent outside to a personal domain, like a Gmail or Hotmail account. We would flag an alert or an event on that, because maybe that falls outside the internal policies of a company; it’s just good to investigate and determine what the intent of the individual sending that file was, and how classified it was, in case it were to get out. So we’re looking at all those elements. And then, more so over the last several years, we’re really starting to leverage machine learning and put that to work, as opposed to an analyst trying to identify the trends on their own. Machine learning can also add this other element, where it baselines behavior for an individual or a computer over time, and then anytime something occurs outside those normal activity realms, it’ll flag an alert. So it’s kind of a multi-pronged approach, right? When we’re looking at data egress, large amounts of data leaving doesn’t always mean something bad – it could be a backup-device type of scenario. And there are even times when a single file leaving a company could hold all the trade secrets for a particular business unit, and that could be just as critical. So really applying protections around your most critical assets, and then monitoring for that activity, is all very relevant in terms of data protection.
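
The volume-spike and baselining heuristics described above can be reduced to a very rough sketch: keep a per-user history of daily egress volume and flag anything far outside the norm. This is an illustrative toy with made-up numbers, not how any particular ML-based DLP product actually works:

```python
from statistics import mean, stdev

# Hypothetical daily egress volumes (MB) per user -- illustrative data only.
history = {"alice": [120, 95, 130, 110, 105, 125, 90]}

def is_anomalous(user, todays_mb, threshold=3.0):
    """Flag egress volume more than `threshold` standard deviations
    above the user's historical baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    # Floor sigma so a very flat history doesn't make every day "anomalous".
    return todays_mb > mu + threshold * max(sigma, 1.0)

print(is_anomalous("alice", 115))   # within the normal range -> False
print(is_anomalous("alice", 4000))  # large spike in egress -> True
```

As Bandos notes, a flagged spike is only a lead for investigation – it could just as easily be a legitimate backup – and a single small file can matter more than any volume threshold catches.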

LO: Right. I think your point about bringing in machine learning is important too, because that’s a more proactive approach, whereas other methods can be more reactive – trying to minimize damage that’s already done. So I think that’s a really important point as well.

So one other question I had: It sounds like you’ve dealt with different types of insider-threat incidents. I’m curious whether you have seen any signs ahead of the incidents, before they happen, that serve as red flags.

TB: Yeah, there are telltale signs as well. We try to be as proactive as possible in identifying this, and I think ultimately the biggest hurdle for us a lot of times is the intent. But what we also look for is any sort of large data-transfer movement from shared file devices. So, for example, an organization may store a ton of information on shared directories, where individual employees can access that material. If you start seeing a lot of that egressing to a particular machine, maybe they’re staging it for later offloading within a week or two, or maybe they just directly transfer it to a USB device. We look for those types of signs as well – data staging, getting ready to possibly steal that data at some point. But other than that, it’s very difficult to stay ahead of, as I mentioned, because you just never really know why an employee might be doing that, or what ultimately their goal is.
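
The data-staging sign described here – lots of files pulled from shared directories down to one machine – could be approximated with a simple count-based check over a file-access log. Again, purely illustrative: the hostnames, paths and threshold are all made up:

```python
from collections import Counter

# Hypothetical file-access log of (host, shared_path) pairs -- one host
# pulls 500 files from an R&D share, another touches a single file.
accesses = (
    [("wks-042", f"\\\\fileserver\\rnd\\doc{i}.pdf") for i in range(500)]
    + [("wks-007", "\\\\fileserver\\rnd\\doc1.pdf")]
)

def staging_candidates(log, per_host_limit=100):
    """Return hosts that pulled an unusually large number of files from
    shared directories -- a possible sign of data staging before theft."""
    counts = Counter(host for host, _ in log)
    return [host for host, n in counts.items() if n > per_host_limit]

print(staging_candidates(accesses))  # ['wks-042']
```

A real deployment would also weight by file sensitivity and time window, since a slow trickle over months can stay under any simple per-day count.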

LO: Right, yeah. Is there any insider-threat incident that you’ve dealt with so far – or that you’ve even heard of – that has stuck out to you as something that’s really interesting, or that maybe indicates the future of insider threats, or just poses a new challenge for enterprises?

TB: Yeah, so I’ve worked hundreds of insider-threat cases, and a lot of them actually feel kind of the same: an employee leaves the company, they steal data – employee leaves, they steal data. But this one case has always kind of stood out to me, and it isn’t something that’s actually new. I don’t know if it’s the future of data theft, but I think it’s something that could possibly be going on at a lot of companies that just might not know it. In this one particular case, we had an individual who had come over from another country and was working in the United States. He basically spent a year learning the IT infrastructure and where the sensitive data resided – he wasn’t in IT – but he didn’t actually steal any of that data. He was basically just gathering information and intel. And once he gathered all this information, he provided those details to a nation-state threat-actor group. Then one day a phishing email came in to 10 people or so in the company, and that individual “accidentally” clicked on the link, which provided backdoor access for the bad guy to come in. So the bad guy came in, moved laterally, went directly to the repository where all the trade secrets were, and then attempted to exfiltrate that data. Now, alarms went off for that type of behavior – we started to see the exfiltration, and we were able to stop it and foil their attack. But there were all these breadcrumbs left behind by that initial employee who was doing the data gathering; he was gathering intel for the outside. So we have cases where we have employees who are these trusted insiders, and they’re very careful not to expose themselves in stealing that information, but they’re still providing intel to the outside. And that’s a pretty scary case to work on, because how do you detect that type of behavior? So you’ve really got to look out for almost everything when it comes to data protection.

LO: That’s a really interesting case, because in some cases, unfortunately, it may be easier for people to actually get hired by a company and perform social engineering, as opposed to trying to find other methods of doing that. So that’s just a really interesting case. What were some of the “breadcrumbs” that you found in that situation that may have given away, from the get-go, that that employee was trying to do that?

TB: Yeah, so the employee was essentially snooping in files, opening files and mounting drives where he shouldn’t have been. So there was a lot of activity where a lot of these trade secrets were residing: he would go in there, he would map everything out, and he even compressed some of the files – it was like he was staging the data for them. Because when the threat actors came in, they went directly for those compressed files, and there’s no way, without getting that intel, that they would have known those compressed files resided there. A lot of times these were more recent compressed files that were just ripe for the taking, essentially, so it was almost like he prepped it for them to come in. So, unfortunately for the bad guys, we saw this type of behavior because Digital Guardian, the company that I currently work for, was deployed and we were mapping all that behavior – we track and log all activity as it relates to file movement. Because we had that evidence and information, we were able to reconstruct the whole attack and pinpoint it directly on that individual. So we look for all that type of activity, but you would not be able to proactively figure that out ahead of time. There was no way to determine that the individual was doing something like that, but it at least allowed us to go back in time and determine, okay, he was the root cause associated with that.

LO: Right. No, that’s really interesting. And did that employee leave right before the phishing emails were sent, or was he or she still there at that time?

TB: Yeah, they were still there at the company, and they’re the one who clicked on the link, which allowed the threat actors to come in. I mean, he played dumb and said, “I didn’t know what I was clicking on.” But ultimately, he opened the door for them, and just that alone, of course, is enough. A lot of times when we have these phishing cases, we might not think twice; we just think, okay, well, this individual or employee needs to have some more security-awareness training. But sometimes you might also want to question whether there was an intention behind clicking on the link. I know that’s not something we might commonly want to ask ourselves, but I can tell you, after that case, every single time we’ve had phishing-related cases or incidents, we always look at the employee or individual who clicked on the link: was it a mistake, or were there possibly some nefarious intentions behind it as well?

LO: Right. Wow, that’s really interesting, though. So I’m curious too: looking at the future of insider threats, is there anything that companies need to be aware of – any emerging trends you’re seeing, or new challenges that may pop up in the future?

TB: Yeah, to be honest, Lindsey, it’s the same stuff, right? For the last 15 years, we’ve seen the same type of activity: employees leaving companies and taking data. And we see it because it works – they get away with it a lot of times because there aren’t safeguards or mechanisms in place to at least monitor for that behavior. There isn’t an exit interview process where, once the employee leaves, you make sure that he hasn’t taken any data; a lot of companies don’t have that in place. You’re going to have certain fringe cases once in a while where an employee might jam some data into an image file and leverage steganography as a tactic – we’ve actually seen that recently. But that’s going to be pretty rare in terms of actual data theft, because it requires a technical competency beyond what most people would dive into, I think. So I just think the general data-theft type of case is going to continue to happen, because a lot of times they’re able to get away with it. So making sure that you have a program in place to monitor that behavior and prevent it in the first place is going to be critical for companies going forward.

LO: Well, that’s definitely an opportunity for companies to try to prevent these threats more – but on the more pessimistic side, it’s kind of sad that this is still ongoing when there are measures you can take to prevent these types of threats. So hopefully that will change in the future. And, Tim, thank you so much for joining the Threatpost podcast today.

TB: Absolutely. Thanks again for having me, Lindsey.

LO: Great. Thank you and catch us next week on the Threatpost podcast.
