The Enemy Within: How Insider Threats Are Changing

Insider-threat security experts unravel the new normal during this time of remote working, and explain how to protect sensitive data from this escalating risk.

Insider threats are ramping up – with new kinds of concerns in this category beginning to emerge.

This is happening against a heady backdrop: Makeshift home offices, a cavalcade of new distractions and a tectonic shift to the cloud have recently collided to create an entirely new world for enterprise security. It’s a world where companies are simultaneously trying to make all their information available to a diffuse remote workforce, while locking down their most sensitive information. Meanwhile, there’s an expanding roster of potential bad actors ready to take advantage of the confusion.

On the insider-threat front, privileged IT users and administrators pose the greatest danger, because they know precisely what valuable information the company has in its possession and how to reach it. Insider threats like these can easily be overlooked, with catastrophic consequences for the entire business, from IT and marketing to customer service.

Ratcheting up the risk is the growing reliance on an independent-contractor workforce, coupled with dire predictions of upcoming furloughs and layoffs — symptoms of a pandemic-weakened economy.

Besides malice and financial gain, sometimes-innocent, accidental disclosures happen. That’s especially true now, when, thanks to stay-at-home orders, the lines between work, home, professional life, family and school are more blurred than ever.

The way forward is a system that can monitor data in real time and even predict threats before they happen, according to Gurucul CEO Saryu Nayyar and COO Craig Cooper, who both recently participated in a Threatpost editorial webinar devoted to how businesses can protect against insider threats.

In this webinar replay, they are joined by Threatpost senior editors Tara Seals and Lindsey O’Donnell for a discussion about how the current climate is driving a rise in insider threats, and how businesses of all sizes can implement a system that protects information before it’s compromised.

Nayyar proposes an approach that fuses meticulous attention to permissions and information access with big-data analytics and something she calls “sentiment analysis,” which examines behaviors for brewing insider risk.

Cooper offers a raft of independent survey data on business attitudes toward insider threats, along with attack data, and follows with insights into best practices for addressing the risk, including how one hospital group in Minneapolis, Minn. came up with a game plan to keep Tom Brady’s medical records away from the tabloids during the ramp-up to the 2018 Super Bowl.

Finally, webinar attendees were given a chance to weigh in on their own mitigation strategies, with 11 percent responding that their business was “extremely vulnerable” to insider threats.

Please find the YouTube video of the webinar below. A lightly edited transcript follows.

Tara Seals: Hello, everyone, and thank you for attending today’s Threatpost webinar, entitled The Enemy Within: How Insider Threats Are Changing.

I’m Tara Seals, senior editor at Threatpost, and I’m going to be your moderator today. I’m also joined by my colleague, Lindsey O’Donnell, who is also going to be a moderator for this event.

I’m excited to welcome our panelists. But before I do that, I do have a few housekeeping notes. First of all, after our speaker presentation, we’re going to have a roundtable discussion and take questions from the audience. I’d like to remind everybody that they can submit questions at any time during the webinar. You can see the Control Panel widget on the right-hand side of your screen. If you look, there’s an option for questions, you can just click on that, it opens up a window, and you can just type your queries right in there, and we’ll be able to see them.

Also, I’d like to note that the webinar is being recorded.

We are going to be sending out a link where you can listen on demand and you can share that link with your colleagues, revisit later, or whatever you would like to do with it.

And on that note, let’s talk about today’s speakers. We’re going to hear from Saryu Nayyar, who is CEO at Gurucul, as well as Craig Cooper, who is a senior vice president of customer success and COO at Gurucul.

I’m excited about these two. They’re going to tag team a presentation today and provide kind of a deep dive into the current state of insider threats, including the risks and challenges during this pandemic with everybody working from home. They will also delve into some ideas on what we can do about it.

So, let’s take a quick look at the agenda here. They are going to talk about how things are changing. They’re also going to go into some interesting survey findings, which I found to be fascinating; behavioral analysis; and some other best practices; and then we’re going to move on to a roundtable discussion.

You can see there are four topic areas and we’re just going to have an open discussion and discuss what’s going on out there.

Before I pass it over to you and Craig to kick off the presentation, I’m going to post a poll right now for attendees.

Poll Question: Has the insider risk increased for your organization during the pandemic?

I’ll just give you guys a few minutes to answer that.

Poll Question: We also have a second poll that we’re curious for you all to answer. How vulnerable is your organization to insider threats?

And again, we’re getting some responses, so just give us a second here.

Poll Question: OK, great, and then we have one more, which is: Do you have a formal insider threat program in place?

All right, so I’ve collected the responses, and after the conclusion of the presentation, I will be back on the line to share those results before we head into our roundtable discussion.

So, with that, I am going to pass the ball over to our presenters today. Welcome, guys. I’m so glad that you’re here.

Saryu Nayyar: Thank you, Tara, really appreciate the introduction.

We at Gurucul have spent the last decade helping organizations globally to deter, predict, detect, and remediate insider risk. I really appreciate the opportunity to share insights on the current paradigm of insider risk.

Work-from-home employees, staff reductions, reduced hours, furloughed insiders: The unfortunate reality is that insider risk is certainly on the rise.

With that, Craig, do you want to walk through some of our survey results about how the paradigm is looking right now?

Craig Cooper: Thank you very much. I appreciate that, Tara.

Before we get started, what I wanted to do today is talk about what an insider threat really is. And the reason I wanted to do that is that in my experience, working with a lot of different organizations, people define this differently. And even different areas within the same business will look at insiders differently.

When you talk to someone in the information-security department, maybe in a threat area or, in some organizations, even a dedicated insider-threat department, there are a lot of different actors. They’re listed here on the screen: employees, contractors, temporary employees and so on.

The other thing that’s interesting is that they’ll also add other things in there, like compromised hosts and compromised accounts. They look at those as insiders too, because they’re external people, or external entities if you will, that are acting as an insider. The tactics you need to use in order to identify those insider threats are the same, because you’re looking at someone that is acting as an insider, which is kind of interesting.

But typically, we look at insiders as being employees, contractors, business partners, and potentially compromised accounts, and/or hosts as well.

The question is often: What might they be looking at? And often, when you are talking about insider threats on the physical side, it could be someone targeting a specific person. That’s not very comfortable to think about, but that’s obviously something that could happen. This happens with workplace violence and those types of things.

But more often than not, when we’re talking information security, they’re looking to get at information: customer records, maybe top-secret information, those types of things. Then people will ask, “Why would someone do that? All of our employees are good!”

There are a lot of different reasons. People are generally financially motivated. Maybe they have a score to settle with someone within the organization. It could be someone from the outside, trying to come in and take trade secrets.

Probably one of the biggest insider threats is accidental disclosure. That’s often done through phishing attacks, which we’ll talk about a little bit later.

It also could be loss or just accidental disposal of information. People throwing something in a dumpster and having it found by someone else.

There are lots of different potential impacts here: everything from employees being harmed all the way through to the organization being harmed. So, when we look at an insider threat, we’ll be talking primarily about people on the inside, and we’ll potentially also identify some cases where we should be talking about outsiders acting as insiders as well.

So we also wanted to go through and talk a little bit about survey results here. This is the Cybersecurity Insiders Survey and we found some interesting information. This is not unlike the poll that we just took and will review at the end of the presentation today.

We asked: How vulnerable is your organization to insider threats? And what we found interesting here is that the vast majority of end users and cybersecurity professionals out there, almost 70 percent, feel that their organization is vulnerable to insider attacks. And I would say that, in my experience, we’re actually seeing that on the rise with the pandemic, because a lot of the physical controls and business processes we had in place assumed people were in the office; with those resources now working from home, those controls are no longer effective. So we’re seeing this on the rise with the pandemic because, again, a lot of the controls that were in place are obsolete; they don’t work anymore.

What types of insiders pose the biggest security risk? And this is probably not a surprise to a lot of people: Privileged IT users and administrators are seen as the biggest threat to organizations. And I think what is interesting here is that everyone’s favorite insider-threat story, the Snowden case, is what kind of drove the whole idea of an insider. This was a case where you had someone misusing their privileges and basically divulging national secrets based on that.

Have insider attacks become more or less frequent? We’re seeing insider attacks on the rise. This is not a surprise at all. Hopefully people have work in progress, or they have a program in place, because insider attacks are becoming more frequent and more prevalent out there.

Another interesting area we polled on is the organization’s ability to monitor and protect against insider threats. And 58 percent of those we polled consider their monitoring to be only somewhat effective or not effective at all, which is interesting too.

And we’ll talk a little bit about some approaches here. A lot of organizations are starting to move away from traditional defense-in-depth controls for looking at insiders and are now looking more at behavior analytics, which is kind of in Gurucul’s wheelhouse.

But again, a lot of organizations feel that their current ability to monitor for this is limited. The other interesting thing is that, when you’re working with people in Europe, in other countries and even in some states in the U.S., there are some handcuffs that get put on as far as user monitoring. And so we can talk a little bit more about that if someone wants to go down that road.

Insider-threat management focus: You may say, well, geez, there’s more than 100 percent here if you add all these up. That’s because people could select more than one. But, again, most folks are working on the detection and deterrence pieces, and then you have the analysis and post-breach pieces.

Where we’d like to get more organizations to is real-time detection and remediation on the spot. That’s what most organizations are trying to drive to, and we’ll talk about a process that we use with our customers in order to make that happen as well.

How quickly can you remediate insider threats? This touches on what I was just talking about: 25 percent said in real time. This is what we want to do. We want to identify an insider threat and be able to stop the exfiltration or stop the damage before it happens, or at a minimum as it’s happening, so you can minimize the impact to your organization, and really before data exfiltration. The threat may have grabbed some data, and if you can stop them before they actually get it out, that’s a good thing.

But again, most organizations, they detect it and then try to figure out what happened after the proverbial horse is out of the barn, so to speak.

So, a couple more, and then we’re done here: the impact of an insider breach. I think this is kind of interesting, too. And as I talked about on the first slide, you need to think about a breach in terms of reach and costs, potentially millions and millions of dollars.

Think about what happened to Target when their infamous breach happened. They literally lost a whole shopping season’s revenue, which hurt the company, hurt their stock price and really hurt their brand.

And you know all of the others out there: Equifax, Starwood and all of these different breaches, even on the medical side with medical records. The things that have happened really hurt these organizations from the outside looking in, you know, the brand damage.

But there are also other things that happen internally.

When your organization is breached, in a lot of cases you must put the brakes on conducting business. Literally, your business comes to a halt, and in a lot of cases you need to try to figure out what’s happening. Your business partners want to keep moving forward, but in a lot of cases you end up with customers who want answers now. You end up having to pull your team away from what they normally do to answer those questions, or to dive in, do investigations and try to figure things out. Then your customer-service area is answering questions that were not planned for, so call centers get overloaded. It’s a mess, and it’s not pleasant for you, for your customers, for your employees or anybody. So really, again, if you can avoid that insider breach, or any breach for that matter, that’s super, super important.

The last one: Has cloud migration impacted insider-threat detection? Respondents are saying it either hasn’t changed or is somewhat harder, and some are saying it’s significantly harder. And I think this is interesting. While a lot of cloud service providers and SaaS applications provide some logging mechanisms, we really don’t have the visibility into those systems that we do into our enterprise systems. And what’s really interesting with the pandemic now is that we’re seeing more and more organizations moving toward cloud solutions.

We’re seeing a lot of that being accelerated because the cloud provides flexibility for your business to be conducted globally, you know, without having to worry about infrastructure and whatnot. It’s all accessible, and it makes it much easier to conduct business in a lot of cases. So businesses are generally shifting to the cloud. Most of them have cloud strategies now, so you’re going to have to figure it out, and we’ll talk a little bit more about that as well here today.

Saryu Nayyar: So, you know, as we talk about the pandemic, we all know that things changed for most of us overnight. This wasn’t a planned activity.

Most organizations didn’t have a plan for a pandemic like this. So, you know, overnight, we saw changes in how corporate devices and personal devices are now being used to access corporate data.

How do we control remote workers? Insiders are accessing your data from unsecured networks, and some companies are coming up with controls on that very quickly. And overall, phishing attacks have increased significantly.

Within the first week, there were 4,283 malicious domains created for COVID-related activity, and phishing attacks increased by 11.6 percent in the first week of March alone.

We’re also seeing a lot of changes in third-party controls. When most of us think about insider threats, organizations think about, of course, their employees, right? Because they have keys to the kingdom. Companies then later realize that all the third-party partners and vendors they work with, and also the companies they outsource a lot of work to (at some companies, the entire IT department is outsourced), are also potential insider threats.

Now, with folks internationally in remote locations and not coming to offices, the typical clean-room controls are not effective anymore. They still have access to your data, and you cannot take that away, because they’re running some of your critical business applications, which now need more support from your workforce. And your consumers want to be more digital in the transactions they’re doing, right? So, there’s a whole paradigm shift there.

And then, of course, the sentiment, right?

The sentiment part is especially important for insider risk. People are being furloughed, layoffs are happening, and wages are being reduced. When you think about insider risk, you want to think about sentiment analysis, because that plays a very important role. Why did somebody want to peek at that? Why did somebody want to steal the data, exfiltrate confidential information and what have you?

We are seeing a major paradigm shift. I would say we’ve been working almost around the clock since COVID-19 hit us, helping organizations figure out controls for this new normal. Everybody thought this was going to last for maybe a few weeks, and now we know it’s here to stay. Many are saying that we’re safer at home until the end of this year, at a minimum. That’s where we are.

Insider risks include sabotage, espionage, fraud and the pursuit of competitive advantage, and they are often carried out through abusing access and mishandling physical devices.

These threats can also result from employee carelessness or just plain policy violations that allow system access to malicious outsiders. We see a lot of that happening with our customers. That’s the big thing, and Craig talked about that in the beginning.

When you think about insider risk, don’t just be thinking about your employees, your contractors, your third party, or your visitors; be thinking about outsiders who are impersonating an insider. Those are things you want to detect as well.

Now, these activities typically persist over time, and they occur in all types of work environments, ranging from private companies to government agencies.

We usually put them in three categories. The first is account or host compromise, which most certainly leads to outsiders coming in, impersonating an insider and causing a lot of damage. This is a high-risk situation; think about the Target breach that Craig talked about.

The next category would be access misuse, and data exfiltration is a separate category. A real-life example: Edward Snowden accessed and misused national secrets. So that’s the big one.

Then we recently heard about Tesla, where somebody accessed systems and made changes to the production line, which caused brand damage as well as operational-efficiency damage. All right, so these are the categories we’ve grouped them into.

Insider risk is something I’m personally very passionate about when we talk to organizations. And over the last decade, I have spoken to government agencies that conduct very deep research on insider threats.

I would say many large Fortune 100 companies globally, large organizations and even small companies need to think about insider risk in terms of predicting it from activity and behaviors. Using activity and log information to predict risk also lets you reduce the attack surface by cutting down identities and excessive access.

So, I’ll use an analogy here. Let’s say you’re going on a vacation and you share your house keys with a group of trusted friends. You come back, you haven’t taken the keys back, and you go back to your daily life.

Now you have a situation where somebody you know has stolen some of your stuff. So, you decide you’re going to monitor who’s using the key. I think the more important thing here is to go back and assess who shouldn’t have the keys.

And when you think about an organization, you want to think about who has the keys to the kingdom in your company, who has privileged access, who should really have that access, and whether you should consider taking that away periodically, cleaning it up and getting down to providing information on a need-to-know basis. By doing that, you reduce the attack surface, so you’re not just monitoring after the fact and putting controls in place to detect if somebody’s trying to misuse access. You’re actually controlling who has the keys to the kingdom.

Now, flip the situation and think about what happens if it’s an outsider and an account gets compromised.

Once you’ve handed out the keys to the kingdom to your insiders, the risk is exponentially higher, because outsiders know who holds the highest level of access. Those are the folks whose accounts they try to impersonate and compromise, and from there they can cause a lot of damage.

Now, the core thought here is that when you are setting up your program, or if you’re already on that journey and thinking about the maturity of operationalizing your program, don’t forget the identity threat plane. And another key thing: The leading indicator of insider risk is behavior, right? That’s the key indicator of insider risk, and that’s very important.

So, you want to use techniques where you are doing behavior profiling, figuring out what is normal and not normal, not only for each insider but also dynamically figuring out what a comparable peer group would be and what its normal is. That way you can really cut down the false positives and quickly get to a point where you understand where the risk lies in your organization.
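To make that idea concrete, here is a minimal sketch of peer-group baselining; the activity counts, field names and the 3-sigma threshold are hypothetical and are not Gurucul’s actual models.

```python
# Illustrative sketch only: baseline each insider against their own history AND a
# dynamically chosen peer group, and flag only deviations from both baselines.
from statistics import mean, stdev

def zscore(value, history):
    """Standard score of `value` against a list of past observations."""
    if len(history) < 2:
        return 0.0
    sigma = stdev(history) or 1.0  # avoid divide-by-zero when history is flat
    return (value - mean(history)) / sigma

def is_risky_anomaly(user_history, peer_histories, todays_count, threshold=3.0):
    """Flag behavior that is anomalous for the user AND for their peer group,
    which is how false positives get cut down."""
    peer_pool = [count for history in peer_histories for count in history]
    return (zscore(todays_count, user_history) > threshold
            and zscore(todays_count, peer_pool) > threshold)

# Example: a user who normally touches ~50 records a day suddenly pulls 5,000,
# while peers in the same role stay near their own norms.
user_history = [48, 52, 55, 47, 50, 53]
peer_histories = [[40, 60, 55], [45, 50, 52], [38, 44, 49]]
print(is_risky_anomaly(user_history, peer_histories, todays_count=5000))  # True
```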

So, we want to leave you with the two sides of the insider-risk equation: access and activity. Both are equally important to really mitigate insider threats.

Here is a framework that we’ve used with many, many large, global companies over the last decade. The theme here is: How do we get to a mature, running insider-threat program? Usually, on the left side, you want to think about all kinds of data, right? Don’t restrict yourself. You don’t want to just be thinking about your infrastructure log systems. You want to be thinking about your business applications, you want to be thinking about access, and you want to be thinking about a bigger brain that’s able to take all of this information and come back with a very different, inside-out view of what all of this access and activity looks like.

Run analytics on it, because there are going to be millions of combinations to look into. Once again, think about identity and what access it has. Think about the accounts: There can be multiple accounts associated with a person, and you want to holistically understand what they’re doing across the different accounts. Then take the activity logs to see what they’re actually doing with that access, and look at the access entitlements: What do they really, deep down, have access to? What keys have you given them?

Bring it all together and pick a system that has big data, which is independent, has data democracy and doesn’t lock you into its own proprietary setup. So, take that choice of big data and run the behavior analytics on it. It baselines all of the insiders, their normal and their peer groups’ normal, and has algorithms built in to give you one single risk score that you can absorb.

So, on a scale of 1 to 100, you can pick what is very high-risk and critical, so you can take prescriptive actions, right? And the idea is to automate this process: the administration, the decisioning and the governance.
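As a rough illustration of that single-score idea, the sketch below rolls a few behavior-model outputs up into one score on a 100-point scale and maps it to an action; the model names, weights and cutoffs are invented for the example, not Gurucul’s actual scoring.

```python
# Illustrative sketch: combine several behavior-model scores (each in [0, 1]) into
# a single 0-100 risk score, then map it to a prescriptive action.
def unified_risk_score(model_scores, weights=None):
    """model_scores: dict of model name -> score in [0, 1]."""
    weights = weights or {name: 1.0 for name in model_scores}
    total = sum(weights[name] for name in model_scores)
    weighted = sum(score * weights[name] for name, score in model_scores.items())
    return round(100 * weighted / total)

def prescribed_action(score):
    if score >= 90:
        return "critical: disable account and open an investigation"
    if score >= 70:
        return "high: step-up authentication, notify the insider-threat team"
    if score >= 40:
        return "medium: queue for analyst review"
    return "low: keep baselining"

scores = {"access_outlier": 0.9, "peer_deviation": 0.8, "odd_hours_login": 0.6}
risk = unified_risk_score(scores)
print(risk, "->", prescribed_action(risk))  # 77 -> high: ...
```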

Many of our customers affectionately call this program model-driven security, which is data science backed automated security controls for insider threat and other control automation, as well.

So, I’m going to spend a few minutes here talking about the best practices.

Now, these are best practices that we’ve learned working with many, many companies globally, of all sizes and in all different verticals. So, feel free to ask questions or to reach out to us; we’d be happy to help.

So, the first step is to initiate the program, right? And that’s a very critical step. You want to make sure that you identify who the stakeholders of this program are, and you want to make sure that you include all the key stakeholders, like HR. We see them being left out many times.

And they are a critical part of an insider-risk program and of how you operationalize it and get to a successful program. You might want to consider bringing them in later if that’s what the culture of the organization needs, but setting that up front is going to give you a lot more success, provided you keep in mind and understand the culture of your organization.

You know, on one end of the spectrum, financial companies accept that insider risk is a problem. They have compliance regulations. They know they have to monitor this. The company, as a whole, understands this and that it’s serious. It’s not a trust issue; it’s a security control.

On the other side of the house, we work with high-tech companies where talking about insider risk is seen as a trust issue: It’s about not trusting your employees, not trusting your contractors or the other folks you do business with from a third-party perspective. So that’s the spectrum. Understand the culture; the culture is important.

Every organization has an insider-risk issue; the key is to understand how you are going to communicate about it.

How are you going to manage this? You don’t want to be Big Brother. So that communication part is super important, and we have templates; we have set up a whole program that really works. So if anybody needs help, reach out.

The next important thing is product selection. Please do not think that traditional approaches can solve insider threat. It’s a whole mindset change. It’s a different way of looking at the problem: You’re looking at it from an inside-out perspective, and you’re looking at it from a context perspective. This is not about transactions. There are many best practices on how you would operationalize, but even from a product-selection perspective, it’s very important that you really look into some key things, and don’t just look at your current platform and say, “I have some legacy tools, and maybe I’m just going to use them for insider risk.”

It does not work. It’s been tried and tested by many, many organizations that failed. If you want a reference, we’re happy to provide one.

The key thing to think about, from a product-selection perspective, is a platform that can really give you a unified security and risk perspective. Unified is important because you want to bring in different parts and pieces. You don’t want to be going to a different platform to do research on top of it, and you don’t want siloed analytics. You want one place where you consolidate all this data, run analytics and get actionable results. When I talk about actionable results, I’m talking about prioritizing risk, so that you know exactly, given this risk and the behavior model that got triggered, what your insider-risk management team should do from there.

More importantly, as you mature, you can automate these controls, which is the end state we’re going after. Next is to define your threat indicators. This is very, very important.

You know, over the last 10 years, I would say, we’ve learned a lot about this. We have best practices on the threat indicators that give you the most value, and I would say HR attributes also play a key role in that. It’s important to have that partnership up front, and, of course, all of the controls so that nobody gets to see any confidential information have to be built into the platform.

Then next is the linking. We’ve talked about this early on, as well.

Building that context together across various systems, looking at access, activity and any alerts, and linking it together into a holistic view is important. Don’t use correlation rules; they don’t give you the highest efficacy, because they’re very basic. You want to use, and platforms should have, link-analysis algorithms built in to give you the most efficacy in linking all this data and building that context.
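As a toy example of what that linking step does, the sketch below groups accounts from different systems back to a single identity using one shared attribute; real link-analysis algorithms use fuzzier matching across many more attributes, and all of the records here are made up.

```python
# Minimal sketch of entity linking ("link analysis") across systems: tying accounts
# from different log sources back to one identity so access and activity can be
# viewed holistically. Record fields and values are hypothetical.
from collections import defaultdict

accounts = [
    {"system": "ad",         "account": "jdoe",        "email": "jdoe@corp.example"},
    {"system": "salesforce", "account": "j.doe.sales", "email": "jdoe@corp.example"},
    {"system": "vpn",        "account": "jdoe2",       "email": "jdoe@corp.example"},
    {"system": "ad",         "account": "asmith",      "email": "asmith@corp.example"},
]

def link_accounts(records, key="email"):
    """Group accounts into identities by a shared linking attribute."""
    identities = defaultdict(list)
    for record in records:
        identities[record[key]].append((record["system"], record["account"]))
    return dict(identities)

for identity, linked in link_accounts(accounts).items():
    print(identity, "->", linked)
# jdoe@corp.example -> [('ad', 'jdoe'), ('salesforce', 'j.doe.sales'), ('vpn', 'jdoe2')]
```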

Next would be the baseline. You want to baseline behavior, not just for individual insiders; you also want to look at their peer groups and their machines. You want to look at other machines in that peer group, develop baseline behaviors and see where the anomalies are. And it doesn’t stop there. It’s not just about anomalous behavior; it’s about risky, anomalous behavior. That’s what we’re going after.

The next point of maturity comes in the monitoring and responding to this.

This is a key thing. We’ve seen many companies struggle with this. Everybody goes into the SOC mindset to solve this problem, and this is a lifestyle change from that process. It’s a different way of looking at it. You want to set up the right response mechanisms, build the right playbooks and have the right governance committee set up so you can operationalize, right? To have an effective, working insider-threat program, you need to have all of these working, and you need to continuously review the results and provide feedback.

The algorithms can tune themselves, because they’re self-learning, and you get higher efficacy results. The good news is a good platform should be giving you very few alerts every day.

If you’re talking about a company of, let’s say, 10,000 employees and 10,000 insiders, I would say you would get about a hundred alerts in a month. That’s very few compared to other, signature-based platforms.

Craig Cooper: So, a lot of different findings here. I’m not going to read all of these or go through all of them. I think one of the big ones that hit home for me was in 2018.

It actually wasn’t even a finding per se, but in 2018, a customer of ours, a hospital system in Minneapolis, Minn., purchased our system to look for insider threats.

The reason they did it is that Minneapolis was hosting the Super Bowl, and the whole purpose was to monitor the medical records of VIPs, both current and former NFL players, to make sure that TMZ or some other news outlet didn’t get information before, during or even after the event. With Super Bowl players’ and former NFL players’ medical records in the system, you can imagine, from an insider perspective, how easy it would be for a doctor or a nurse, not even at a nearby hospital but at one off in the distance, to access those records and look up Tom Brady’s information or someone else’s.

So, this was a very, very cool use case. And we did not detect anything; there weren’t any breaches in that one. So that was actually a really happy outcome. It was more of a monitoring exercise than any actual findings.

Another interesting one that we found, working with an internet retailer, was customer-service reps going out and actually taking card charges and applying them to their own personal accounts. And I thought that was really interesting. It’s fraud, but it’s also an insider issue; they wouldn’t have been able to do it had they not had access to the inside systems.

We detected another one that people often don’t think about: terminated employee accounts, and data from SaaS applications. We had a customer, actually multiple customers, that this happened to on more than one occasion.

Most traditional controls don’t collect information from HR. We don’t have indicators from HR or from the identity system saying, “Hey, this person was supposed to be removed.”

Furthermore, a lot of times our SaaS applications are provisioned through different processes, often manually, and in some cases run out of the business rather than through IT. And we often find Salesforce access left open to former employees. They just go in, they download the customer list, they download revenue reports; it’s a crazy situation.

Using an analytics system like this, if you collect context from HR and other sources, you have the ability to actually put the pieces together and say, “Hey, this person is no longer with the organization; why is their account being used at all?” Beyond that, we’ve also identified cases where an account was shut off and then turned back on.

Data was exfiltrated, and then the account was turned off again. This was a case where someone was actually leveraging privileged access that they had: They went in and enabled an old employee account, did some things and disabled the account, and the identity system never found it, because the identity system reconciled accounts, in this case, on a 24-hour basis. So the state of the account was what it was expected to be between cycles.

All of this activity happened between cycles, so when the reconciliation re-ran, the account was already back where it should be. It would have gone undetected had they not been looking for behaviors like this.
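A minimal sketch of that HR-context join, with made-up feeds and field names, might look like the following; a real platform would ingest these sources continuously rather than from hard-coded dictionaries.

```python
# Illustrative sketch: join an HR termination feed with activity logs to flag any
# use of accounts belonging to departed employees, including accounts quietly
# re-enabled between identity-system reconciliation cycles. Data is hypothetical.
from datetime import datetime

hr_terminations = {"jdoe": datetime(2020, 3, 1)}  # user -> termination date
activity_log = [
    {"user": "jdoe",   "action": "report_export", "time": datetime(2020, 4, 2, 23, 15)},
    {"user": "asmith", "action": "login",          "time": datetime(2020, 4, 2, 9, 5)},
]

def terminated_account_activity(terminations, events):
    """Return events performed by an account after its owner's termination date."""
    return [
        event for event in events
        if event["user"] in terminations and event["time"] > terminations[event["user"]]
    ]

for event in terminated_account_activity(hr_terminations, activity_log):
    print("ALERT: terminated account still active:", event)
```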

So, there are a lot of interesting customer findings out there, and we’d be happy to talk more about that as well, if you want to do that during the roundtable discussion. So, with that, I’ll turn it back over, and we can open it up to questions and the roundtable.

Tara Seals: Again, I would just like to remind the audience that there’s still time to submit your questions using the control panel on the right of your screen. We’re going to try to get to as many of those as possible. And now, in addition to Craig, my colleague Lindsey O’Donnell is going to join us for our roundtable today. So welcome, Lindsey.

Before we get into our discussion, I wanted to go over the poll results.

So, the first thing that we had asked was: How vulnerable is your organization to insider threats?

Now, 11 percent of our attendees said that they felt extremely vulnerable. That compares to 5 percent in that broader pool that you guys talked about earlier, which is interesting. Of course, maybe the reason you’re attending is because you’re concerned, right? Maybe a little bit of a skewed sample here, but I thought that was interesting: more than double.

Also, 82 percent said that they feel vulnerable or extremely vulnerable and another 7 percent said they don’t feel vulnerable.

So that’s an interesting result, too.

The second poll question was: Has the insider risk increased for your organization during the pandemic? Unsurprisingly, the majority, 55 percent, said that yes, it has increased. Forty-three percent, however, said no change. And the remaining 2 percent said that they have found their insider-threat danger to be reduced.

OK, Interesting, right? Yes.

And then the final poll question was: Do you have a formal insider-threat program?

Forty percent of our attendees said no.

Thirty-two percent said yes, and 27 percent said that they have something in progress, which is interesting.

Now I think we can probably start our roundtable discussion. I know that we had sketched out about four different buckets to talk about here today. And I think maybe we could start off talking about those inadvertent insiders, the social aspect of this.

You know, a lot of companies have invested in employee training and awareness, but how do those challenges to good security hygiene become more magnified when we have so many employees working from home for the first time, outside that traditional corporate perimeter?

Craig Cooper: So, one of the interesting things, and again, I look at this through the eyes of a CISO as well, that I find with the whole pandemic and people working from home now is that people are kind of put in a precarious position.

You know, where are they going to work? In a lot of cases, they’ve got kids at home who are trying to do schoolwork, or trying not to do schoolwork, depending. And often you’ve got a spouse who is working at home as well. So you have a lot of, I’ll just say, action happening in the home today that a lot of people may or may not be prepared for.

I’ve been on conference calls with people who are working out of their garages because their spouse has got the office. A lot of people are working at the dining-room table or in the kitchen or at the bar, or wherever. And when you think about risks to your organization, think about the, I’ll call it, drive-bys that happen in your home.

So, if you’re a medical firm, for instance, and you’ve got patient records up and you’re working on medical records, maybe someone’s just keying in medical procedures. I don’t know how much of that happens anymore with EMR systems, but I’m certain there’s some of it, and you can have people just looking up records, handling billing, processing insurance claims and those sorts of things. There’s a lot of data on the table there, or out in the open, if you will.

When you’re at work, you’ve got a cubicle and you’ve got like-minded people around. But at home, who’s to say that you don’t have a teenager looking over your shoulder, or a spouse looking over your shoulder?

And by the letter of the law, it’s a breach to have someone else looking at those things, even if it’s just a family member. Imagine if you were working on a chip design for 5G or 6G cellular.

Now, it’s something new and cool that no one has even dreamed of yet, and you have this trade secret. And it gets exposed because someone’s walking by, or even just the idea that your company is looking to build this thing gets out. Maybe a neighbor’s kid walks by while you have a presentation up that talks about 6G, and they mention it to their parents down the street. Who knows how that data could get out?

The other interesting thing is the mix of company data on personal devices and personal computing on company devices, and the possibility of inadvertently sharing information there. People often download stuff to their desktops, and then they turn around and may have their kid or their spouse use that computer to, you know, fulfill an Amazon order or something like that. And who knows who has access to that data on the desktop. So, now, what do you do about these things?

And awareness is one of those things. Training is one of those things that you should do to keep reinforcing the message and hope people take it. But, you know, kind of like leading a horse to water, you hope that they consume as much as they can. But if they choose not to, it becomes a performance issue.

So, I would just say that reinforcing your messages, reminding employees that they shouldn’t be using their work devices for home computing and that they should be cognizant of their surroundings when they’re working with sensitive information.

Those are important things to do.

Lindsey O’Donnell: I completely agree with you. I think you did a really good job describing some of the inadvertent issues that could crop up from working from home right now in terms of insider threats.

One thing that I’ve noticed on social media is employees posting about working from home and sharing pictures of their remote workplaces. There could potentially be sensitive data on the monitor that people could see, or maybe a password on a sticky note next to the computer.

So, I think even the smallest things, like posting a picture of your remote workplace on Twitter, show how far these simple mistakes can go in a remote workplace.

And I think you also made a really good point about how a lot of employees are juggling crying babies and barking dogs in the background. I’ve certainly come across many of those during my work calls, and I think that under these conditions mistakes can happen. For instance, it could be something as simple as sending an e-mail containing critical internal company data to the wrong e-mail address.

Like you mentioned, there’s also a lack of guidelines around how to deal with privacy in this new workplace.

I was writing the other day about a recent survey from IBM Security that found more than half of those surveyed had yet to be given any sort of new security policies on how to securely work from home. So, I do think a lot of employees aren’t aware of the correct security practices and workarounds they need in order to be as secure and private as they can be.

Saryu Nayyar: Yep. From a security-practitioner perspective, what should we be doing? First, security is a business enabler; it should only be visible when needed. That’s an underlying theme we always talk about. The second thing, which is very important: One thing that will not change as much is behavior.

Behavior is especially important. If somebody’s job is to be a data scientist and do research, that is what they should continue to do on your corporate systems when you look at their access and activity; that is what the behavior should continue to be. Now, if all of a sudden their behavior is changing, that is the leading indicator of risk. But how do you catch that? When you do behavior analytics, you understand what the behaviors are.

Another important thing, Lindsey talked about, social media, and I talked earlier about sentiment analysis.

When you’re making your decision on the technologies for building this up, reach out to us. Let’s talk about how sentiment analysis plays a very big role and how you connect to social media; mature platforms have the capability to pull in that information using link analysis as well.

As this new normal sets in, understand that it is more important than ever for you to build that user- and entity-based linking across your different data systems. You might have moved to the cloud much faster than you thought you would.

So, get all of that data together in a home that can quickly give you that value. And then understand the capabilities you need to get to proactive enforcement. This goes back to what Lindsey was saying: I want to make sure that no e-mails are going out with attachments that shouldn’t be sent out. So, if we know somebody’s risky, you can do DLP enforcement and make sure those e-mails don’t go out. Keep working toward a mature platform, or a mature insider-threat capability, even with all of this going on, and reduce the insider risk.
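As a rough sketch of what that kind of risk-driven enforcement could look like at an outbound-mail hook (the threshold, the corp.example domain check and the message fields below are invented for illustration):

```python
# Illustrative sketch: if a sender's current risk score is high, block outbound
# mail with attachments to external addresses instead of only logging it.
def enforce_outbound_mail(message, sender_risk_score, block_threshold=80):
    """Decide what a mail-gateway hook should do with this message."""
    has_attachment = bool(message.get("attachments"))
    is_external = not message["to"].endswith("@corp.example")
    if sender_risk_score >= block_threshold and has_attachment and is_external:
        return "block"  # quarantine and notify the insider-threat team
    return "allow"

msg = {"to": "personal@mail.example", "subject": "Q3 roadmap", "attachments": ["roadmap.pdf"]}
print(enforce_outbound_mail(msg, sender_risk_score=92))  # block
```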

Tara Seals: We’re unfortunately, running out of time. This is such a great topic area that I feel like we could go another hour on this. But we did have a couple of audience questions that I wanted to get to before we had to wrap it up. And since we only have a couple of minutes left here, one is a follow up on the sentiment analysis idea.

What are some of the processes that go into that? Is it just a question of, you know, sort of monitoring social media? How do you even begin to architect sentiment analysis or train your algorithms to recognize it? And, to follow up on that a little bit, how do you balance that with privacy?

Saryu Nayyar: Absolutely. Great question. So, first, on sentiment analysis, there are many algorithms that can be used on that data. To start with, we’ve seen basic things.

Take, let’s say, e-mail logs: You have the subject and the content, and you look through that for certain threat indicators. That’s where the maturity of an insider-risk platform comes into play: How many algorithms does it have? How many threat indicators? What is the depth of understanding of that sentiment analysis? It’s not just about social media; it’s really also about using your own company information, which everybody knows is monitored for such things. So the basic example of e-mails would be a good one to start with. The second thing is, how do you balance this with privacy? Number one, from a privacy perspective, there is no new data being created.

This data already exists; it’s just being analyzed. Second, this is only shared with the right people, with the right privileges, who need to see it. It’s not as if even the SOC team will have access to see the results of sentiment analysis; only very few people have that ability. And once again, it goes back to the platform you’re selecting. Make sure you can do masking, and masking based on rules: Who has access to what I want to mask? What can they see in the platform?
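To illustrate both halves of that answer, the sketch below scans e-mail text for a few simple threat-indicator phrases and reports the result under a masked identifier so that only authorized reviewers could unmask it; the indicator list, salt handling and scoring are simplified placeholders, not a production design.

```python
# Illustrative sketch: look for simple threat-indicator phrases in e-mail text and
# report results under a pseudonymized user ID. Real sentiment analysis uses NLP
# models, and masking/unmasking would be governed by platform access rules.
import hashlib

THREAT_INDICATORS = ["resignation", "take the client list", "delete the logs"]

def mask(user_id, salt="rotate-me"):
    """Pseudonymize the user before results leave the analytics platform."""
    return hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()[:12]

def sentiment_indicators(user_id, text):
    hits = [phrase for phrase in THREAT_INDICATORS if phrase in text.lower()]
    return {"user": mask(user_id), "indicator_hits": hits, "score": min(100, 40 * len(hits))}

print(sentiment_indicators("jdoe", "Before my resignation I plan to take the client list."))
```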

And also, building a partnership with HR, privacy and legal, again, will be very important to implement this.

I’ve done a lot of research on this. I would love to have a separate session with anybody who’s interested. We can talk about sentiment analysis, which is, for everybody, an extremely critical component of insider-risk program success.

Craig Cooper: If you’re using your corporate device for personal computing, you’re being monitored, right? So people need to be cognizant of that. And then there are different ways of collecting data, whether it’s social media, e-mail or whatever, and some of them could be as simple as just going out onto Twitter and whatnot.

The last thing I would say is that if people are using social media, they should understand that it’s, by nature, social, which means it’s public. In most cases, if you’re tweeting something or putting something on LinkedIn, it’s there for the public to see, so it would be considered public information.

Saryu Nayyar: That’s an excellent point. People have an expectation of privacy in some of these situations, particularly when it comes to corporate e-mail and things like that. But then there’s also, of course, that question about building rich data profiles over time by using all of this information.

Now, a bigger picture of someone might not have been the original intention, so I think that’s why that question comes up a lot.

Tara Seals: Unfortunately, I think we’re going to have to leave it there. I would just like to let everyone know that if there are outstanding questions, or anybody wants to follow up on anything, please feel free to e-mail me.

That’s my e-mail address right there. And, you know, thank you for attending our Threatpost webinar. I’d like to thank our panelists, Saryu, Craig and Lindsey. Thank you so much for participating and lending your insights today. And thanks to all of our attendees for joining, and we hope to see you on our future Threatpost webinars.

Please check out the archive of previous Threatpost webinars, available here.

