InfoSec Insider

Enterprise Data Security: It’s Time to Flip the Established Approach


Companies should forget about auditing where data resides and who has access to it.

There’s an old saying when it comes to big undertakings: Don’t boil the ocean. Well, there is hardly a bigger project in information security than trying to protect corporate data. Yet too many organizations today are, in fact, “boiling the ocean” when it comes to their data-security programs. Worse, they have their entire data-security approach backward – especially when it comes to managing data risk within today’s highly collaborative and remote workforce.

That’s a bold statement, I know, so give me an opportunity to explain what I mean. When most organizations take steps to protect their data, they follow (or, more accurately, attempt to follow) the typical practices. They start by trying to identify all of the sensitive data in their organizations – everything that exists on internal network file shares, on endpoints, on removable media and in all of their cloud services. Then, they focus on how important that data is, i.e., how it should be classified. Is it confidential? Intellectual property? Important? The next step is determining who has access to the organization’s data. Finally, they seek to control or block data as it leaves the organization.

This has been the accepted strategy across the security profession, and, frankly, there is a lot wrong with the model. The honest truth is that it’s not working, because there is simply too much data to successfully identify within the typical enterprise. According to the market research firm IDC, 80 percent of enterprise data will be unstructured by 2025. Let me tell you from experience: unless data is obviously sensitive, such as personal health information or payment-card information, it is difficult, near impossible, for organizations (except maybe the military) to properly classify and rank their data, much less rely on employees to follow a prescribed classification scheme. Employees end up rating everything as classified.

Consider our experience at Code42. We have about 500 employees and, over the last 90 days, have logged a little over two billion file events within our environment. That includes file edits, file moves and similar activities, and it doesn’t count the events continuously occurring on every endpoint. When you consider that much data activity, it becomes clear how challenging it is to ask security professionals to understand who is accessing all of that data and where it is all flowing.
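To put that figure in rough perspective, here is a quick back-of-the-envelope calculation using the numbers above (and assuming, purely for illustration, that events are spread evenly across days and employees):

```python
# Rough scale check using the figures cited above; assumes events are
# spread evenly across days and employees, which is a simplification.
FILE_EVENTS = 2_000_000_000   # a little over two billion file events
DAYS = 90                     # observation window
EMPLOYEES = 500               # approximate headcount

events_per_day = FILE_EVENTS / DAYS
events_per_employee_per_day = events_per_day / EMPLOYEES

print(f"~{events_per_day:,.0f} file events per day")                 # ~22,222,222
print(f"~{events_per_employee_per_day:,.0f} per employee, per day")  # ~44,444
```

Even that crude average works out to tens of thousands of file events per employee, per day, far more than any security team could hope to review by hand.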

Imagine this traditional data-protection funnel as a data breach kill chain: What data do we have? What is the classification of our data? Who has access? What data left the organization and where did it go? As we’ve seen, this is a near-insurmountable challenge, unless an organization throws an enormous amount of resources at the problem and executes flawlessly. We know that’s not going to happen.

What’s the solution, then?

To start, we all need to acknowledge a few basic truths about corporate data:

  • All data is valuable, not just the data that we classify.
  • Every user – not just privileged users – has access to data.
  • Collaboration is constant; therefore, blocking will not work.

Given the above, organizations need to flip their approach to data security upside down and tackle the data entering and leaving the organization first. That traffic is a much smaller subset of the total data in an organization, and monitoring it is a vast improvement over having to scour more than two billion file events at the top of the traditional data funnel. With the inverted funnel, we start with a much smaller set of files on any given day and can determine whether they need more attention.
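For illustration only, here is a minimal sketch of what starting at the narrow end of the funnel could look like: filter a day’s file events down to those that actually crossed the organizational boundary, then rank what is left. The event fields, vector names and ranking rule here are hypothetical assumptions, not any vendor’s product logic.

```python
from dataclasses import dataclass

# Hypothetical egress vectors to watch; the names are illustrative,
# not any particular product's schema.
EGRESS_VECTORS = {"removable_media", "personal_cloud_sync", "web_upload", "email_attachment"}

@dataclass
class FileEvent:
    user: str
    filename: str
    vector: str        # e.g. "file_edit", "removable_media", "web_upload"
    size_bytes: int

def files_needing_attention(events, top_n=100):
    """Start at the narrow end of the funnel: keep only events where data
    actually left the organization, then surface the largest movements first."""
    egress = [e for e in events if e.vector in EGRESS_VECTORS]
    return sorted(egress, key=lambda e: e.size_bytes, reverse=True)[:top_n]

# Billions of internal file events shrink to the handful that left the boundary.
events = [
    FileEvent("alice", "roadmap.docx", "file_edit", 120_000),
    FileEvent("bob", "customer_list.csv", "removable_media", 48_000_000),
]
for e in files_needing_attention(events):
    print(e.user, e.filename, e.vector)
```

The point of the sketch is simply that the egress slice of activity is small enough to triage every day, whereas classifying everything upstream of it is not.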

I’m sure there will be naysayers. But what the industry has been doing, including when it comes to insider risk, hasn’t been working. This approach is far more straightforward than trying to execute an antiquated data-management strategy with a constellation of technologies that must all work near perfectly. Just look at data breaches year after year: In 2019, for instance, there were an estimated 3,950 data breaches, up from 2,013 in 2018, according to Verizon’s Data Breach Investigations Report (DBIR).

Clearly, there are too many data breaches affecting too many organizations and their customers, and there is too much valuable intellectual property leaving organizations and entering others. This is happening because enterprises are looking at the wrong side of the funnel – and they can’t answer some of the most basic questions about their data as a result.

Bottom line: When it comes to protecting corporate data, organizations don’t have to boil the ocean. In fact, they shouldn’t even try. They need to focus on a much smaller data stream – the stream where their data is actually flowing.

Rob Juncker is CTO at Code42.

