InfoSec Insider

Enterprise Data Security: It’s Time to Flip the Established Approach


Companies should forget about auditing where data resides and who has access to it.

There’s an old saying when it comes to big undertakings: Don’t boil the ocean. Well, there’s hardly a bigger project in information security than trying to protect corporate data. But the reality is that too many organizations today are indeed “boiling the ocean” when it comes to their data-security program. In fact, they have their entire data-security approach backward – especially when it comes to managing data risk within today’s highly collaborative and remote workforce.

That’s a bold statement, I know, so give me a chance to explain what I mean. When most organizations take steps to protect their data, they follow (or, more accurately, attempt to follow) the typical practices. They start by trying to identify all of the sensitive data they have – everything that exists on internal network file shares, on endpoints, on removable media and across all of their cloud services. Then, they focus on how important the data is, i.e., how it should be classified. Is the data confidential? Intellectual property? Important? The next step is determining who has access to the organization’s data. Finally, they seek to control or block data when it leaves the organization.
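To make that sequence concrete, here is a minimal sketch in Python of the top-down workflow just described. The names in it (Repository, classify(), map_access()) are hypothetical placeholders, not any real product’s API; the point is only the ordering of the steps.

    # A minimal sketch of the traditional top-down workflow: inventory everything,
    # classify it, map who can access it, and only then look at what leaves.
    # All names here (Repository, classify, map_access) are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class File:
        path: str

    @dataclass
    class Repository:
        name: str
        files: list = field(default_factory=list)

    def classify(f: File) -> str:
        # Placeholder classifier; in practice this relies on users tagging data correctly.
        return "confidential" if "finance" in f.path else "internal"

    def map_access(f: File) -> list:
        # Placeholder; in practice this means auditing ACLs across every system.
        return ["everyone"]

    def traditional_funnel(repositories: list, egress_events: list) -> list:
        inventory = [f for repo in repositories for f in repo.files]   # Step 1: find it all
        labels = {f.path: classify(f) for f in inventory}              # Step 2: classify it all
        access = {f.path: map_access(f) for f in inventory}            # Step 3: map access to it all
        # Step 4: only after all of the above, inspect what actually left.
        return [e for e in egress_events
                if labels.get(e["path"]) == "confidential"
                and "everyone" in access.get(e["path"], [])]

Notice that steps one through three have to run over the entire data estate before step four produces anything useful.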

This has been the accepted strategy across the security profession, and, frankly, there is a lot wrong with this model. The honest truth is that it’s not working, because there is simply too much data to identify within the typical enterprise. According to the market research firm IDC, 80 percent of enterprise data will be unstructured by 2025. Let me tell you from experience: unless data is obviously classifiable as personal health information or payment-card information, it is difficult, nearly impossible, for organizations (except maybe the military) to properly classify and rank their data, much less rely on employees to follow a prescribed classification scheme. Employees end up rating everything as classified.

Consider our experience at Code42. We have about 500 employees and, over the last 90 days, have logged a little over two billion file events within our environment. This includes file edits, file moves and similar activities, and it doesn’t count the events continuously occurring on every endpoint. With that much data activity, it becomes clear how challenging it is to ask security professionals to understand who is accessing all of that data and where it is flowing.
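For a rough sense of scale, here is the back-of-the-envelope arithmetic on those figures (two billion events, 90 days, about 500 employees):

    # Back-of-the-envelope arithmetic using the figures quoted above.
    file_events = 2_000_000_000   # "a little over two billion" file events
    days = 90
    employees = 500

    per_day = file_events / days                  # ~22 million events per day
    per_employee_per_day = per_day / employees    # ~44,000 events per employee per day
    print(f"{per_day:,.0f} events/day, ~{per_employee_per_day:,.0f} per employee per day")

That works out to roughly 44,000 file events per employee, every single day.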

Imagine this traditional data-protection funnel as a data breach kill chain: What data do we have? What is the classification of our data? Who has access? What data left the organization and where did it go? As we’ve seen, this is a near-insurmountable challenge, unless an organization throws an enormous amount of resources at the problem and executes flawlessly. We know that’s not going to happen.

What’s the solution, then?

To start, we all need to acknowledge a few basic truths about corporate data:

  • All data is valuable, not just the data that we classify.
  • Every user – not just privileged users – has access to data.
  • Collaboration is constant; therefore, blocking will not work.

Given the above, organizations need to flip their approach to data security upside down and first tackle the data entering and leaving the organization. That is a much smaller subset of the total data in an organization, and a vast improvement over having to scour more than two billion file events at the top of the traditional funnel. With the inverted funnel, we start with a much smaller set of files on any given day and can see whether any of them need more attention.
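To contrast with the earlier sketch, here is a minimal illustration of the inverted funnel. Again, the event fields, destinations and the needs_attention() heuristic are hypothetical; the idea is simply that the day’s egress events are the starting set, and only the flagged subset gets further investigation.

    # A minimal sketch of the inverted funnel: start from the (much smaller) stream
    # of files moving in and out, and pull in context only for events that look risky.
    # The event fields, destinations and needs_attention() heuristic are hypothetical.

    TRUSTED_DESTINATIONS = {"corporate-sharepoint", "corporate-gdrive"}

    def needs_attention(event: dict) -> bool:
        # Triage on the movement itself (unusual destination, removable media,
        # personal cloud accounts) rather than on a pre-built inventory.
        return event["destination"] not in TRUSTED_DESTINATIONS

    def inverted_funnel(egress_events: list) -> list:
        risky = [e for e in egress_events if needs_attention(e)]   # the day's starting set
        return sorted(risky, key=lambda e: e["timestamp"])         # investigate this subset first

    # Example: only the upload to a personal cloud account is flagged.
    events = [
        {"path": "roadmap.pptx", "destination": "personal-dropbox", "timestamp": 2},
        {"path": "notes.txt", "destination": "corporate-gdrive", "timestamp": 1},
    ]
    print(inverted_funnel(events))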

I’m sure there will be naysayers. But what the industry has been doing, including when it comes to insider risk, hasn’t been working. The inverted approach is far more straightforward than an antiquated data-management strategy built on a constellation of technologies that must be deployed nearly perfectly. Just look at the data breaches year after year: In 2019, for instance, there were an estimated 3,950 data breaches, up from 2,013 in 2018, according to Verizon’s Data Breach Investigations Report (DBIR).

Clearly, there are too many data breaches affecting too many organizations and their customers, and there is too much valuable intellectual property leaving organizations and entering others. This is happening because enterprises are looking at the wrong side of the funnel – and they can’t answer some of the most basic questions about their data as a result.

Bottom line: When it comes to protecting corporate data, organizations don’t have to boil the ocean. In fact, they shouldn’t even try. They need to focus on a much smaller data stream – the stream where their data is actually flowing.

Rob Juncker is CTO at Code42.



Discussion

  • Janine Darling

    "In fact, they have their entire data-security approach backward...". A bold statement maybe but also truer than true. The big mother ship that is cybersecurity has begun to turn it's data security and privacy approach to the data itself. Finally. Because with 50Billion+ endpoints and more on the way (the pipes), there isn't a way to protect the data unless you're proactively protecting the data (the water inside the pipes). Let's get a megaphone out on this. Without a datacentric component as part of your security strategy, data is not secure. Period. Great (right on!) post.
  • Channin

    I agree that a data-centric approach is required now. Cloud native organizations don't really have a traditional network, and data is constantly flowing among numerous service providers/third parties. It makes sense to watch what's going in and out first, but I'm not clear on whether you're suggesting abandoning data classification and other antiquated practices or focusing on them after you've established where the data is/should be flowing. If it's the former, will this approach allow for easy and complete response to requests for data deletion and other types of data owner requests, as well as managing internal access to ensure that data use is compliant?
  • TrevorX

    While I don't disagree with the theory, the complexities of the implementation are what concern me. How do you monitor and control data egress without comprehensive monitoring and logging? You can't interrogate traffic at the network edge, as that won't capture any encrypted traffic (getting to the point where that should be most file transfers these days), and is useless for understanding how employees outside the corporate LAN are communicating with cloud storage. So you're proposing that organisations continue running comprehensive logging of all data access (both on-prem and cloud), but instead of attempting to analyse and categorise all data proactively and create rules around that structure, they instead focus on logs demonstrating data egress to and from an organisation? How do you determine when a file has been accessed by a local account for the purpose of emailing? You either have complete transparency of your email system (which is a potential privacy violation as well as having security implications) or your staff email activity is opaque, in which case you can't analyse it. The same goes for staff transferring files using a plethora of methods traversing encrypted endpoints. I am not using such examples as an argument against your proposal, merely demonstrating my lack of knowledge here - for your proposal to be applied you clearly use tools and technologies I am not familiar with, which I'd genuinely like to understand more about.
