We have a serious sensor problem in the cybersecurity world, and it's a bad one, particularly when it comes to network intrusion detection and prevention (IDS/IPS) sensors. Many security operations center (SOC) teams seem to have given up on them being effective. But is the problem sensor efficacy, or how these sensors have been architected, managed and applied in the environment?
Three specific challenges are causing this problem:
- Managed security services providers (MSSPs) that take the position that less data is better. Why make this case? For one, a human bottleneck naturally occurs when large volumes of data must be analyzed: more data means higher costs and more analysis time, which does not traditionally bode well for an MSSP business model. More fundamentally, people have an upper limit on the streaming data they can analyze, and it is well below the upper limit of a machine.
- The converged device phenomenon. It’s easier to include the sensor on a firewall when it’s all one converged device that can be managed as a single package. However, the perimeter is very often not the best place to put a sensor. I call this weaponizing the perimeter; meanwhile, lateral detection and decrypted monitoring zones, both important best practices, are often ignored.
- And finally, compliance is our own worst enemy. Network monitoring is essential, but an organization can still be compliant even if its monitoring barely works; my analysis of the industry shows that 70 to 85 percent of deployed sensors are not performing for their owners. While compliance is an important aspect of any cybersecurity program, being compliant does not equate to being secure.
Visibility, Sensitivity and Defensive Utility
There are three aspects to consider when evaluating network sensor grids: visibility, sensitivity and defensive utility. Scientific, data-driven management of the sensor grid measures a few key performance characteristics: the volume of alerts generated and total traffic seen (visibility), the number and diversity of signatures that alarm (sensitivity), and whether SOCs recognize and can react to real incidents (defensive utility).
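To make that measurement concrete, here is a minimal sketch of how those three metrics could be computed from exported alert data. Everything in it is an assumption for illustration: the record fields, the traffic figure and the idea of a "true_incident" flag set during triage are hypothetical, not tied to any particular IDS/IPS product.

```python
# Hypothetical alert records exported from a sensor grid; the field names
# are illustrative, not taken from any specific product's schema.
alerts = [
    {"sensor": "dmz-1", "signature": "ET-2034567", "true_incident": False},
    {"sensor": "dmz-1", "signature": "ET-2031111", "true_incident": True},
    {"sensor": "core-2", "signature": "ET-2034567", "true_incident": False},
]

total_traffic_gb = 1200.0  # traffic the grid inspected, from sensor statistics

# Visibility: how much traffic the grid sees and how often it speaks up.
alert_volume = len(alerts)
alerts_per_gb = alert_volume / total_traffic_gb

# Sensitivity: how many distinct signatures are actually alarming.
distinct_signatures = len({a["signature"] for a in alerts})

# Defensive utility: did any alerts correspond to real, confirmed incidents?
true_incidents = sum(1 for a in alerts if a["true_incident"])

print(f"visibility: {alert_volume} alerts over {total_traffic_gb} GB "
      f"({alerts_per_gb:.4f} alerts/GB)")
print(f"sensitivity: {distinct_signatures} distinct signatures alarming")
print(f"defensive utility: {true_incidents} confirmed incident(s)")
```

Tracked over time, even these three simple numbers will show whether a sensor is dark, noisy or actually contributing to defense.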
Let’s start with visibility. What do you want to see? Actually, the question should be “who do you want to see?” because servers very rarely click on links without a user’s involvement. Today, most attacks begin with a user clicking on a link or with a malicious insider’s cooperation, especially in the case of ransomware. This is another reason to deploy lateral sensors that look for the attacker’s reconnaissance and lateral movement within your network, which leads us nicely to the next topic: sensitivity.
Sensitivity in a network sensor is directly related to the number, diversity and effectiveness of the signatures enabled on your devices. There are approximately 20,000 signatures available, and maybe 2,500 of those are modern and relevant. However, many companies, especially MSSPs, will enable only 100 or fewer. That means MSSP customers are paying for a sensor with 0.5 percent of its total sensitivity, or 4 percent of its assumed relevant sensitivity, enabled for active monitoring. MSSPs do this solely to reduce alert volume to a level their human security analysts can manage. These devices exist to alarm on potentially malicious activity, but we have essentially blinded them, significantly reducing their value in the process.
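The arithmetic behind those percentages is worth running against your own deployment. A quick sketch using the counts above (the figures are this article's estimates, not measurements from any specific product):

```python
total_signatures = 20_000    # approximate signatures available
relevant_signatures = 2_500  # the subset that is modern and relevant
enabled_signatures = 100     # a typical MSSP configuration

print(f"{enabled_signatures / total_signatures:.1%} of total sensitivity enabled")      # 0.5%
print(f"{enabled_signatures / relevant_signatures:.1%} of relevant sensitivity enabled")  # 4.0%
```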
Another factor related to sensitivity in a network sensor is ongoing signature tuning. Many teams will investigate a signature alert, determine that it is a false positive, and then disable the signature forever going forward. This is a terrible and dangerous practice: I routinely see a single signature produce a complex mix of false-positive, true-positive and non-actionable resolutions. A signature cannot be dismissed simply because it alarmed falsely once; you need context and situational information in every instance to make that determination.
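One way to make that determination data-driven, instead of disabling a signature after a single false positive, is to track the resolution mix per signature over time. Here is a minimal sketch; the resolution labels, sample data and review threshold are assumptions, not an industry standard:

```python
from collections import Counter, defaultdict

# Per-signature resolution history; labels and data are illustrative.
history: dict[str, Counter] = defaultdict(Counter)
resolutions = [
    ("ET-2034567", "false_positive"),
    ("ET-2034567", "true_positive"),
    ("ET-2034567", "false_positive"),
    ("ET-2031111", "non_actionable"),
]
for signature, outcome in resolutions:
    history[signature][outcome] += 1

def disable_candidate(counts: Counter, min_alerts: int = 50) -> bool:
    """Flag a signature for disable *review* only when there is a meaningful
    sample size and it has never produced a true positive. The threshold is
    an assumption; tune it to your environment."""
    total = sum(counts.values())
    return total >= min_alerts and counts["true_positive"] == 0

for signature, counts in history.items():
    print(signature, dict(counts), "review for disable:", disable_candidate(counts))
```

Even then, a flagged signature deserves human review with full context, not automatic removal.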
Next is defensive utility, which is a fancy way of saying, “do we zap the bad guys?” I have seen a vast number of sensors that have not detected anything in months or even years. Why do we pay for monitoring solutions and annual maintenance for such poor outcomes?
Often there is an additional complication that also impacts defensive utility: these devices are frequently managed by IT departments that are disconnected from their defensive security purpose, which makes the constant tuning they require hard to accomplish. I have come across hundreds of security professionals who have given up on their IT department, not their sensors.
Take a Better Approach
Several challenges reduce the efficacy of the sensors we deploy in our environments, and they have the potential to significantly impact our security posture. Many SOCs may be taking the wrong approach to the placement and configuration of their sensors, essentially reducing visibility into what is occurring in the environment.
This may include tuning sensors down, treating false positives the same regardless of context and enabling only a limited number of signatures for detection.
To address these issues, SOC teams need to look at solutions that provide visibility into the health and status of their sensors. This will not only improve the security posture of the environment but also ensure they realize an appropriate return on investment from their sensors.
Chris Calvert is CTO and co-founder at Respond Software, now a part of FireEye.