Adequate security metrics have seemingly been an unattainable goal, especially when it comes to software security. Too often, organizations simply rely on vulnerability counts for flaws disclosed in an operating system or popular application as a measure of its security.
But intervening variables often make that a faulty exercise. Researchers from the University of Maryland and Symantec Research Labs published and presented a paper this week at the Research in Attacks, Intrusions and Defenses (RAID) symposium in Sweden that introduces a new set of security metrics based on vulnerability and exploit data collected in the real world.
Their paper, “Some Vulnerabilities Are Different Than Others: Studying Vulnerabilities and Attack Surfaces in the Wild,” challenges the notion that vulnerability counts are a good measure of operating system and application security, because most disclosed vulnerabilities are never exploited. Attack surface, they argue, is another misleading metric, because users often increase it by installing applications on top of the operating system or changing configurations.
“The impact of such factors cannot be captured by existing security metrics, such as a product’s vulnerability count, or its theoretical attack surface,” the team of researchers (Kartik Nayak, Daniel Marino, Petros Efstathopoulos and Tudor Dumitras) wrote in their paper.
In response, the researchers propose four new metrics: two that gauge whether disclosed vulnerabilities are actually exploited, and two that gauge how often a product is attacked. The proposed metrics, sketched in code after the list, are:
- A count of vulnerabilities exploited in the wild, compiled from a variety of reliable sources including the National Vulnerability Database and vendor IPS and antivirus signature databases;
- An exploitation ratio, the proportion of a product’s disclosed vulnerabilities that are exploited in the wild within a certain time frame (for example, the first few months after a version’s release), which captures the likelihood that a vulnerability will be exploited;
- Attack volume, which measures how frequently a product is attacked within a specific time frame;
- Exercised attack surface, which captures the portion of the attack surface targeted during a particular time frame, essentially revealing the number of vulnerabilities exploited on a host.
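To make those definitions concrete, here is a rough Python sketch of how the four metrics could be computed for a single product over a fixed time window. The records, field names and window are made-up assumptions for illustration only; the researchers’ own measurements come from NVD entries and vendor antivirus and intrusion-prevention telemetry, not from this simplified format.

```python
# Illustrative computation of the four proposed metrics over toy data.
# All records and dates below are hypothetical.
from datetime import date
from collections import defaultdict

# Hypothetical disclosed vulnerabilities for one product (CVE id -> disclosure date).
disclosed = {
    "CVE-2013-0001": date(2013, 1, 8),
    "CVE-2013-0002": date(2013, 2, 12),
    "CVE-2013-0003": date(2013, 3, 12),
    "CVE-2013-0004": date(2013, 4, 9),
}

# Hypothetical attack telemetry: (host id, CVE id, date the exploit attempt was observed).
attacks = [
    ("host-A", "CVE-2013-0001", date(2013, 2, 1)),
    ("host-A", "CVE-2013-0001", date(2013, 2, 3)),
    ("host-B", "CVE-2013-0003", date(2013, 4, 20)),
    ("host-B", "CVE-2013-0001", date(2013, 5, 2)),
]

window_start, window_end = date(2013, 1, 1), date(2013, 6, 30)
in_window = [a for a in attacks if window_start <= a[2] <= window_end]

# 1. Count of vulnerabilities exploited in the wild during the window.
exploited = {cve for _, cve, _ in in_window if cve in disclosed}
exploited_count = len(exploited)

# 2. Exploitation ratio: exploited vulnerabilities / disclosed vulnerabilities.
exploitation_ratio = exploited_count / len(disclosed)

# 3. Attack volume: how often the product is attacked within the window.
attack_volume = len(in_window)

# 4. Exercised attack surface: distinct exploited vulnerabilities observed per host.
per_host = defaultdict(set)
for host, cve, _ in in_window:
    per_host[host].add(cve)
exercised_surface = {host: len(cves) for host, cves in per_host.items()}

print(f"exploited count: {exploited_count}")            # 2
print(f"exploitation ratio: {exploitation_ratio:.2f}")  # 0.50
print(f"attack volume: {attack_volume}")                # 4
print(f"exercised attack surface: {exercised_surface}") # {'host-A': 1, 'host-B': 2}
```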
The researchers used a number of data sources in their experiments, including the National Vulnerability Database and antivirus and intrusion-prevention telemetry from more than 6.3 million hosts. They examined every version of Windows from XP through Windows 7, as well as recent versions of Adobe Reader, Office and Internet Explorer, and concluded that fewer than 35 percent of the vulnerabilities disclosed in any one of those products are ever exploited. Across all of the products combined, that number drops to 15 percent, and the ratio decreases with newer product releases.
For example, the rollout of ASLR and DEP put Windows 7 in a much better position than earlier versions of Windows; the same goes for later versions of Adobe Reader, which introduced sandboxing.
The researchers said it was important to sidestep lab results and draw their conclusions only from data collected in the field.
“While the vulnerability count and the attack surface are metrics that capture the opportunities available to attackers, we instead focus on attempted, though not necessarily successful, attacks in the field,” the researchers wrote. “This new understanding, potentially combined with existing metrics, will enable a more accurate assessment of the risk of cyberattacks, by taking into account the vulnerabilities and attacks that are known to have an impact in the real world.”
The new metrics, the researchers said, can help system and network administrators get a more accurate risk assessment of their environments, and prioritize patching.