The Problem With Bug Counts

It’s getting to be that time of year again, when everyone starts looking for ways to do something with all of the data that they’ve accumulated during the last 12 months. That means reports and top tens and lists and rankings and controversy. And, inevitably, it also means more examples of why using bug counts as a measure of an application’s security isn’t of much use.

This week’s example comes from the controversy over a press release from security vendor Bit9 that listed Google Chrome as the most vulnerable application of 2010. The list, called the “Dirty Dozen,” shows that Chrome had 76 reported high severity vulnerabilities during the calendar year, with Apple Safari, Microsoft Office, Adobe Acrobat/Reader and Mozilla Firefox filling out the top five. The rankings are based simply on the number of bugs listed in NIST’s National Vulnerability Database for each application. And that’s pretty much the extent of the report. No deeper analysis or context, just a list of bug counts for the applications.
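For context, the mechanics of such a ranking are trivial. The sketch below is purely illustrative: the record layout, product names, and sample scores are assumptions for demonstration, not the actual NVD schema or Bit9's data. It simply shows that a "most vulnerable" list of this kind reduces to a sorted tally of high-severity entries per product.

```python
# Illustrative sketch of a count-based ranking like the "Dirty Dozen".
# The record layout and sample values are hypothetical, not real NVD data.
from collections import Counter

# Each record: (product, CVSS base score), assumed to have been pulled
# from the National Vulnerability Database for CVEs published in 2010.
cve_records = [
    ("Google Chrome", 9.3),
    ("Google Chrome", 7.5),
    ("Apple Safari", 8.8),
    ("Adobe Acrobat/Reader", 10.0),
    ("Mozilla Firefox", 6.8),
    # ... thousands more entries in a real data set
]

HIGH_SEVERITY = 7.0  # CVSS v2 base scores of 7.0-10.0 are rated "High"

high_counts = Counter(
    product for product, score in cve_records if score >= HIGH_SEVERITY
)

# The "ranking" is nothing more than this sorted count; exploitability,
# patch times, and mitigations never enter the calculation.
for product, count in high_counts.most_common():
    print(f"{product}: {count} high-severity CVEs")
```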

The list is fine for what it is: a list, and nothing more. It shouldn't be taken as anything other than a statistical ranking, because, as work by a number of security researchers over the last few years has shown, bug counts are an extremely limited method for assessing the security of an application. (For good background on why this is so, see Steve Christey's explanation from 2006.)

One of the main flaws in flaw-counting contests is that they typically only include bugs that are publicly reported. Many vendors have internal security teams that pick apart their products before, during, and after release and are constantly finding vulnerabilities. Those bugs are very often fixed silently in service packs or minor point releases and are never disclosed publicly. There's no way of knowing how many bugs are found and fixed that way, but it could be a significant number.

In fact, Google is one of the few vendors that does publicly announce bugs found by its own researchers. Look through any of the blog posts from Google on new Chrome releases and you’ll likely find mention of bugs found by the company’s research team, which includes a number of well-respected researchers such as Michal Zalewski, Tavis Ormandy, Chris Evans and Neel Mehta, among others. Most vendors don’t do that, which makes it virtually impossible to account for internally discovered vulnerabilities.

Another problem with this methodology is the wide variance in how software makers categorize bugs. Some vendors don't consider certain classes of bugs to be security-related and will give them a lower priority as a result. In some cases, that means the vendor won't include those bugs in external reports or notes when releasing updates or new versions.

But probably the biggest issue with bug counts as a measure of security is that they don't actually measure security in any real way. As Marc Maiffret of eEye Digital Security points out, bugs don't exist in a vacuum.

“This is simply because while many vulnerabilities might exist for Chrome, there are very few exploits for Chrome vulnerabilities compared to Adobe. That is to say that while Chrome has more vulnerabilities than Adobe, it does not have nearly the amount of malicious code in the wild to leverage those vulnerabilities. This is partially due to the fact that Chrome was developed with security in mind and is backed by Google’s research team whom simply are some of the brightest minds in the business. That is why Chrome has had various sandboxing and hardening technologies within it for a while now and companies like Adobe are just getting around to it,” Maiffret wrote in a blog post.

“When striving to understand what the risk level of various applications are you cannot simply count the number of vulnerabilities as no two vulnerabilities are created equally. There are many other factors that go into properly assessing the risk of software being used within your business. The time it takes a vendor to patch a vulnerability (both zero-day and ‘responsible’), the split between vendor and third-party discovered vulnerabilities, how many vulnerabilities a vendor silently patches, etc.”

In short, all the pieces matter and there are simply too many of them to take one of them on its own and use it as an absolute measure of anything.
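To make that concrete, here is a minimal, purely hypothetical sketch of what weighing several of the factors Maiffret lists might look like. The factor names, weights, and sample figures are assumptions invented for illustration; they are not eEye's methodology or any standard scoring model, just a demonstration that a multi-factor view can rank applications very differently than a raw bug count does.

```python
# Hypothetical multi-factor risk sketch; all fields, weights, and sample
# numbers are illustrative assumptions, not a real scoring methodology.
from dataclasses import dataclass

@dataclass
class AppRiskData:
    high_severity_cves: int       # raw count, e.g. from the NVD
    exploits_in_the_wild: int     # known malicious code targeting the app
    median_days_to_patch: float   # vendor responsiveness
    sandboxed: bool               # hardening such as Chrome's sandbox

def risk_score(d: AppRiskData) -> float:
    """Combine several signals instead of relying on a bug count alone."""
    score = 0.2 * d.high_severity_cves      # counts matter, but only a little
    score += 2.0 * d.exploits_in_the_wild   # active exploitation matters more
    score += 0.05 * d.median_days_to_patch  # slow patching extends exposure
    if d.sandboxed:
        score *= 0.5                        # mitigations cut practical risk
    return score

# Two hypothetical profiles: many bugs but sandboxed and rarely exploited,
# versus fewer bugs but heavily exploited and slower to patch.
browser = AppRiskData(76, 2, 15.0, True)
reader = AppRiskData(60, 40, 70.0, False)
print(f"browser: {risk_score(browser):.1f}, reader: {risk_score(reader):.1f}")
```

Under these made-up weights, the application with fewer reported bugs ends up with the far higher risk score, which is exactly the gap a bare bug count hides.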
