Why Vulnerability Research Matters

It seems that any time there’s a high-profile incident in which a vulnerability is disclosed without a patch being available, there is an immediate and loud call from some corners to abolish the practice of vulnerability research. If researchers weren’t spending their days poking holes in software, the bad guys wouldn’t have so many flaws to exploit and we’d all be safer, this argument goes. But the plain fact is that all of us–users and vendors alike–are far better off because of the work researchers do.

When something breaks, the easiest and most logical reaction is to look around, see who broke it, and dump the blame on them. For years that was exactly the tack software vendors took when responding to a report of a new vulnerability in one of their products: blame the finder. A spokesman for the affected vendor would come out and publicly rail against the researcher, using trigger words such as “irresponsible” or “reckless” to make sure the company’s point came across loud and clear.

The rhetoric could sometimes reach an absurd level, with some vendors, such as Oracle, refusing to deal with vulnerability researchers at all. And for the most part, the vendors accomplished what they set out to do. They succeeded in painting researchers who didn’t hew to their guidelines as rogues interested only in fame and self-aggrandizement. As a side effect, some people within the security industry began saying loudly and often that vulnerability researchers were not only wasting their time, since they’d never find every bug in every piece of software, but also exposing users to unnecessary risk by publishing information on flaws.

This line of thinking has a couple of flaws, the most obvious being the assumption that the researcher who disclosed a given flaw is the only one who knew about it before the disclosure. That assumption is not just naive; it’s dangerous. Believing that no one else knows the details of a given zero-day flaw is one of the things that leads to ridiculously long gaps between disclosure and the release of a patch. Even when not all of the details of a vulnerability are public, a bit of information combined with a short window of time before a patch is available can give attackers the head start they need to launch mass exploits.

The reality is that a responsible vendor must assume that attackers knew about a given flaw before it was disclosed. This may not always be the case, but vendors simply have to assume that it is. Consider the cases of the recently patched critical vulnerability in Adobe Reader and the huge Java bug that was disclosed in April. In the case of the Reader flaw, Charlie Miller and Tavis Ormandy each discovered the vulnerability independently. And in the case of the Java bug, Ormandy and Ruben Santamarta each found the flaw at nearly the same time.

So in order to make the no-one-else-knew argument hold up, you have to assume that the only two people on Earth who found these bugs came forward and reported them. No thanks.

A second, perhaps less obvious, flaw in the argument that vulnerability research is harmful is that computer systems aren’t like most other consumer products. They break in weird and unexpected ways, and in many cases the weaknesses and design flaws that lead to those failures aren’t obvious during the design and QA processes. It’s often not until software gets onto production systems and users start putting it through its paces that you begin to see where the weak points lie.

This leads us to the old saw that in order to understand how an attacker might try to break a piece of software, you need to hire people who think like attackers and let them try to break it. This sounds like a great way to sell consulting and pen-testing services, and in a lot of cases that’s exactly what it is. But unlike many other sales pitches, there’s a lot of truth to it.

The clearest evidence of this is the fact that Microsoft, Google and other vendors in recent years have combined to hire a large number of the top vulnerability researchers in the world. Microsoft has hired Ken Johnson and Matt Miller, among others. Google has brought on Michal Zalewski, Ormandy, Neel Mehta and Julien Tinnes. And those are just the in-house researchers. Many vendors also regularly bring in independent researchers and boutique consulting firms to have a go at their applications before release.

The value here is clear, but even the best research teams in the industry won’t find everything. They’re constrained by time, production schedules and sometimes by the parameters the vendor has set for them: feel free to break this and that, but don’t look over here. The process is even more difficult when the target is a Web-based application.

So the work that independent, third-party researchers do is a vital link in this chain. It not only provides an external feedback loop for the vendors, showing them how their products are likely being attacked in the real world, but it also serves to keep the vendors on their toes, ensuring that they don’t become complacent with their own development and testing methods.

Vendors and users may not always like the way that researchers go about their work or the methods they employ for disclosing their results and getting software makers to act on them. But the net effect is still overwhelmingly positive, and we need more of that work, not less. Because the attackers certainly aren’t slowing down.
