The two most highly publicized vulnerability disclosures last year were also the most heavily criticized: Dan Kaminsky’s DNS bug and the SSL flaw discovered by a group of independent and academic researchers. The two events played out in similar fashion, with some details coming out in advance of the full disclosures, a partial disclosure, if you will. And that’s where the trouble started.
But it’s also where the two stories diverge.
In the case of Kaminsky’s work on the DNS vulnerability, he spent a lot of time working behind the scenes with Microsoft, DNS experts, US-CERT and other interested parties to give them the details of the problem and put together a coordinated patching process. Everything went smoothly, and all of the parties agreed on a workable schedule. Then word began to spread that a major DNS problem had been found, and Kaminsky himself began talking a bit about it, without revealing any of the details.
Kaminsky planned to reveal all of the details of the bug at the Black Hat conference in August, but in the months leading up to his talk, people accused him of over-hyping his findings and said he didn’t have the goods. So he briefed several other researchers on what he had, and they all confirmed the seriousness of the problem. But then one of them inadvertently let the details slip in a blog post.
And then things began to spiral out of control, as Kaminsky had to release a few more details of the bug, and eventually the cat clawed its way completely out of the bag. By then, though, a large portion of the vulnerable DNS servers had been patched, and the publicity surrounding the accidental disclosure probably pushed some of the laggards to install the patch.
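For admins who wanted to know whether their own resolver had the fix, which amounted to randomizing DNS source ports so attackers could no longer predict them, DNS-OARC ran a public test service. Here is a minimal sketch, assuming the third-party dnspython package and that the porttest.dns-oarc.net service is still reachable:

```python
# A minimal sketch of checking for the Kaminsky-era DNS fix.
# Assumes dnspython (pip install dnspython) and that DNS-OARC's
# port-test service is still online.
import dns.resolver

def check_source_port_randomization():
    # Uses whatever resolver the OS is configured with, which is the
    # resolver whose patch status you actually care about. Resolving
    # this name forces recursive lookups; the service watches the
    # source ports those queries arrive from and grades them.
    answer = dns.resolver.resolve("porttest.dns-oarc.net", "TXT")
    for rdata in answer:
        # The TXT payload reports the ports observed and a rating
        # such as GREAT, GOOD or POOR.
        print(rdata.to_text())

if __name__ == "__main__":
    check_source_port_randomization()
```

A POOR rating meant the resolver was still drawing queries from a fixed or predictable port, exactly the behavior the patch was meant to eliminate.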
The circumstances were entirely different in the case of the SSL flaw. This was partial disclosure by design. In December, a team of researchers that included Alexander Sotirov, Jacob Appelbaum, Marc Stevens and several others announced that they had found a serious, Internet-level flaw and had designed a workable attack to exploit it. They invited speculation on what the problem was and promised that it would be big news when they revealed their results at the Chaos Communication Congress in Berlin at the end of the month. They also briefed a number of reporters (including me) on the full details of the attack ahead of the release. Sotirov and his team took some heat on security mailing lists and forums for their coyness and partial disclosure, but once the details of the attack came out, all seemed to be forgiven.
The seriousness of the attack, which used MD5 hash collisions to create a rogue certificate authority that could vouch for any site on the Internet, was evident, and the team won praise for having worked with browser vendors and certificate authorities to remedy the problem before announcing their results publicly.
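The underlying weakness was that a handful of certificate authorities were still signing certificates with MD5, a hash function whose collision resistance had already been broken. As a rough illustration, and emphatically not the researchers’ method, the sketch below fetches a server’s leaf certificate and reports whether it carries an MD5-based signature; it assumes Python’s third-party cryptography package, and the host name is just a placeholder:

```python
# A rough illustration: does a server's leaf certificate carry an
# MD5-based signature, the weakness the rogue-CA attack exploited?
# Assumes the third-party "cryptography" package.
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import hashes

def leaf_cert_uses_md5(host: str, port: int = 443) -> bool:
    # Skip chain validation; we only want to inspect the certificate.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    # MD5-signed certificates are what made colliding certificates,
    # and hence a forged CA, possible in 2008.
    return isinstance(cert.signature_hash_algorithm, hashes.MD5)

if __name__ == "__main__":
    # "example.com" is a placeholder, not a site from the research.
    print("MD5-signed leaf:", leaf_cert_uses_md5("example.com"))
```

Note that this only inspects the leaf certificate, not the full chain; the actual attack targeted a CA that would sign requests with MD5, which is a server-side issuing practice no client-side check can fully detect.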
So, although the circumstances of the two disclosures were different, the end result was the same: some details were revealed before the full disclosure. This has led to a revival of the eons-old disclosure debate, albeit with a slightly different twist this time around. In the early part of this decade, it was common practice for researchers to publish full details of a new flaw—including exploit code. Most said they did it to pressure vendors to move faster on patches. And it often worked. (See: Microsoft and Oracle.) But a backlash against this practice built up and it soon became the exception rather than the norm.
Now, researchers are going down the same road again, but in many cases it’s tied to an appearance at a conference or the publication of a paper or book. It’s a classic technique used for decades by book publishers and movie studios: Give the people a little taste and build up some demand for the big show. A lot of people in the industry have a problem with this, but I’m not one of them.
The vast majority of this kind of security research is done as a hobby by people who don’t get paid for it. Kaminsky does his DNS work as a side project. Sotirov is an independent researcher and many of his peers on the SSL attack team are academics. They do this work out of intellectual curiosity and if they can find some way to profit from it while still taking the necessary steps to protect vulnerable users, fine. Certainly, guys like Kaminsky, Sotirov, David Litchfield and others have gained a lot of recognition from their work, so it’s not all for the good of mankind.
And it’s a complex issue. Even the researchers themselves seem to be of two minds on it. Kaminsky said as much on his blog after the DNS disclosure process:
Partial disclosure has always been looked down upon, rightfully so, because it’s so amazingly easy to abuse. But if our goal is to protect customers, and one particular bug will affect almost all of them, and a phased disclosure of information will protect the greatest number of customers possible — then perhaps there’s a place for this mode.
It’s certainly not a path you can safely decide to take by yourself, however. That’s what I did, when I refused to tell anyone else in the security industry what the bug was. It’s not something I’d ever do again. It’s not just that you can’t vouch for your own bugs. It’s that, without peer review, you don’t know what bugs people are going to think you’re recapitulating, and you don’t even really understand the severity of your issue.
This discussion will likely continue ad infinitum, and there will be a good extension of it at the SOURCE Boston conference next week, on a panel that will include Sotirov, Dino Dai Zovi, Kaminsky, Katie Moussouris of Microsoft and Ivan Arce of Core Security. Don’t expect a resolution, but count on some heated debate.