This Week In Security: The Full Disclosure Rabbit Hole Re-Opens

Had you gone to sleep in 2004 and woken up three days ago, you’d be forgiven for thinking you’d only slept a few hours instead of a few years. This week saw the inglorious return of not just the full disclosure debate, but also of the heated rhetoric that usually accompanies it. Had you awoken to a mix of Maroon 5 and Hoobastank on your iPod, the illusion would’ve been complete. Read on for the full week in review.

The news this week was dominated by two main stories: the critical flaw in Adobe Reader, Flash and Acrobat, and the zero-day vulnerability in the Windows Help and Support Center disclosed by a security researcher who works for Google. Both were messy, and both created quite a bit of discussion in the security community.

Worst things first: the Google-Microsoft tête-à-tête. The story, which may or may not actually be about Google and Microsoft, is the result of work done by Tavis Ormandy, a security researcher who discovered a serious vulnerability in the Windows Help and Support Center. Ormandy notified Microsoft of the flaw late last week, and then posted full details of the problem, along with working exploit code, on the Full Disclosure mailing list five days later. Not much new there; a flaw, a notification and a disclosure, if a quick one.

The problematic bit is that Ormandy happens to work for Google. Needless to say, this chain of events did not sit well with Microsoft, whose security response team went on the offensive in a blog post.

This issue was reported to us on June 5, 2010 by a Google security researcher and then made public less than four days later, on June 9, 2010. Public disclosure of the details of this vulnerability and how to exploit it, without giving us time to resolve the issue for our potentially affected customers, makes broad attacks more likely and puts customers at risk.

One of the main reasons we and many others across the industry advocate for responsible disclosure is that the software vendor who wrote the code is in the best position to fully understand the root cause. While this was a good find by the Google researcher, it turns out that the analysis is incomplete and the actual workaround Google suggested is easily circumvented. In some cases, more time is required for a comprehensive update that cannot be bypassed, and does not cause quality problems.

Ormandy, like other researchers in similar positions, was careful to say in his advisory that this work was his own and that he was not speaking as a Google employee. But the folks in Redmond know who he is and where he works, as do his peers in the research community. Robert Hansen took Ormandy and Google to task for the quick disclosure. “Google says it adheres to responsible disclosure, but at the same time they give Microsoft 5 days to fix their 0day that Google’s researchers themselves created!” he wrote. Other researchers had little patience for the criticisms of Ormandy or Google.

In the end, the mess is likely less about Microsoft and Google’s unfriendly rivalry than about the continued disagreement among reasonable people over the ethics of full disclosure. I have absolutely no comment on that. But, like they say, a man’s got to have a code.

The other major story this week was the critical flaw affecting Adobe’s Flash, Reader and Acrobat. The vulnerability was made public at the end of last week, and Adobe said on Tuesday that it would release a fix for Flash later in the week but would not have patches ready for the other products until a later date. The quick action on Flash was prompted by reports that attacks against the vulnerability were already under way. Adobe also took the opportunity to patch 31 other Flash vulnerabilities, an enormous load of fixes, many of them for code-execution flaws.

Others receiving votes:

Mass SQL Injection Attack Hits Sites Running IIS

Scareware Plague Continues Despite $163,175,539.95 Bust

Apple’s Worst Security Breach: 114,000 iPad Owners Exposed

Discussion

  • Corrector on

    But, like they say, a man's got to have a code.

    A code doesn't have to be a dumb rule like:

    • Always tell the world immediately.
    • Always tell the vendor and no one else.

    A researcher must take into account:

    • the documented procedure and past behavior of the software company: do they take security alerts seriously? have they threatened researchers? do they attempt to minimize bad press by not revealing the reach of problems?
    • how much time do they take to react? do they try to justify why they need more time to fix some issue?
    • is it an implementation issue (like a buffer overflow) or a fundamental design issue?
    • is it fixable at all? (without removing features)

    Extreme examples:

    • Apple has threatened researchers in the past. Thus, Apple does not deserve "responsible disclosure". Apple customers would know about this cult's crazy rhetoric if they cared. It is unethical and irresponsible to give Apple "responsible disclosure".
    • When Vista was released, MS made an indecent amount of publicity about Vista's security (the corrupt French press, that is to say most of the French press, took their word for it and, like the minions they are, repeated the message that Vista is extremely secure).
    • MS tried to minimize a very serious SSL issue, saying among other stupidities that people should check the certificate signature chain themselves!!!
    • Also, they believe that fixing a vuln (even a remotely exploitable one) can wait a month.

    Some software vendors need a hard spanking, such as full disclosure.

    People who say researchers doing FD should be sued need an extra hard spanking, such as losing money, data, or their jobs because of security vulnerabilities.

  • Anonymous on

    The full disclosure debate was largely a creation *of* Microsoft back in the days (circa 2000-2004) when their products were being routinely torn to shreds in public and their shareholders were beginning to take notice of security at all. To be fair, yes, of course Microsoft has made great progress since then, and they have exhibited exemplary vendor response in many cases.

    But that isn't necessarily what "full disclosure" is all about. Although it has been made exceedingly unpopular (probably largely because of the vendors who have "jumped on the security bandwagon"), some people nonetheless still believe that detailed information on vulnerabilities should be made known publicly as soon as possible. That users can take action even before vendors do. That by keeping information private, you just give the real bad guys a better chance of exploiting the vulnerabilities before anybody knows about them or how to address them.

    I applaud Ormandy for his fine work describing this vulnerability and workarounds and do not judge him for his reasons for doing so. Also... this was not an "0day that Google’s researchers themselves created!". Microsoft is responsible for the "creation" of this vulnerability and no one else. I wish more security researchers had the courage to stick to their ideals.

  • Anonymous on

    @Anonymous - correct, Microsoft made the vulnerability, a Google engineer just told everyone how to exploit it. I don't suppose Google will mind when the security industry does the same to them, in that case. :)
  • Corrector on

    I don't suppose Google will mind when the security industry does the same to them, in that case.

    What "same"?

  • Anonymous on

    I understand Tavis 100%. I reported a bug to Microsoft a while ago, and I never got an answer.

    They seem to patch only when people are already exploiting the code; when it's a simple report that stays unknown, they seem to wait for a service pack to patch it incognito. (I understand that too, as handling security and releasing security bulletins takes people and costs money.)

    And the Microsoft policy is clear: they acknowledge people when they issue a security bulletin, but if they patch it quietly, as with a service pack, they don't credit people.

    If they can't acknowledge people properly and fix their products fast, then situations like this will always happen.
