Andy Zmolek of Avaya reports on VoIP security research company VoIPshield's new policy requiring vendors to pay for full details of bugs in their products. He quotes from a letter VoIPshield sent him:
“I wanted to inform you that VoIPshield is making significant changes to its Vulnerabilities Disclosure Policy to VoIP products vendors. Effective immediately, we will no longer make voluntary disclosures of vulnerabilities to Avaya or any other vendor. Instead, the results of the vulnerability research performed by VoIPshield Labs, including technical descriptions, exploit code and other elements necessary to recreate and test the vulnerabilities in your lab, is available to be licensed from VoIPshield for use by Avaya on an annual subscription basis.
“It is VoIPshield’s intention to continue to disclose all vulnerabilities to the public at a summary level, in a manner similar to what we’ve done in the past. We will also make more detailed vulnerability information available to enterprise security professionals, and even more detailed information available to security products companies, both for an annual subscription fee.”
In comments, Rick Dalmazzi from VoIPshield responded at length. Quoting some of it:
VoIPshield has what I believe to be the most comprehensive database of VoIP application vulnerabilities in existence. It is the result of almost 5 years of dedicated research in this area. To date that vulnerability content has only been available to the industry through our products, VoIPaudit Vulnerability Assessment System and VoIPguard Intrusion Prevention System.
Later this month we plan to make this content available to the entire industry through an on-line subscription service, the working name of which is VoIPshield “V-Portal” Vulnerability Information Database. There will be four levels of access (casual observer; security professional; security products vendor; and VoIP products vendor), each with successively more detailed information about the vulnerabilities. The first level of access (summary vulnerability information, similar to what’s on our website presently) will be free. The other levels will be available for an annual subscription fee. Access to each level of content will be to qualified users only, and requests for subscription will be rigorously screened.
So no, Mr. Zmolek, Avaya doesn’t “have to” pay us for anything. We do not “require” payment from you. It’s Avaya’s choice if you want to acquire the results of years of work by VoIPshield. It’s a business decision that your company will have to make. VoIPshield has made a business decision to not give away that work for free.
It turns out that the security industry “best practice” of researchers giving away their work to vendors seems to work “best” for the vendors and not so well for the research companies, especially the small ones who are trying to pioneer into new areas.
As a researcher myself—though in a different area—I can certainly understand Dalmazzi’s desire to monetize the results of his company’s research. One of my friends used to quote Danny DeVito from Heist on this point: “Everybody needs money. That’s why they call it money.” That said, I think his defense of this policy elides some important points.
First, security issues are different from ordinary research results. Suppose, for instance, that Researcher had discovered a way to significantly improve the performance of Vendor’s product. They could tell Vendor and offer to sell it to them. At this point, Vendor’s decision matrix would look like this:
| Don't Buy | Buy |
|-----------|-------|
| 0 | V – C |
Where V is the value of the performance improvement to them and C is the price they pay to Researcher for the information. Now, if Researcher is willing to charge a low enough price, they have a deal and it’s a win-win. Otherwise, Vendor’s payoff is zero. In no case is Vendor really worse off.
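As a toy illustration of this decision (the numbers for V and C are made up, not from the essay), the ordinary-research case reduces to Vendor taking the better of zero and V – C:

```python
# Hedged sketch: Vendor's payoff when Researcher offers to sell an
# ordinary (non-security) improvement. V and C are illustrative values.
V = 100_000  # value of the improvement to Vendor (assumed)
C = 30_000   # price Researcher asks for the information (assumed)

# Vendor's two options and their payoffs.
payoff = {"dont_buy": 0, "buy": V - C}

# Vendor simply picks the larger payoff; the floor is zero, so Vendor
# is never worse off for having been made the offer.
best = max(payoff, key=payoff.get)
print(best, payoff[best])
```

With these numbers Vendor buys; if C exceeded V, Vendor would decline and land on the zero payoff instead.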
The situation with security issues is different, however. As I read this message, Researcher will continue to look for issues in Vendor's products regardless of whether Vendor pays them. They'll be disclosing these vulnerabilities in progressively more detail to people who pay them progressively more money. Regardless of what vetting procedure Researcher uses (and "qualified users" really doesn't tell us that much, especially as "security professional" seems like a pretty loose term), the probability that potential attackers will end up in possession of detailed vulnerability information seems pretty high. First, information like this tends to leak out. Second, even a loose description of where a vulnerability is in a piece of software really helps when you go to find it for yourself, so even summary information increases the chance that someone will exploit the vulnerability. We need to expand our payoff matrix as follows:
| | Don't Buy | Buy |
|--------------|-----------|-------|
| Not Disclose | 0 | V – C |
| Disclose | –D | ? |
The first line of the table, corresponding to a scenario in which Researcher doesn't disclose the vulnerability to anyone besides Vendor, looks the same as the previous payoff matrix: Vendor can decide whether or not to buy the information, depending on whether it's worth it to them to fix the issue [and it's quite plausible that it's not worth it to them, as I'll discuss in a minute.] However, the bottom line on the table looks quite different: if Researcher discloses the issue, then this increases the chance that someone else will develop an exploit and attack Vendor's customers, thus costing Vendor D. This is true regardless of whether or not Vendor chooses to pay Researcher for more information on the issue. If Vendor chooses to pay Researcher, they get an opportunity to mitigate this damage to some extent by rolling out a fix, but their customers are still likely suffering some increased risk due to the disclosure. I've marked the lower right (Buy/Disclose) cell with a ? because the costs here are a bit hard to calculate. It's natural to think it's V – C – D but it's not clear that that's true, since presumably knowing the details of the vulnerability is of more value if you know it's going to be released—though by less than D, since you'd be better off if you knew the details but nobody else did. In any case, from Vendor's perspective the top row of the matrix dominates the bottom row.
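The dominance claim can be checked mechanically. Here's a toy sketch of the expanded matrix (V, C, and D are made-up illustrative numbers, and the "?" cell is crudely approximated as V – C – D, which the essay notes is only a rough guess):

```python
# Hedged sketch of the expanded payoff matrix for Vendor.
# D is the cost of attacks enabled by disclosure. All values assumed.
V, C, D = 100_000, 30_000, 250_000

payoffs = {
    ("not_disclose", "dont_buy"): 0,
    ("not_disclose", "buy"):      V - C,
    ("disclose",     "dont_buy"): -D,
    ("disclose",     "buy"):      V - C - D,  # crude stand-in for "?"
}

# Row dominance: in every column, the Not Disclose row pays Vendor at
# least as much as the Disclose row (holds whenever D > 0).
for action in ("dont_buy", "buy"):
    assert payoffs[("not_disclose", action)] >= payoffs[("disclose", action)]
```

The check makes the structural point concrete: whatever Vendor does, it is better off in a world where Researcher doesn't disclose, which is exactly why disclosure functions as a cost imposed on Vendor rather than a neutral offer.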
The point of all this is that the situation with vulnerabilities is more complicated: Researcher is unilaterally imposing a cost on Vendor by choosing to disclose vulnerabilities in their system and they’re leaving it up to Vendor whether they would like to minimize that cost by paying Researcher some money for details on the vulnerability. So it’s rather less of a great opportunity to be allowed to pay for vulnerability details than it is to be offered a cool new optimization.
The second point I wanted to make is that Dalmazzi's suggestion that VoIPshield is just doing Avaya's QA for them and that they should have found this stuff through their own QA processes doesn't really seem right:
Final note to Mr. Zmolek. From my discussions with enterprise VoIP users, including your customers, what they want is bug-free products from their vendors. So now VoIP vendors have a choice: they can invest in their own QA group, or they can outsource that function to us. Because in the end, a security vulnerability is just an application bug that should have been caught prior to product release. If my small company can do it, surely a large, important company like Avaya can do it.
All software has bugs and there's no evidence that it's practical to purge your software of security vulnerabilities by any plausible QA program, whether that program consists of testing, code audits, or whatever. This isn't to express an opinion on the quality of Avaya's code, which I haven't seen; I'm just talking about what seems possible given the state of the art. With that in mind, we should expect that with enough effort researchers will be able to find vulnerabilities in any vendor's code base. Sure, the vendor could find some vulnerabilities too, but the question is whether they can find and fix so many that researchers can't find any. There's no evidence that that's the case.
Finally, I should note that from the perspective of general social welfare, disclosing vulnerabilities to a bunch of third parties but not to the vendor seems fairly suboptimal. The consequence is that there's a substantial risk of attack which the vendor can't mitigate. Of course, this isn't the researcher's preferred option—they would rather collect money from the vendor as well—but if they have to do it occasionally in order to maintain a credible negotiating position, that has some fairly high negative externalities. Obviously, this argument doesn't apply to researchers who always give the vendor full information. There's an active debate about the socially optimal terms of disclosure, but I think it's reasonably clear that a situation where vulnerabilities are frequently disclosed to a large group of people but not to the vendors isn't really optimal.
Eric Rescorla is a security consultant and founder of RTFM. This essay originally appeared on Educated Guesswork and is reposted here with permission.