The Case for a Government Bug Bounty Program

Once upon an Internet, a security researcher who discovered a vulnerability had very limited options for what to do with that information. He could send it to the vendor and hope someone cared enough to patch it; he could post it to a mailing list for all to see; or, if he had the right contacts, he could attempt to sell it. The rise of vendor-sponsored bug bounty programs in recent years has changed that dynamic forever, providing a nice source of both recognition and income for security researchers. But the threat landscape may have already outstripped the existing reward systems, creating the need for an alternative.

Bug bounty programs have been a boon for both researchers and the vendors who sponsor them. From the researcher’s perspective, having a lucrative outlet for the work they put into finding vulnerabilities is an obvious win. Many researchers do this work on their own time, outside of their day jobs and with no promise of financial reward. (One could argue that no one is asking them to look for these vulnerabilities, so they shouldn’t expect any reward, but that’s a separate discussion. They are doing it, and it’s a net benefit in most cases.) The willingness of vendors such as Google, Facebook, PayPal, Barracuda, Mozilla and others to pay significant amounts of money to researchers who report vulnerabilities to them privately has given researchers both an incentive to find more vulnerabilities and a motivation not to pursue full disclosure.

For the vendors, bug bounty programs serve several purposes. They help establish good working relationships with researchers, increasing the likelihood that someone who finds a vulnerability in their products will come to them first. Rewards also serve as a relatively inexpensive way to identify and repair those vulnerabilities. Even at the high end of the scale, which is occupied by Google’s infrequent special rewards that can reach into the tens of thousands of dollars, the money is not a major expense for the companies. Most bounties are in the $1,000-$5,000 range.

However, those dollar figures are dwarfed by the ones available from the biggest bug bounty program of them all: the private vulnerability market. Researchers willing to go that route can make their year with just one sale. The prices vary greatly depending upon the product in which the vulnerability is found, as well as who the buyer is, but critical flaws in high-profile applications such as Internet Explorer or Windows can bring six figures. And there is no shortage of buyers in this game, with defense contractors, governments, private brokers and others all willing to pony up for good bugs. That level of money can be very attractive for a researcher, especially when the alternative is to report it to the vendor and perhaps get nothing in return other than an acknowledgement in a security bulletin from Microsoft.

Despite the money available from various sources, some researchers still wind up posting the details of their findings publicly, for a variety of reasons. Some may not have the contacts to sell a bug on the open market, others may be too young or otherwise ineligible for vendor reward programs, and still others may have tried to go to the vendor with their bug and been rebuffed. This set of circumstances could be an opportunity for the federal government to step in and create its own separate bug reward program to take up the slack.

Certain government agencies are already buying vulnerabilities and exploits for offensive operations. But the opportunity here is for an organization such as US-CERT, a unit of the Department of Homeland Security, to offer reasonably significant rewards for vulnerability information to be used for defensive purposes. A large number of software vendors don’t pay for vulnerabilities, and many of them produce applications that are critical to the operation of utilities, financial systems and government networks. DHS has a massive budget ($39 billion requested for fiscal 2014), and even a tiny portion of that allocated to buying bugs from researchers could have a significant effect on the security of the nation’s networks. Once the government buys the vulnerability information, it could then work with the affected vendors on fixes, mitigations and notifications for customers before details are released.

If a researcher finds a vulnerability in a product covered by this program (however its scope would be defined), he would have the option of selling the information to the government rather than simply disclosing it publicly. This would also help keep some of these vulnerabilities off the private market, where their eventual use is unknown at best.

US-CERT and ICS-CERT already perform part of this function, working with researchers and vendors to coordinate patches and disclosure timelines. The difference in what they’d be doing would be negligible, but the effect could be huge. Manufacturers of SCADA (supervisory control and data acquisition) and ICS (industrial control system) software have been notoriously slow to fix vulnerabilities and indifferent, if not outright hostile, to security researchers who try to report serious bugs to them. This would be a problem even if the applications in question were just desktop software, but these are the systems that control some of the country’s most vital networks. This is not a theoretical problem turning on possible vulnerabilities and speculative attacks. Serious attacks against these systems are occurring right now, and no one can afford for the vendors to sit on their hands any longer.

This plan certainly wouldn’t solve the entire problem. Nothing short of unicorns writing magical bug-free software will do that. And government involvement in security usually isn’t a desired outcome, but in this case it may be the best alternative.

