When researcher Kevin Finisterre found a security error in drone-maker DJI’s systems enabling him to access flight log data and images of customers, he thought he had hit the $30,000 jackpot as part of the drone company’s newly announced bug bounty program.
Instead, when the incident occurred in 2017, he was met with thinly veiled threats from the company. The incident shows that bug bounty programs still have a way to go when it comes to legal “safe harbor” terms – conditions clearly outlining how researchers acting in good faith can report bugs without facing legal repercussions.
“Against my better judgement – acting on good faith – I submitted something before the bounty was solidified,” Finisterre, who ended up walking away from the prize and publishing his findings publicly, told Threatpost. “I won’t make that mistake again. It almost completely turned me away from doing bounty programs again in the future.”
Since the launch of the Hack the Pentagon program in 2016, bug bounty programs have quickly grown in popularity. Bugcrowd’s State of Bug Bounty report this year found that the number of programs launched in the past year has jumped by 40 percent.
That includes players such as Google, Facebook, and Microsoft offering high rewards – and with good reason. The programs have helped unearth important vulnerabilities, including a serious flaw in Chrome on Google’s Pixel in 2018 and a massive Facebook remote code execution flaw in 2017.
As for participants, bug bounties can be life-changing – in fact, top-earning researchers make 2.7 times the median salary of a software engineer in their home country. Across all programs, the average payout per vulnerability is $781, a 73 percent increase over last year, according to Bugcrowd’s report.
“The numbers have exploded,” Marten Mickos, CEO of HackerOne, told Threatpost. “There’s a larger number of researchers participating than ever.”
But cases like Finisterre’s and other infamous bug bounty incidents have put both hackers – and companies – in legal harm’s way, and led to heightened scrutiny of the need for clear legal terms and responsible disclosure language within programs.
“I think that the rubber is hitting the road again and again,” Christie Terrill, partner at Bishop Fox, told Threatpost. “When you look at companies trying to shift criminal and civil liabilities to the researchers themselves, instead of creating a safe harbor for these researchers, you’re creating a bad situation.”
Past Issues In Programs
For researchers, white hat hackers and others participating in bug bounty programs, examples such as the DJI incident have fueled worries about the level of trust in responsible vulnerability disclosure on both sides.
Issues similar to the DJI incident have popped up in the news over the years. In those cases, both the companies and the participants were put at risk, with problems stemming from vague legal language in bounty programs.
One notable issue came to a head in 2015, when researcher Wes Wineberg alleged that Facebook’s security chief threatened legal action against him after he discovered a flaw in Facebook-owned Instagram.
These incidents have impacted the level of trust between researchers and vendors, said Terrill – particularly when it comes to legal issues, where large corporations may have the upper hand over lone bounty hunters.
“Corporations dole out contracts all the time trying to shift liabilities to vendors and third parties… That’s common sense,” said Terrill. “But it’s not fair doing that when you’re comparing corporations to individual security researchers. Most researchers don’t have cyber liability insurance, they don’t have a legal team behind them.”
These legal issues were a big problem for Finisterre after he received an initial bug bounty contract from DJI, which had concerning requirements, such as barring him or people he knew from speaking ill of the drone company.
However, the legal route for going back and forth on the contract was too costly and ultimately not worth it, Finisterre said: “It [the contract] clearly offered me no protections. It was slanted toward protecting them,” Finisterre said. “I had some quotes on what it would cost to get it edited [by a third-party legal team] and ultimately it was a lot more money than I was willing to invest.”
Importantly, it’s not just researchers who are potentially at risk when it comes to a lack of safe harbor language in bug bounty programs. In 2016, Uber paid a ransom, under the guise of a bug bounty award, to hackers who stole millions of user credentials.
In a congressional hearing in February 2018, Uber CISO John Flynn confirmed that the man behind the breach of 57 million users’ data was paid by Uber, through its bug bounty program, to destroy the data. The case shows that bug bounty issues can also hurt the company, particularly if policies separating “good faith” hacking from “extortion” are not made clear.
“We want a commitment to keep [researchers] safe as long as they hack with good intent… but we also have to make the rules of conduct black and white… in case hackers ever stray away from good conduct and intent,” said HackerOne’s Mickos.
The Rise of Safe Harbor Language
To combat these types of bug bounty issues, companies can try to be as clear as possible in their bug bounty paperwork to ensure hackers have a clear understanding of the legal terms and conditions for hacking their systems.
At this point, most companies still lack legal safe harbor language in the contracts that invite hackers to compete to find security vulnerabilities. As in a competitive sport, hackers work hard for bragging rights when finding vulnerabilities – and for the bug bounty cash.
Too often, companies make vague statements in lieu of specific safe harbor provisions – statements that pussyfoot around specifics and merely oblige hackers not to violate laws. Experts say that absent a cogent safe harbor provision, companies need to spell out the specific legal limitations on which systems can’t be tested, as well as give specific authorization to access the systems that can be.
Amit Elazari, an expert in the policies and legalese surrounding bug bounty programs, told Threatpost that, as of June, only ten companies – out of hundreds or thousands – include near-perfect safe harbor policy terms.
That list includes Dropbox, DJI (which adopted the terms after the 2017 incident), Ed, LegalRobot, Keeper, HackerOne, Upserve, Zomato, RightMesh and Bugcrowd. Other companies, such as Uber, have adopted “partial safe harbor terms,” she said.
Specifically, a bug bounty program with safe harbor language would outline specific authorization, with clear scope, around anti-hacking laws such as the Computer Fraud and Abuse Act (CFAA) or the Digital Millennium Copyright Act (DMCA).
For Edwin Foudil, a security researcher who goes by the alias “EdOverflow,” safe harbor language is not just a legal necessity – it builds trust in the company deploying the bug bounty program and shows that they “really value the security researcher’s input,” he said.
“Safe harbors are welcoming and demonstrate that the company behind the bug bounty program is security-conscious,” he said. “I know that if I stick to the guidelines, the company will not come chasing after me and pursue legal action. Nowadays, I receive multiple invitations to bug bounty programs a day, and those that have safe harbors stand out immediately and are more likely to be the program that I will focus on.”
“I actually look for sections titled ‘safe harbor’ or some verbiage describing how participating in the program and abiding by the rules will not put me at risk,” he said. “In fact, in some cases, the company even makes it explicitly clear that if a third party initiates legal action against me as a researcher and I was following the rules, they will step in and ensure that this is clear to all parties involved.”
Dropbox is considered to be a leader in implementing a safe harbor-compliant bug bounty program. The company’s bug bounty program guidelines clearly outline “applications in scope,” responsible disclosure terms, “out-of-scope vulnerabilities,” and “consequences of complying with this policy.”
“We will not pursue civil action or initiate a complaint to law enforcement for accidental, good faith violations of this policy,” the policy states. “We consider activities conducted consistent with this policy to constitute ‘authorized’ conduct under the Computer Fraud and Abuse Act.”
Uber, meanwhile, is a good example of a company learning from its mistakes and tightening its safe harbor language, say experts. After its bug bounty scandal emerged, the company tightened its program policies to more thoroughly outline “good-faith vulnerability research and disclosure,” and added language defining what constitutes unacceptable behavior, stating that the company wants researchers “to hunt for bugs, not user data.”
For instance, one outlined policy makes it clear that Uber won’t take legal action against researchers – as long as they report vulnerabilities with no strings attached. Another clarifies the difference between researchers who act in good faith and people who don’t – so Uber itself won’t get burned by potential bad actors.
A Bright Future
Industry regulations and guidelines may start popping up around vulnerability disclosure policies that can be specifically applied to bug bounty programs. One important guideline came in 2017, when the Department of Justice published a framework for a Vulnerability Disclosure Program for Online Systems, which includes steps for companies to design, administer and implement an effective vulnerability disclosure program.
Importantly, bug bounty crowdsourcing platforms like HackerOne and Bugcrowd have adopted and supported legal safe harbor terms. And while companies working through the HackerOne or Bugcrowd platforms aren’t necessarily required to adopt these terms as well, it provides a good incentive and a model for companies just starting a new program to follow.
While few legislative efforts currently exist, one big step was the Prevent Election Hacking Act of 2018, which enables the Department of Homeland Security (DHS) to establish a recurring “Hack the Election” competition in order to prevent hacking at the upcoming 2018 midterm elections.
This act is promising as it adopts safe harbor terms – further jumpstarting the adoption of this type of language, HackerOne’s Mickos said.
Casey Ellis, CTO and founder of Bugcrowd, expects more legislative precedents around safe harbor-friendly bug bounty programs in the coming year, similar to the Prevent Election Hacking Act of 2018.
“The rise of vulnerability disclosure [is] a thing that will get a lot of social and legislative heat behind it in the next 12 to 18 months,” said Ellis.
And in terms of the tech industry itself adopting safe harbor terms, Ellis is also optimistic – especially in what that means for how the hacker community is perceived.
“I think right now it’s the thing you do if you’re a little bit progressive,” he said. “Ultimately, it’s a good thing, it reinforces the point that hackers aren’t necessarily bad people. The folks we work with have the ability to think like criminals but have no intentions to cause harm, like a locksmith.”