The Vulnerability Disclosure Process: Still Broken

Despite the advent of bug-bounty programs and more enlightened vendors, researchers still complain of abuse, threats and lawsuits.

For all the progress made in the vulnerability disclosure process, things remain broken when it comes to vendor-researcher relationships.

Case in point: Last year, when Leigh-Anne Galloway (a cybersecurity resilience lead at Positive Technologies) found a gaping hole in the Myspace website, she reported it to Myspace owner Time Inc. Days, then weeks, then three months passed – crickets.

The Myspace bug wasn’t small. It allowed a hacker to log in to any one of Myspace’s 3.6 million active user accounts in a few easy steps. “It was a straightforward bug, and easy to execute and reproduce,” Galloway told Threatpost.

After giving up on Time Inc., Galloway weighed the risk the bug posed to the public against the risks of disclosing it herself, and decided to publish her research. “Within hours of my blog posting, the bug was fixed,” she said. Neither Time Inc. nor Myspace ever got back to her.

A year later, things haven’t improved much: Last month, Galloway found several bugs in mobile point-of-sale platforms. This time, after she privately disclosed the bugs, the vendors didn’t ignore her; instead, she was threatened with multiple lawsuits for reverse-engineering copyright-protected intellectual property.

“I can’t say personally I’m seeing a lot changing,” she said.

Vulnerability disclosure has long been the third rail in the relationship between researcher and vendor. While bug-bounty programs have been a step in the right direction, friction still exists for a meaningful percentage of vendors and researchers.

“The relationship between vulnerability researcher and vendor in the context of disclosure is broken,” said Casey Ellis, chairman, founder and CTO of bug-bounty platform Bugcrowd. “If you look at the entire ecosystem of companies and researchers – especially outside the scope of a bounty program – it still needs to be fixed.”

Experts say that murky non-disclosure agreements and unclear safe-harbor rules contribute to the problem, as do companies deathly afraid their bugs will become public. Another issue is opportunistic researchers eschewing responsible disclosure in favor of selling vulnerabilities to the highest bidder.

Defusing those tensions has been the rise of platforms such as HackerOne, Bugcrowd and over a dozen others, which have commoditized the communication, workflow and pricing of vulnerability research. Threat-intelligence and analysis firm EclecticIQ, for instance, estimates that bounty programs keep about 80 to 90 percent of vulnerability disclosure relationships with vendors on an even keel.

“Of the 80,000 bugs found and fixed, we have avoided legal action on all of them,” said Marten Mickos, CEO of HackerOne. “There is work to be done, but bounty programs have eliminated most of the friction between researchers and vendors,” he said.

Despite the progress being made with bug-bounty projects, there’s still plenty of room for things to go awry. And recent examples abound.

For instance, Microsoft was recently put in the hot seat when a Twitter user going by the handle @SandboxEscaper expressed exasperation with Microsoft’s bug-submission process and publicly disclosed a zero-day flaw.

Other examples include Google’s Project Zero, which was recently accused by Epic Games of playing politics when it disclosed a bug tied to the Android version of the company’s popular Fortnite game. And, after the makers of the cryptocurrency wallet Bitfi declared their product “unhackable” — and offered $250,000 to anyone who could compromise it — the wallet (of course) was cracked. No bounty was paid, and the company rescinded its bounty offer.

Meanwhile, a 2017 HackerOne study found that 94 percent of the Forbes Global 2000 do not have official vulnerability disclosure policies at all.

While experts disagree on the depth of the problem, Katie Moussouris, founder and CEO of Luta Security, said that when it comes to bounty programs, ambiguity often leads to problems between researchers and vendors.

“People are confusing bug-bounty programs with legitimate penetration testing contracts,” she said. “The difference is that pen-testers are hired to find vulnerabilities within a company and are paid – and are protected legally – whether they find a bug or not.”

Bounty hunters, on the other hand, compete to uncover new flaws and are paid only when they find a vulnerability. They’re not always shielded legally, and they must adhere to binding non-disclosure agreements tied to the bugs they find, under the terms of the bug-bounty programs they participate in.

“Only the first person to find a bug gets the bounty money,” Moussouris explained. “The vendor can also say they already found the bug and they are not going to pay you. Now, the researcher is still stuck with that non-disclosure agreement. And then what happens if the vendor decides they are never going to fix that bug?”

She added, “When that happens, that’s an abuse and a perversion of the bug bounties.”

Moussouris, a tireless advocate for improving vendor-researcher relationships, said bounty programs have accelerated change for the better but have failed to solve the big issues. Beyond bug non-disclosure rules, protecting hackers with safe-harbor provisions has had only middling success.

Safe-harbor provisions are meant to shield white-hat hackers from legal action by vendors who would otherwise pursue lawsuits on the grounds that their technology was compromised.

“The current U.S. main federal anti-hacking laws, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, along with notable public incidents, have had a chilling effect on the security researcher community,” wrote Amit Elazari, a University of California at Berkeley doctoral candidate, in a post on Bugcrowd’s website. “The ambiguity of existing laws and lack of frameworks surrounding protocols for ‘good-faith’ security testing has sometimes resulted in legal implications for ethical hackers working to improve global security.”

Bugcrowd has proposed a vendor-agnostic project to standardize best practices around safe harbor, called Disclose.io, with the goal of pushing forward an Open Source Vulnerability Disclosure Framework that the industry can rally around.

There are also numerous vulnerability and disclosure frameworks. They include those from US-CERT, ISO, IETF, NIST and the DOJ, as well as guidelines offered by the FTC. Almost all of them underscore the same theme: setting clear expectations and defining disclosure processes.

The DOJ’s efforts focus on “reducing the likelihood that [white-hat hacking] will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act.” ISO 29147 and ISO 30111, meanwhile, attempt to standardize disclosure and patching processes.
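One concrete artifact of the IETF’s side of that push is the draft security.txt convention, which lets a site publish its disclosure contact and policy at a well-known URL so researchers know where to report. What follows is a minimal, hypothetical Python sketch of fetching and parsing such a file; the domain is illustrative, and the field names (Contact, Policy) follow the draft convention:

    # Minimal sketch: fetch and parse a site's security.txt to find its
    # vulnerability-disclosure contact. The domain below is illustrative;
    # the field names (Contact, Policy) follow the draft security.txt convention.
    from urllib.request import urlopen

    def fetch_security_txt(domain):
        """Return security.txt fields as a dict mapping field name -> list of values."""
        url = f"https://{domain}/.well-known/security.txt"
        fields = {}
        with urlopen(url, timeout=10) as resp:
            for raw_line in resp.read().decode("utf-8").splitlines():
                line = raw_line.strip()
                # Skip blank lines and comments.
                if not line or line.startswith("#"):
                    continue
                name, _, value = line.partition(":")
                fields.setdefault(name.strip(), []).append(value.strip())
        return fields

    if __name__ == "__main__":
        info = fetch_security_txt("example.com")  # hypothetical domain
        print(info.get("Contact", ["no contact published"]))
        print(info.get("Policy", ["no policy published"]))

In practice, a researcher (or an automated scanner) could use something like this to discover where to send a report, rather than guessing at security@ aliases or cold-emailing support.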

However, experts concede that safe-harbor rules and vulnerability disclosure frameworks can only go so far in addressing the underlying issues if they are implemented piecemeal or incorrectly.

“I’m an advocate of frameworks and safe-harbor agreements,” Moussouris said. “But, not when they are used like Botox. You can’t support a framework and then just hope to smooth out the wrinkles that are in your underlying process.”

Outside the context of bounty programs, valuable, high-severity bugs represent another shifting landscape, according to Joep Gommers, founder and CEO of EclecticIQ.

“Bounty programs are great for finding low-hanging fruit and medium-severity bugs,” he said, adding that the niche private market for buying sophisticated vulnerabilities and exploit chains is booming.

“When a security researcher finds a bug that can infect 100 million Windows computers, that’s not going to make it onto a bug-bounty program,” Gommers said. He explained that those bugs are often uncovered by firms such as Crowdfense and Zerodium, which act as exploit brokers to the private market.

Crowdfense is currently offering between $500,000 and $3 million for zero-day bugs as well as partial exploit chains. Zerodium is advertising that it will pay up to $300,000 for a Windows zero-day bug, or $1.5 million for an iPhone zero-day.

Customers of these firms are confidential, but they are suspected to include governments and companies such as NSO Group, which develops spy tools like Pegasus. Other likely buyers are APT groups and state-sponsored hackers.

“It’s easier to sell to bad actors, because good actors simply don’t have the capital anymore,” Gommers said.

He added that, ironically, the rise of vendor-sponsored, authorized hacking platforms has made this darker aspect of vulnerability hunting more difficult to track.

“Bounty programs and the commercial market are driving the few people who do really bad stuff further into the dark,” he said.

While issues around vulnerability hunting and disclosure permeate the landscape, unique challenges persist for certain segments, including the internet of things (IoT).

“When it comes to IoT vulnerabilities, compared to reporting on Windows issues it’s much more grave, due to several reasons,” said Ankit Anubhav, a researcher at NewSky Security. “First, there is a complete lack of standards, as each IoT vendor has its own set of rules on how to deal with disclosure.”

IoT, to a larger degree than other areas of technology, wrestles with supply-chain-related issues: The weakest security link in an expansive ecosystem can be a tiny component, layers deep. Anubhav and others concede that penetration testing on those systems can involve multiple vendors, across multiple countries and legal jurisdictions.

“We need stronger laws [to protect researchers], but vendors need rules around auto-update features, timelines on fixing bugs and disclosure, and third-party oversight,” he said.

At this year’s Black Hat cybersecurity conference, there seemed to be a hint of hope: a potential change in attitude when it comes to addressing the onslaught of bug discoveries that keeps the hamster wheel of disclosure and patching spinning.

During her keynote, Parisa Tabriz, director of engineering at Google, said the traditional approach of shipping isolated security fixes should no longer be the industry’s priority. Instead, she said, the goal needs to be understanding the root causes of bugs and mitigating against those.

“The industry is still struggling,” Moussouris said. “I think the way forward is to look internally. Yes, bugs need to be patched, but we also need to focus on secure-by-design and secure development lifecycles, and other internal processes.”

She noted that over the past several years, Microsoft’s bug-bounty program has yet to see a reduction in the volume of bugs reported. “I think we’re due for another growth spurt within our industry that changes how we approach vulnerabilities,” Moussouris said.

There’s a lot on the line, said Matt Chiodi, vice president and CISO at RedLock. “As long as researchers don’t have clear disclosure guidelines to follow and vendors don’t have clear response strategies, hackers are going to have more dwell time to take advantage of unpatched vulnerabilities,” he said.

When RedLock found a critical misconfiguration vulnerability in a cloud service, it notified 37 affected companies of the problem. Only 10 percent of those contacted responded within a day; 30 percent responded within seven days, 40 percent took longer than a week, and 20 percent never responded at all. “With that batting average, we need to be doing a better job as an industry,” Chiodi said.
