Be Ready: Next Internet Bug Won’t Be The Last

Panelists at the Advanced Cyber Security Center annual conference discuss how readiness for the next Internet-scale bug is no longer a luxury.

BOSTON – Heartbleed, and the rash of Internet-wide bugs that ultimately defined security in 2014, tested the resilience of enterprises worldwide. In turn, resilience has been elevated to a major talking point for companies evaluating their preparedness for the inevitable next Heartbleed-type event in 2015 and beyond.

Today at the Advanced Cyber Security Center annual conference here, experts well versed in making and enforcing enterprise security policy tackled the topic of preparedness for the unknown, or operating "left of boom," a term borrowed from the military for acting before the next blast. Heartbleed, as it turned out, wasn't the only stop-the-presses moment for security managers this year. Shellshock and POODLE not only exposed weaknesses in ubiquitously deployed open source software and protocols, but also made it painfully clear in many cases that security organizations no longer have the luxury of simply chasing and cleaning up after the last event. The next one is right around the corner.

Experts such as Andy Ellis, chief security officer at Akamai, were thrown headfirst by the Heartbleed vulnerability in OpenSSL into not only rapid response, but also patching, mitigation, cleanup, and communication with the public and customers. It was a lesson in incident response, but also a real-world exercise that tested the resilience of a security organization and enterprise, and the vitality of its existing processes.

"I had to deal with six Internet vulnerabilities that are break-the-Internet scale things. We weren't prepared for six of them. I didn't walk into the year saying 'I'm ready to deal with six,'" said Ellis, who took part in a panel with State Street CIO Chris Perretta, HackerOne chief policy officer Katie Moussouris and CyberArk CEO Udi Mokady. "The first one happened and it was a little bit painful. We said that maybe we could tweak and change things a little bit. That's being left of boom: learn from your past, don't relive the past."

Shellshock came on Heartbleed's heels, and while it was a bug in the Bash command shell rather than in a crypto library, it shared one important trait: it too was widely exploited within hours. But given what Akamai had learned and tweaked during Heartbleed, there was at least the start of a blueprint for dealing with another global vulnerability.
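
For readers who want a sense of why Shellshock was so easy to exploit, the sketch below is a minimal, hypothetical local probe (written in Python purely for illustration; the panelists described no such tooling) that checks whether the Bash binary on a Unix-like host still imports function definitions, and any commands smuggled in after them, from environment variables.

    import os
    import subprocess

    # Hypothetical check, assuming a Unix-like host with bash on the PATH.
    # The variable's value mimics a Bash function definition followed by an
    # extra command; an unpatched Bash executes that command while importing
    # the variable, while a patched Bash ignores it.
    probe_env = {**os.environ, "x": "() { :;}; echo VULNERABLE"}
    result = subprocess.run(
        ["bash", "-c", "echo probe finished"],
        env=probe_env,
        capture_output=True,
        text=True,
    )
    if "VULNERABLE" in result.stdout:
        print("This bash still runs code passed through environment variables.")
    else:
        print("This bash appears patched against the original Shellshock vector.")

In the vulnerable case, the extra command runs before the innocuous one ever does, which is why any service that passed attacker-controlled data into Bash environment variables, CGI scripts being the classic example, was exposed the moment the bug became public.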

“What can you learn from the past that is sort of normalized to deal with any kind of incident so you’re ready for the future–and then test that,” Ellis explained. “You make an assumption, and then Shellshock comes along and well, ‘I was totally wrong on what the future might bring and let’s adjust again.’ That process of iterating and adjusting is important.”

Perretta, sitting in the CIO seat, is the direct link between technologists and executive management at his organization; boiling down incidents, internal weak points and potential consequences for that audience is his job.

“We have to determine a standard of care, present it to management, and it then becomes a commercial decision,” he said. “Technicians can lay out the risk and the opportunities, but at the end of the day it’s pretty straightforward that this is the standard of care, and not to do it is negligence.”

Moussouris, meanwhile, brought an industry perspective to the discussion, having written Microsoft's coordinated vulnerability disclosure policy and run that program during her seven years with the company. Her view of preparedness and resilience involves embracing security researchers and building channels through which they can disclose vulnerability information to affected vendors without coming to harm.

“Where I came from, there was this giant untapped herd of resources in the world: they are hackers. These folks are the canaries in the tunnel, and if organizations are open to hearing them versus prosecuting them at first sight, they can be sentinels,” Moussouris said. “Most think they are in it for profit and are not altruistic, but working at Microsoft, I was able to see human behavior patterns in them that most wanted to do the right thing. Being left of boom means being prepared to receive notification from a friendly hacker who wants to help before something bad happens.”

Moussouris pointed out that the Computer Fraud and Abuse Act in the U.S. gives many legitimate researchers pause.

“If you’ve got an online service and if a hacker finds something, technically, they’ve broken that law,” she said. “There needs to be a better way for organizations to define what they want from the outside. You can’t incentivize hackers to stay quiet, but then encourage them to join a team of defenders.”
