InfoSec Insider

Sharing Threat Intelligence: Time for an Overhaul


All too often, information-sharing is limited to vertical market silos; to build better defenses, it’s time to take a broader view beyond the ISAC.

Most organizations don’t really have a good way of sharing threat-related data outside of their own industry verticals. Sure, there are Information Sharing and Analysis Centers (ISACs), such as the FS-ISAC for the financial-services industry. But the information still tends to stay in industry-specific silos.

In this article, I’ll discuss some new ideas for broadening how threat intelligence is shared, and for making it more useful.

It’s just another Tuesday morning in New York City and a security analyst at a major financial services firm sifts through intrusion alerts, in hopes of detecting the next wave of attacks from an unknown adversary that’s been pummeling the firm over the past three months. This is her sole focus. The attacks have gotten worse. The techniques have gotten more advanced. Her job is in the spotlight and she’s hoping that everything she’s learned about the adversaries’ tactics, techniques and procedures (TTPs) will help inform her defense when the next wave strikes.

Meanwhile, across the country in Los Angeles, an incident-response team at another major firm completes a report on how the latest cyberattack it suffered took place, what the bad guys got and the series of events that got them there. This firm is a fast-growing gaming company that has invested millions in hiring the best and brightest security professionals. It has one thing in common with the financial-services firm in New York: The exact same bad guys have attacked both organizations.

Understanding precisely what happened as each company’s defenses failed can be just as informative as if they had stopped the attack outright. But unfortunately, the gaming organization hasn’t shared meaningful information about the adversary with the financial-services firm in New York, so neither can benefit from comparing notes.

This is an all-too-common scenario. As defenders looking to meet the vision of making the internet more secure in a measurable way, it’s important that we find ways to get more people into ISAC sharing organizations — and to get those sharing organizations talking to each other.

We need to give organizations of all sizes the ability to share threat-intelligence data and apply strategic controls for addressing those threats in a manner that is measurable and repeatable.

This can be done with cloud-based security controls applied uniformly across these organizations, with the goal of cooperatively gathering and sharing threat-related indicators of compromise (IoCs) and attacker TTPs. With this new grouping of organizations, we would be able to apply controls consistently across verticals, environments and geographical regions.
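To make cross-vertical sharing repeatable, the shared IoCs need a common, machine-readable shape. Below is a minimal sketch of what a shared indicator could look like using the STIX 2.1 JSON format; the helper function and example values are my own illustration (real-world exchanges would typically use the python-stix2 library and a TAXII feed rather than hand-built dicts):

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(ioc_value, description):
    """Build a minimal STIX 2.1-style indicator for a malicious IPv4 address.

    Illustrative sketch only: it covers the required indicator properties
    but is not a substitute for a full STIX library.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": description,
        "pattern": f"[ipv4-addr:value = '{ioc_value}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

# A financial-services firm could publish this; a gaming company could consume it.
indicator = make_indicator("203.0.113.7", "C2 address seen in wave-3 attacks")
print(json.dumps(indicator, indent=2))
```

Because the record is plain JSON with a standardized pattern grammar, any participating organization can ingest it regardless of vertical or tooling.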

Within the current vertical-focused model, we would need to create new classes or levels of risk for delivering threat intelligence. These wouldn’t be based on the type of exploit or the CVE severity of a given vulnerability; instead, they would be based on the ubiquity of the current attack campaign, the velocity of its growth, or how novel the nature of the exploit technique is, as observed across a large swath of internet-based organizations.

This would allow for more granular risk identification, based on how likely a threat is to affect a specific organization. It would also allow for a more accurate measurement of what is actually being exploited, and what threats are active and growing; this aids in priority-setting for emergency mitigation.
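The scoring idea above can be sketched in a few lines. The weights, thresholds and class names here are assumptions for illustration, not an established standard; the point is that the inputs are campaign-level observations (ubiquity, growth velocity, novelty) rather than per-CVE severity:

```python
def campaign_risk(ubiquity, velocity, novelty):
    """Combine campaign-level signals into a coarse risk class.

    ubiquity: fraction of observed organizations hit (0.0-1.0)
    velocity: week-over-week growth in attack volume (0.5 = +50%)
    novelty:  1.0 for a never-before-seen technique, 0.0 for well-known
    Weights and cutoffs are illustrative assumptions.
    """
    score = 0.5 * ubiquity + 0.3 * min(velocity, 1.0) + 0.2 * novelty
    if score >= 0.6:
        return "critical"  # widespread, fast-growing, or novel: mitigate now
    if score >= 0.3:
        return "elevated"  # worth prioritizing ahead of routine patching
    return "watch"

# A novel technique spreading quickly across 40% of observed organizations:
print(campaign_risk(ubiquity=0.4, velocity=0.9, novelty=1.0))  # critical
```

A moderately severe CVE being exploited everywhere would rank above a critical CVE nobody is actually using, which is the priority inversion the current model misses.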

When it comes to combating DDoS and other web-based attacks, global visibility is required for more accurate identification of threats, and that calls for a different sort of cooperation: among the organizations that operate internet networks.


To achieve more granular and actionable inspection of malicious traffic, it’s possible to use a reverse proxy architecture deployed geographically close to end-user connections. As traffic passes through the internet, different devices and network paths have different viewpoints on what’s taking place on the web. If we could gather threat-related IoCs by looking at the application layer of a web session (instead of just watching IP traffic flow from one network hop to the next), we might find that behind a “top-talking” IP address there are actually thousands of individual web sessions taking place.

With a reverse proxy architecture, we can identify client web sessions by surveying the session-related context of internet traffic and apply controls more precisely. This would be especially helpful for web-application or API-level threats. And if we could implement inspection points for malicious traffic at locations very close to where end-user (and malicious) traffic originates, we’d have the ability to watch for abusive volumetric traffic before it accumulates into a DDoS flood.
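The difference between the network-layer and application-layer views can be sketched concretely. The log records and field names below are invented for illustration; the idea is that a reverse proxy sees a session identifier (here, a session cookie) that a pure packet-level view does not:

```python
from collections import defaultdict

# Illustrative records a reverse proxy might log at the application layer:
# (client_ip, session_cookie, request_path). Field names are assumed.
requests = [
    ("198.51.100.9", "sess-a", "/login"),
    ("198.51.100.9", "sess-b", "/login"),
    ("198.51.100.9", "sess-c", "/api/v1/accounts"),
    ("198.51.100.9", "sess-a", "/transfer"),
    ("203.0.113.50", "sess-d", "/"),
]

def sessions_per_ip(records):
    """Count distinct application-layer sessions behind each client IP.

    A network-layer view sees one "top talker" per IP; the session cookie
    reveals how many independent clients actually sit behind it (e.g. a
    carrier-grade NAT, a shared proxy, or a botnet exit point).
    """
    seen = defaultdict(set)
    for ip, session, _path in records:
        seen[ip].add(session)
    return {ip: len(s) for ip, s in seen.items()}

print(sessions_per_ip(requests))  # {'198.51.100.9': 3, '203.0.113.50': 1}
```

Blocking the top-talking IP outright would punish every client behind it; session-level visibility lets the proxy throttle only the abusive sessions.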

Also, using high-level Border Gateway Protocol (BGP)-routed inspection points, we would be able to implement signature-based rule sets to carry out the basic block-and-tackle of scripted, rent-a-bot attacks.
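The signature-matching step at such an inspection point is conceptually simple. The rules below are toy examples I've made up for the sketch (real rule sets, such as those used by web application firewalls, are far larger and more nuanced); the structure is what matters: named patterns applied to request fields at the inspection point:

```python
import re

# Toy signature rules for scripted bot traffic; patterns are illustrative
# assumptions, not a real vendor rule set.
SIGNATURES = [
    ("scanner-ua", re.compile(r"sqlmap|nikto|masscan", re.I)),
    ("path-traversal", re.compile(r"\.\./")),
]

def match_signatures(user_agent, path):
    """Return the names of any signatures a request trips."""
    hits = []
    for name, pattern in SIGNATURES:
        if pattern.search(user_agent) or pattern.search(path):
            hits.append(name)
    return hits

print(match_signatures("sqlmap/1.7", "/index.php"))          # ['scanner-ua']
print(match_signatures("Mozilla/5.0", "/../../etc/passwd"))  # ['path-traversal']
```

Deployed at inspection points close to where traffic originates, even simple rules like these can shed the bulk of low-effort scripted attacks before they reach the target network.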

Although this might seem like a pipe dream, there are several vendor organizations that are in prime position to pull this off. These include large global ISPs, global cloud services providers and the large content delivery networking (CDN) platforms.

Using their scale and influential relationship with customers, these internet specialists could band together to develop a new structure around sharing this valuable threat intelligence data – and more importantly, actually be able to provide a consumable service around each of these areas.

By developing a uniform way to share and respond to threats at a global scale, maybe we can knock down the barriers between vertical markets (i.e., those hypothetical companies in New York and Los Angeles) and change the way we think about sharing threat intelligence.

(Tony Lauro manages the Enterprise Security Architecture team at Akamai Technologies. With over 20 years of information security industry experience, Tony has worked and consulted in many verticals including finance, automotive, medical/healthcare, enterprise, and mobile applications. He is currently responsible for Akamai’s North America clients as well as the training of an Akamai internal group whose focus is on Web Application Security and adversarial resiliency disciplines. Tony’s previous responsibilities include consulting with public sector/government clients at Akamai, managing security operations for a mobile payments company, and overseeing security and compliance responsibilities for a global financial software services organization.)

