Large DDoS Attacks Still a Serious Problem

In the world of botnets and denial-of-service attacks, 2009 was a very interesting year. While a handful of large, noisy botnets got most of the attention, there were thousands of serious, prolonged DDoS attacks that not only chewed up huge amounts of bandwidth but likely caused major problems for the targeted organizations.

The analysts at Arbor Networks recently looked back at the DDoS attack data collected in 2009 by about 100 of their ISP customers and found more than 20,000 attacks that peaked above 1 Gbps of traffic, and nearly 3,000 attacks that hit 10 Gbps. That’s a lot of traffic, especially when you consider that “many (most?) enterprises remain connected to the Internet at 1 Gbps or slower speeds,” as Arbor’s Danny McPherson points out. In other words, a 10 Gbps flood offers a typical enterprise link roughly ten times more traffic than it can carry, so the pipe saturates upstream before any on-premises defense can act.
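To make that mismatch concrete, here is a minimal back-of-the-envelope sketch in Python (the link, attack and legitimate-traffic rates are illustrative assumptions drawn from the figures above, not numbers from Arbor’s report):

    # Back-of-the-envelope: a volumetric DDoS vs. a typical enterprise uplink.
    # All rates are illustrative assumptions, not data from Arbor's report.
    LINK_GBPS = 1.0      # enterprise Internet uplink (per McPherson's point)
    ATTACK_GBPS = 10.0   # one of the ~3,000 attacks Arbor saw peak at 10 Gbps
    LEGIT_GBPS = 0.3     # assumed normal traffic load on that link

    total_offered = ATTACK_GBPS + LEGIT_GBPS
    oversubscription = total_offered / LINK_GBPS

    # With simple proportional (FIFO) loss, legitimate traffic keeps only its
    # share of the saturated link; everything else is dropped upstream.
    legit_delivered = LEGIT_GBPS * (LINK_GBPS / total_offered)

    print(f"Offered load: {total_offered:.1f} Gbps on a {LINK_GBPS:.0f} Gbps link "
          f"({oversubscription:.1f}x oversubscribed)")
    print(f"Legitimate traffic delivered: {legit_delivered * 1000:.0f} Mbps "
          f"of {LEGIT_GBPS * 1000:.0f} Mbps ({legit_delivered / LEGIT_GBPS:.0%})")

Under those assumptions, only about 10 percent of legitimate traffic gets through. The exact loss model varies, but the point stands: once the link is full upstream, nothing the target does on its own premises can restore availability.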

McPherson writes:

“Today, most enterprises and online properties don’t traditionally factor DDoS attacks in risk planning and management related processes. That is, while they go to great lengths to periodically obtain coveted [err.. necessary] compliance check marks related to data integrity and confidentiality, the third pillar, availability, often takes a backseat. This is perhaps largely driven by auditors with fairly static and quantifiable lists of controls that can be put in place to contain risks associated with traditional vulnerabilities. Unfortunately, lack of foresight and appropriate preparation often leaves folks scurrying about madly when DDoS-related incidents do occur, as they’re not considered until you’ve been hit at least once.

“To that point, I suspect it would be safe to assume that the probability of an effectively-sized attack targeting a given Internet property today is higher than the probability of a fire that affects that enterprise’s Internet availability and online presence (something I’ll look to qualify) – whilst from a business continuity perspective the latter is quite likely what the enterprise values most in today’s ‘connected’ world.”

McPherson’s point may be a little dramatic, but it’s well-taken. Most reasonably sized organizations have a comprehensive plan for dealing with network outages caused by natural disasters, power failures or an intern tripping over a cord. But many of those same organizations have no detailed plan for what to do if they’re targeted by a major DDoS attack. That scenario tends to fall under the heading of “Why would anyone target us?” or “Our ISP will handle it.”

Maybe so. But, as Arbor’s data shows, large DDoS attacks are no longer the rarity they once were, and it’s probably better to know who’s going to do what, and when, before an attack happens than to figure it out afterward.
