I engaged in a long twitter conversation with Daniel Kennedy the other
day, and it made me realize that I have little faith in the information security industry right now. The industry does not seem to be evolving as fast
as the threats against information security are.
In the imperfect 140-character world that is twitter communication, I could only respond briefly, so I will expand on my thoughts here.
The information security industry is constantly changing. Evidence
of this change can be seen each year at the RSA Conference held in San Francisco.
Part educational conference and part trade show, the RSA Conference is the
annual gathering of the information security community. The conference seems to
get bigger every year – I know there have been some down years with fewer
sponsors, attendees, and exhibitions – but for the most part it continues to
grow. The trade show floor is an amazing place – vast and filled with noise and
activity. You can see the latest products making their appearances,
from hard drive shredders (to protect your data when you’re disposing of a hard
drive) to intrusion detection appliances (to inform you if someone is trying to
break into your network). Sales and marketing people try every possible tactic to
get you into a booth, whether your badge says “Press,” “Speaker,” or “Attendee.”
You can play games to win prizes, add your business card to a fishbowl in hopes
of winning an iPad, or chat with “booth babes” paid to flirt with attendees.
The reason for these antics is two-fold. First, as I
mentioned, the show floor is noisy, and you have to be creative to have your
products noticed. Second, and more importantly, the latest products just
aren’t much different from what they were showing last year. Even in the
“Innovation Sandbox” there wasn’t
much that was truly “new thinking.”
I’d like to discuss three examples of how I believe the
information security industry is failing to innovate. These examples focus on anti-malware,
firewalls, and web filters, all technologies that many people encounter on a
day-to-day basis. I realize that this may not represent the whole story, and
that it is likely that truly innovative things are happening in the industry.
However, by failing to innovate in these
specific technologies, the industry makes it appear that not much is changing.
Everyone can probably agree that it is important to have an
anti-malware solution to protect systems from viruses, trojans, spyware, and
other bad things. Even on
Apple systems. But it is disappointing that some anti-malware software really hasn’t advanced much beyond the first version of antivirus software I ever installed
(Norton AntiVirus, circa 1992). Anti-malware software generally works by using
“signature” files. These files take a sample of a known virus, figure out how
to identify it, and use that signature to detect if virus code exists in any
file that your system reads or writes.
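To make the idea concrete, here is a minimal sketch in Python of how signature-based scanning works. The signature database and function names are my own invention for illustration; real engines use far more sophisticated matching, but the core idea – look for known byte patterns in the files being read or written – is the same.

```python
# A toy sketch of signature-based detection (illustrative only; not a real AV engine).
# A "signature" is simply a byte pattern taken from a known malware sample.
KNOWN_SIGNATURES = {
    # The EICAR test file contains this harmless pattern, which real
    # antivirus products detect on purpose (shown shortened here).
    "EICAR-Test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all known signatures found in the data."""
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern in data]

def scan_file(path: str) -> list[str]:
    """Scan a file's contents against the signature database."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

The obvious weakness follows directly from the design: a virus whose pattern is not yet in the database is invisible, which is why the signature files must be updated so relentlessly.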
Signature files are why your anti-malware software needs to
be updated regularly. As new viruses are discovered, new signature files need
to be produced to detect them. The makers of this software started
charging for these updates in the late 1990s, and quickly became very large and
valuable companies. (Intel recently bought McAfee for $7.6 billion, and
Symantec’s annual revenue for 2010 was about $6 billion.)
While makers of anti-malware software are doing some
innovation (adding some capabilities to perform heuristic-based virus
detection), some of them leave these capabilities disabled by default because they are
ineffective, result in false positives, and slow down system performance. Sometimes
the vendors roll out a change or update, and really screw things up. It is in
the best interest of anti-malware vendors to keep you paying for your updates,
and certainly false positives and slowing your system down aren’t going to
encourage you to do that. So, instead of innovating, some AV makers stay conservative and
try to keep you paying for updates. Some vendors have begun rolling out new features like sandboxes and advanced whitelisting, but more innovation is needed.
The idea of a firewall isn’t new. In fact, firewall is a
word that actually meant something
else before computers were interconnected by networks. Firewalls are meant
to stop something from getting in or out. In building construction, a firewall
prevents the spread of fire in or out of an area. With a network firewall, we
are talking about blocking specific kinds of network traffic – either into your
network, or out of your network and on to another one.
In the early days, if you wanted to prevent your users from
using the IRC chat protocol, you would configure your firewall to not allow
traffic over the port most IRC servers use (6667).
This would stop the traffic for a while, but the chat operators would notice
they were being blocked at firewalls, and would reconfigure their IRC server to
use a different port, and suddenly the firewall wasn’t blocking them anymore. Firewalls
have innovated over time to prevent these simple bypasses. With the advent of
stateful and application layer firewall technologies, simply changing the port
of the communication was no longer effective. However, over time, people have found ways past those technologies as well.
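That early, port-based approach can be sketched in a few lines (the rule set and function names here are hypothetical, for illustration only). Because the rule keys solely on the destination port, moving the service to an unlisted port is all it takes to slip through:

```python
# A sketch of early port-based firewall filtering (rule set is hypothetical).
# The rule matches only on the destination port of the traffic.
BLOCKED_PORTS = {6667}  # the conventional IRC port

def allow_packet(dst_port: int) -> bool:
    """Return True if traffic to this destination port is permitted."""
    return dst_port not in BLOCKED_PORTS
```

Here `allow_packet(6667)` drops the traffic, but the very same IRC traffic on port 7000 passes untouched – exactly the bypass described above. Stateful and application-layer firewalls close this hole by inspecting the traffic itself rather than just the port number.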
Many information security “experts” rightly indicate that a “defense in
depth” strategy is most effective to
prevent attacks. However, many have incorrectly implemented this strategy with
a depth of firewalls. Firewalls are put in place at the desktop, within the
intranet, at the connection point to the Internet, and even in front of
specific applications. Since in recent years there hasn’t been much innovation
of firewall technology, introducing additional complexity to firewalls could
affect their reliability, which would, in turn, make you stop buying them.
Instead, you’re encouraged to implement “defense in depth” and buy even more firewalls.
Web filtering is like a hybrid of the anti-malware and
firewall technologies mentioned above. Like a firewall, the web filter is
designed to stop something from getting in or out; the web filter’s job is to
keep users from getting to certain websites, lest they be exposed to the
content they provide. Web filters take as input lists of sites that users should
(or should not) be allowed to visit. These lists, like anti-malware signature
files, must be updated constantly as the content available on the web changes.
Developing a list of all the pornographic, violent, hate,
and/or gambling content on the Internet is a daunting task to say the least;
even Google struggles constantly to index the majority of Internet content and
provide relevant search results. Since Google is one of the 30 or so largest
companies in the world (by market capitalization), I suspect they are probably
better suited to index the entirety of the Internet than most web filtering
vendors are. The whole notion of a web filtering product is that it will block users
from accessing content which is deemed harmful. Web filtering solutions must
determine if a particular site is already known to fit a particular category,
and if not it must be categorized – all in real time, when the user clicks a
link. Web filtering solutions can easily create false positives (for example,
thinking a site for researching pharmaceutical products is related to illicit
drug use) or miss sites that should be filtered (for example a “hate speech”
site). And while this is happening, smart
(or determined) individuals will find ways to
get past the technical hurdles.
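A toy sketch of the lookup a web filter performs makes the false-positive problem easy to see. The keyword lists and function names below are invented for illustration; real products maintain enormous, constantly updated site databases, but naive categorization fails in the same basic way:

```python
# A toy web-filter category lookup (keyword lists invented for illustration).
CATEGORY_KEYWORDS = {
    "gambling": ["casino", "poker"],
    "drugs": ["pharma", "pills"],  # naive: also matches pharmaceutical research
}

def categorize(url: str) -> list[str]:
    """Return every category whose keywords appear in the URL."""
    url = url.lower()
    return sorted(cat for cat, words in CATEGORY_KEYWORDS.items()
                  if any(word in url for word in words))

def is_blocked(url: str, blocked_categories: set[str]) -> bool:
    """Block the request if the URL falls into any blocked category."""
    return any(cat in blocked_categories for cat in categorize(url))
```

With this scheme, `is_blocked("http://research.pharma-journal.example/", {"drugs"})` returns True – a legitimate pharmaceutical research site blocked as illicit-drug content, just as described above.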
When it comes to information security, web filtering is
innovation, but it is innovation of the wrong kind. Instead of finding new and
better ways to educate users about Internet use, web filters instead are being
used as a technical tool to try and protect people from themselves.
Organizations pay for these solutions because it gives them peace of mind,
thinking they have done the right thing by protecting users from an unsafe or
undesirable working environment. Instead, their employees become frustrated
because they cannot do Internet-based research that their job requires.
Organizations feel it is just easier to pay for a web filtering solution and
assume the problem has been solved, than to invest in the employees and truly educate them about responsible Internet use.
While the technologies I’ve described above have shown limited
evolution, the threat landscape has not stood still. Threats have been transforming in a
number of fundamental ways over the last few years. The most significant threat
against a system has traditionally been the lonely teenage computer-geek with
no money but lots of time. Now, the scariest thing you’ll hear about is the
“advanced persistent threat” or APT. The term has been terribly overused,
mostly as a scare tactic to get customers to buy nearly any tool that the
security industry can create. However, when the APT refers to state-sponsored
attacks such as Operation
Aurora, the dynamic changes.
This type of
threat offers potentially unlimited budgets and no fear of prosecution or
retaliation for its creators. After Aurora came Stuxnet,
another likely state-sponsored attack which was even more sophisticated –
leading even to unrecoverable
physical damage to devices, aside from complete compromise of computing
infrastructure. Stuxnet showed that threats are becoming more targeted, identifying
specific infrastructures and systems to attack and destroy, as opposed to bored
teenagers just looking to deface a web site.
Tools are constantly being developed and updated to simplify the creation of
malware and to hide
malware from virus scanners. In 2009, Symantec saw the presence of over
90,000 variants of ZeuS, one of the bigger malware tool kits. Attacks are
focusing less on machines and networks, and more on applications, which often
provide a larger and less protected attack surface. Tools are being developed to make web application
attacks easier, too. And let’s not forget the growing
threat of information disclosure and data leakage (by insider or by attacker)
and the resultant posting of critical information to sites like Wikileaks.
And what about the information security industry itself? In
the last year disconnects have occurred between what information security
vendors purportedly provide in terms of safety, and their ability to protect
themselves from attack. Specifically, HBGary
Federal and RSA
Security both suffered fairly spectacular break-ins and compromises. Other large organizations such as Sony
suffered breaches affecting millions, with significant downtime.
I’ve described three technologies associated with the
information security industry which people are most likely to encounter in their
day-to-day computer use: anti-malware, firewalls, and web filtering. In each of
these technologies, innovation has occurred over time. And yet, the presence of
threats and the successful execution of all kinds of cybercrime indicate that
the rate of innovation is not fast enough. The attackers are innovating more
quickly than the information security industry is, resulting in more successful
attacks and breaches on a regular basis.
The information security industry is faced with a challenge:
truly innovate or keep the status quo. Financially, there is motivation for the
larger players in the information security industry to keep the status quo. It
is likely going to be up to smaller organizations and startups to provide true
innovation in security tools and technologies.
I do not have conclusive proof that this is true. It is possible there are larger
gatherings, but I know of no other gatherings that attract people from all the
different facets of the industry – salespeople, visionaries, hackers,
luminaries, authors, and customers.
Without going into a lot of detail about how most TCP and UDP networking works,
many client/server protocols have a port which they “listen” or “send” on.
HTTP, which is the protocol used for most web browsing traffic, uses port 80,
for example.
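The client/server port model in that footnote can be shown with a short, self-contained sketch (local only, not tied to any particular product): a server “listens” on a port, and a client connects to that port to exchange data.

```python
import socket
import threading

def start_echo_server() -> int:
    """Start a one-shot echo server that listens on an OS-assigned port,
    and return that port number (a local sketch for illustration)."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve() -> None:
        conn, _ = srv.accept()
        with conn, srv:
            conn.sendall(conn.recv(1024))  # echo one message back, then close

    threading.Thread(target=serve, daemon=True).start()
    return port

def send_message(port: int, msg: bytes) -> bytes:
    """Client side: connect to the server's port and exchange one message."""
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(msg)
        return client.recv(1024)
```

In real protocols the server's port is conventional rather than OS-assigned – 80 for HTTP, 6667 for IRC – which is precisely what made port-based firewall rules seem workable in the first place.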
Peter Hesse is the founder and president of Gemini Security Solutions.