InfoSec Insider

Squashing Ants: The Dynamics of XSS Remediation

By Chris Eng

Is anyone else getting tired of hearing excuses from customers — and
worse yet, the security community itself — about how hard it is to fix
cross-site scripting (XSS) vulnerabilities? Oh, come on. Fixing XSS is
like squashing ants, but some would have you believe it’s more like
slaying dragons. I haven’t felt inspired to write a blog post in a
while, but every once in a while, 140 characters just isn’t enough. Grab
your cup of coffee, because I may get a little rambly.

Easy to Fix vs. Easy to Eradicate

Let’s start with some terminology to make sure we’re all on the same
page. Sometimes people will say XSS is “not easy to fix” but what they
really mean is that it’s “not easy to eradicate.” Big difference,
right? Not many vulnerability classes are easy to eradicate. Take
buffer overflows as an example. Buffer overflows were first documented
in the early 1970s and began to be exploited heavily in the 1990s. We
understand exactly how and why they occur, yet they are far from
extinct. Working to eradicate an entire vulnerability class is a noble
endeavor, but it’s not remotely pragmatic for businesses to wait around
for it to happen. We can bite off chunks through OS, API, and framework
protections, but XSS or any other vulnerability class isn’t going to
disappear completely any time soon. So in the meantime, let’s focus on
the “easy to fix” angle because that’s the problem developers and
businesses are struggling with today.

It’s my belief that most XSS vulnerabilities can be fixed easily.
Granted, it’s not as trivial as wrapping a single encoding mechanism
around any user-supplied input used to construct web content, but once
you learn how to apply contextual encoding,
it’s really not that bad, provided you grok the functionality of your
own web application. An alarming chunk of reflected XSS vulnerabilities
are trivial: they read the value of a GET/POST parameter and write it
directly into an HTML page. Plenty of others are only marginally more
complicated, such as retrieving a user-influenced value from the
database and writing it into an HTML attribute. I contend both of these
examples are easy for a developer to fix; tell me if you disagree.
Basic XSS vulnerabilities like these are still very prevalent.
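
To make the “easy” claim concrete, here’s a minimal sketch of both
trivial cases and their one-line fixes. The servlet, parameter, and
helper names are hypothetical, and I’ve reached for the OWASP Java
Encoder (org.owasp.encoder) for contextual encoding; any equivalent
encoding library works the same way.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.owasp.encoder.Encode;

    public class GreetingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String name = req.getParameter("name"); // attacker-controlled
            String title = lookupTitle(name);       // user-influenced value from the database

            // Vulnerable: raw values written straight into the page.
            // resp.getWriter().println("<p>Hello, " + name + "</p>");
            // resp.getWriter().println("<input value=\"" + title + "\">");

            // Fixed: encode each value for the context it lands in.
            resp.getWriter().println(
                "<p>Hello, " + Encode.forHtml(name) + "</p>");
            resp.getWriter().println(
                "<input value=\"" + Encode.forHtmlAttribute(title) + "\">");
        }

        // Hypothetical stand-in for the database read.
        private String lookupTitle(String name) {
            return "Member";
        }
    }

Notice the shape of the fix: one encoding call per output, chosen by
context. It’s not a rewrite of the feature.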

Of course, there are edge cases. Take this freakish example,
which combines browser-specific parsing behavior with the ill-advised
use of tainted input in JavaScript code. Exceptions will always exist,
but that doesn’t change the fact that most XSS flaws
are straightforward to fix. We can take a huge bite out of the problem
by eliminating these basic reflected cases, just like we started
attacking buffer overflows by discouraging the use of unbounded string
manipulation functions. Some will claim “developers shouldn’t be
responsible for writing secure code,” which is noble and idealistic but
also completely impractical in this day and age. Maybe it’ll happen
eventually, but in the meantime there are fires to put out. So let’s
step down from those ivory towers and impose some accountability.
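
Even the harder script-context case usually comes down to picking the
right encoder. A hedged sketch, again with hypothetical names and the
OWASP Java Encoder: plain HTML encoding is the wrong tool inside a
script block, but a JavaScript-string encoder handles it.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.owasp.encoder.Encode;

    public class ProfileServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String userId = req.getParameter("id"); // tainted input

            // Browsers don't decode HTML entities inside a <script> block,
            // so Encode.forHtml() would mangle the value rather than safely
            // embed it. Escape for the JavaScript-string context instead.
            resp.getWriter().println(
                "<script>var id = '" + Encode.forJavaScript(userId) + "';</script>");
        }
    }

Browser parsing quirks change which encoder you reach for, not the size
of the change.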

Ease of Fix vs. Willingness to Fix

I’ve heard the assertion that XSS vulnerabilities aren’t getting
fixed because they are difficult to fix. Asking “what percentage of XSS
vulnerabilities actually get fixed and deployed to production?” is a
valuable metric for the business, but it doesn’t reflect the actual
difficulty of fixing an XSS vulnerability. It conflates the technical
complexity with other reasons why website vulnerabilities are not fixed.

At Veracode, we collected data in our State of Software Security Vol. 2 report that reveals developers are
capable of fixing security issues quickly. While our data isn’t
granular enough to state exactly how long it took to fix a particular
flaw, we do know that in cases where developers did choose to remediate
flaws and rescan, they reached an “acceptable” level of security in an
average of 16 days. This isn’t to say that every XSS was eliminated,
but it suggests that most were (more details on our scoring methodology
can be found in the appendix of the report).

WhiteHat’s Fall 2010 study
shows that nearly half of XSS vulnerabilities are fixed, and that doing
so takes their customers an average of 67 days. These numbers differ
from ours — particularly with regard to the number of days — but I think
that can be attributed to prioritization. Perhaps fixing the XSS
vulnerability didn’t rise to the top of the queue until day 66. Again,
that says more about how seriously the business takes XSS than about
the technical sophistication required to fix it.

At Veracode, we see thousands — sometimes tens of thousands — of XSS
vulnerabilities a week. Many are of the previously described trivial
variety that can be fixed with a single line of code. Some of our
customers upload a new build the following day; others never do.
Motivation is clearly a factor. Think about the XSS vulnerabilities
that hit highly visible websites such as Facebook, Twitter, MySpace, and
others. Sometimes those companies push XSS fixes to production in a
matter of hours! Are their developers really that much better? Of
course not. The difference is how seriously the business takes it.
When they believe it’s important, you can bet it gets fixed.

Manufactured Contempt

There’s a growing faction that believes security practitioners are not qualified to comment
on the difficulty of security fixes (XSS or otherwise) because we’re
not the ones writing the code. The ironic thing is that this position
is most loudly voiced by people in the infosec community! It’s like
they are trying to be the “white knights,” coddling the poor, fragile
developers so their feelings aren’t hurt. Who are we to speak for them?
I find the entire mindset misguided at best, disingenuous and
contemptuous at worst. To be fair, Dinis Cruz isn’t the only one who has
expressed this view; he’s just the straw that broke the camel’s back, so
to speak. You know who you are.

Look, the vast majority of security professionals aren’t developers
and never have been (notable exceptions include Christien Rioux, HD
Moore, Halvar Flake, etc.). Trust me, we know it. I’ve written lots of
code that I’d be horrified for any real developer to see. My stuff may
be secure, but I’d hate to be the guy who has to maintain, extend, or
even understand it. Here’s the thing — even though I can guarantee you
I’d be terrible as a developer, most XSS flaws are so simple that even a
security practitioner like me could fix them! Here’s another way of
looking at it: developers solve problems on a daily basis that are much more complex than patching an XSS vulnerability. Implying that fixing XSS is “too hard” for them is insulting!

That being said, who says we’re not qualified to comment on a
code-level vulnerability if we’re not the one writing the fix? In fact,
who’s to say that the security professional isn’t more
qualified to assess the difficulty in some situations? Specifically,
if a developer doesn’t understand the root cause, how can he possibly
estimate the effort to fix? I’ve been on readouts where developers
claim initially that several hundred XSS flaws will take a day each to
fix, but then once they understand how simple it is they realize they
can knock them all out in a week. Communication and education go a long
way. Sure, sometimes there are complicating factors involved that
affect remediation time, but I can’t recall a time when a developer has
told me my estimate was downright unreasonable.

Bottom line: By and large, I don’t think developers feel miffed or
resentful when we try to estimate the effort to fix a vulnerability.
They know that what we say isn’t the final word, it’s simply one input
into a more complex equation. Yes, developers do get annoyed when it
seems like the security group is creating extra work for them, but
that’s a different discussion altogether.

Ceteris Paribus

One final pet peeve of mine is the rationalization that security
vulnerabilities take longer to fix because you have to identify the root
cause, account for side effects, test the fix, and roll it into either
a release or a patch. As opposed to other software bugs, where
fixes are accomplished by handwaving and magic incantations? Of course
not; these steps are common to just about any software bug. In
fact, I’d argue that identifying the root cause of a security
vulnerability is much easier than hunting down an unpredictable crash, a
race condition, or any other non-trivial bug. Come to think of it,
testing the fix may be easier too, at least compared to a bug that’s
intermittent or hard to reproduce. As for side effects and other QA
testing, this is why we have regression suites! If you build software
and you don’t have the capability to run an automated regression suite
after fixing a bug, then let’s face it, you’ve got bigger problems than
wringing out a few XSS vulnerabilities.
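
For what it’s worth, the regression check for an XSS fix can be as
small as any other unit test. A minimal sketch in JUnit, reusing the
hypothetical encoding fix from earlier; the payload and assertion are
illustrative, not exhaustive.

    import static org.junit.Assert.assertFalse;

    import org.junit.Test;
    import org.owasp.encoder.Encode;

    public class GreetingEncodingTest {
        @Test
        public void reflectedParameterDoesNotSurviveEncoding() {
            String payload = "<script>alert(1)</script>";
            String html = "<p>Hello, " + Encode.forHtml(payload) + "</p>";
            // The fix holds if the payload's markup is neutralized.
            assertFalse(html.contains("<script>"));
        }
    }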

My high school economics teacher used the term “ceteris paribus” at
least once per lecture. Loosely translated from Latin, it means “all
other things being equal” and it’s often used in economics and
philosophy to enable one to describe outcomes without having to account
for other complicating factors. The ceteris paribus concept doesn’t
apply perfectly to this situation, but it’s close enough for a blog
post, to wit: ceteris paribus, fixing a security-related bug is no more
difficult than fixing any other critical software bug. Rattling off all the steps involved in deploying a fix is just an attempt at misdirection.

Closing Thoughts

My hope in writing this post is to spur some debate around the
reasons, excuses, and rationalizations that often accompany the
surprisingly divisive topic of XSS. I want to hear from both security
practitioners and developers on where you think I’ve hit or missed the
mark. We don’t censor comments here, but there is a moderation queue, so
bear with us if your comment takes a few hours to show up.

Chris Eng is the senior director of security research at Veracode. This essay originally appeared on Veracode’s ZeroDay Labs blog.
