The Pitfalls of Website Vulnerability Research and Disclosure

By Chris Wysopal

Vulnerability disclosure is in the spotlight again. First it was Tavis Ormandy disclosing a vulnerability in Microsoft Windows before Microsoft had a fix available. Now a group called Goatse Security has disclosed a vulnerability in an AT&T website that affects Apple iPad 3G owners. The Wall Street Journal reports on the repercussions against vulnerability researchers in “Computer Experts Face Backlash”.

The AT&T website vulnerability is part of a growing trend in vulnerability disclosures. As software and services move from traditionally installed software to SaaS and into the cloud, more vulnerabilities will exist only in code running on a single organization’s web servers. This makes the rationale for why website vulnerability disclosures are beneficial somewhat different from that for disclosures of software installed on many customers’ devices.

The first issue with vulnerabilities in code running on a website is that, to do the research in the first place, the researcher needs to interact with computers they don’t own. Traditional vulnerability research occurs on the researcher’s own equipment or on equipment they have permission to use. Website research risks crossing the line into unauthorized access, or exceeding authorized access, as defined by the CFAA (Computer Fraud and Abuse Act).

What constitutes exceeding access on a public website is a bit of a gray area. On one hand, sending a large buffer to a web application that causes it to crash and execute the code of your choosing seems like exceeding authorized access. No one would ever think the application was designed to do that, and clearly executing your own program is very different from interacting with a web page. But what about a website that was designed to display the email address associated with an ID when the user enters that ID? Is it exceeding authorized access to put in a random ID and get the associated email address back? The website is working as its designers intended.

The latter case is exactly the vulnerability (now fixed) in the AT&T website that affected iPad 3G users. Anyone who registered on the AT&T website entered their iPad’s ICC-ID and an email address. After they had registered, they could return and enter just the ICC-ID, and the web page would display their email address. Researchers from Goatse Security noticed this, tried entering random ICC-ID numbers into the website, and discovered that for valid ICC-IDs they would get the owner’s email address in response.
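The flaw falls into a now-familiar class: an unauthenticated lookup keyed on a guessable identifier. The sketch below is a minimal illustration of that pattern, not AT&T’s actual code; the route, parameter name, and data are hypothetical stand-ins.

```python
# Minimal sketch of the vulnerable pattern (hypothetical, not AT&T's code):
# an unauthenticated endpoint returns the email address tied to whatever
# ICC-ID is supplied, so anyone who can guess valid ICC-IDs can harvest emails.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in for the registration database: ICC-ID -> email address.
REGISTRATIONS = {
    "89014103211118510720": "owner@example.com",
}

@app.route("/account/email")
def lookup_email():
    icc_id = request.args.get("ICCID", "")
    email = REGISTRATIONS.get(icc_id)
    if email is None:
        abort(404)
    # The flaw: nothing here proves the requester actually owns this ICC-ID.
    return jsonify({"ICCID": icc_id, "email": email})

if __name__ == "__main__":
    app.run()
```

Because the only “secret” is a sequential, guessable identifier, the page works exactly as designed and still leaks data, which is what makes the authorized-access question so murky.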

At this point Goatse Security had enough to demonstrate the vulnerability and report it to AT&T. But as is often the case when a tiny organization with little track record is reporting an issue to a huge multinational company, they gathered enough information to make the story newsworthy and got a third-party organization to contact the company. In fact, they harvested 114,067 email addresses. So a wrinkle to this “gray area” of exceeding authorized access may be how much information is gathered. If AT&T prosecutes, as it has stated it will, we will get to find out whether this behavior exceeded authorized access in the eyes of the court.

There is clearly a benefit to Goatse Security’s work. AT&T had the opportunity to fix its website before any information about the vulnerability was made public. A vulnerability that disclosed information criminals could have used to target iPad owners, both over email and over the GSM network, has been remediated. Furthermore, the iPad owners have been notified and can take corrective action, such as being more vigilant about iPad-targeted attacks over email or changing their ICC-ID with a new SIM card. It is hard to see any downside to their actions. They never disclosed to any third party the information they gathered to prove the vulnerability, and they say they have destroyed it.

We need a way for researchers who discover vulnerabilities in web applications to report them without being prosecuted. As long as the owners of the website have the opportunity to address the vulnerability before disclosure, this will benefit users in the long run.

The challenge is in determining what is an attack and what is research. When does research become exceeding authorized access under the CFAA? These questions don’t exist for research into vulnerabilities in traditional software installed on a machine the researcher owns. As sensitive information moves from local machines and servers to databases and files on the internet, that information is mediated by potentially vulnerable web applications. If good-faith, responsible research can’t continue to follow software as it moves from desktops and servers to the cloud, then data security overall will suffer.


But we shouldn’t kid ourselves and think that research alone can make an application more secure. It can point out bugs here and there, but can never make an application secure. To do that, web app developers need to test their software for security vulnerabilities before they deploy the software to the internet. A vulnerability report from a researcher is a wake-up call that security testing was inadequate. Organizations need to demonstrate to their customers that they have conducted adequate testing before they deploy their applications and certainly before they attract the attention of researchers. That is the real solution for security on the web. Unfortunately we are still in a phase where researchers need to keep demonstrating the need for more security testing.
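As one concrete illustration of the kind of testing meant here, below is a sketch of an automated check that could run before deployment and fail the build if an identifier-lookup endpoint hands back account data without any proof of ownership. The staging URL, endpoint, and parameter name are assumptions for the example, not a real interface.

```python
# Sketch of a pre-deployment security test (hypothetical endpoint and URL):
# fail the build if an identifier lookup returns account data to an
# unauthenticated caller who merely supplies a guessable ICC-ID.
import unittest

import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment


class AccountLookupAuthTest(unittest.TestCase):
    def test_lookup_requires_proof_of_ownership(self):
        # A valid-format ICC-ID the test account does not own; no credentials sent.
        resp = requests.get(
            f"{BASE_URL}/account/email",
            params={"ICCID": "89014103211118510720"},
            timeout=10,
        )
        # An unauthenticated request must be refused, not answered with data.
        self.assertIn(resp.status_code, (401, 403))


if __name__ == "__main__":
    unittest.main()
```

A check like this is cheap to run on every release, which is exactly the point: it catches the class of flaw before a researcher, or an attacker, does.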

Chris Wysopal is the co-founder and CTO of Veracode.
