UPDATE: Slammed And Blasted A Decade Ago, Microsoft Got Serious About Security

UPDATE: A decade ago this week, Chairman Bill Gates kicked off the Trustworthy Computing Initiative at Microsoft with a company-wide memo. The echoes of that memo still resonate throughout the software industry today, as other firms, from Apple and Adobe to Oracle and Google, have followed the path that Microsoft blazed over the past ten years.

Rob Lemos

But the Trustworthy Computing Initiative, which made terms like secure development lifecycle (SDL), automated patching, and “responsible disclosure” part of the IT community’s common parlance, was no stroke of genius from the visionary Gates. Nor did the plan spring, like Athena, fully formed from the chairman’s forehead. In fact, Trustworthy Computing owes its existence as much to four pieces of virulent malware as it does to Bill Gates’ vision and market savvy. This is the story of how worms drove one of the biggest transformations in the history of the technology industry.

“Not just a marketing problem”

In 2001, there was no Microsoft Security Response Center. The Windows Update service did not exist. Security bulletins were rudimentary, at best, and Windows XP had no default firewall.

For much of the previous two years, the most prevalent online threats had come in the form of mass-mailing computer viruses that used macros to cull contact information from infected computers. Each infection yielded a fresh batch of contacts and, with them, the next round of potential victims. The prominent threats of that generation – mass-mailing viruses like Melissa and LoveLetter – spawned some security changes from Microsoft. But the changes were iterative – Band-Aids on an obvious problem – not efforts at better or more secure product design.

The abrupt arrival of the Code Red worm in July of that year turned conventional thinking about the dangers of Internet-borne threats – and how to handle them – on its head. The worm, like many that would come after it, used a software vulnerability in a common Microsoft platform, and a slow response to the disclosure of that vulnerability, to devastating effect.

In June 2001, Microsoft released an advisory and patch for its Internet Information Server, warning of a security vulnerability in how it handled certain requests. Security firm eEye Digital Security had found the vulnerability and warned Microsoft of the issue. Microsoft quickly addressed the problem, but with little impact: customers had neither the tools nor the incentive to patch the flaw, recalls Marc Maiffret, chief technology officer of eEye.

“Microsoft was responsive, but they were trying to figure out how to handle security and to not just keep thinking of these issues as marketing problems,” Maiffret says. 

Less than a month later, Code Red arrived, exploiting that same vulnerability to spread from Web server to Web server. Maiffret and his team analyzed the code and named the worm after the variant of Mountain Dew they had been quaffing during the analysis. Nearly a half million servers were infected by the attack, according to estimates at the time. Maiffret recalls being surprised by the damage and disruption Code Red caused, both to customers and to the software industry itself.

“We understood the threat technically, but did not understand the impact it would have on the industry and the security landscape,” says Maiffret.

If Microsoft was not yet convinced that its products needed a security revamp, the Nimda virus, which started spreading weeks later, in September 2001, drove the message home. Nimda was dubbed a “blended” threat because it used multiple techniques to spread, including e-mail, open network shares on infected networks, Web pages, and direct attacks on vulnerable IIS installations. Nimda didn’t propagate as quickly as Code Red, but it was difficult to eradicate from affected networks. That meant more and longer support calls for Microsoft and more expensive remediation.

By the end of 2001, Microsoft was feeling the pressure from irate customers and from an increasingly attentive media, which lambasted the company for prioritizing features over underlying security. The company and its chairman realized they needed to start anew. Gates’ Trustworthy Computing Initiative e-mail would appear just two weeks into the new year, 2002.

“We stopped writing code.”

On Thursday, January 23, 2003, Tim Rains left Microsoft’s network support team and began his first day in the company’s incident response group. The engineer did not have much time to acclimate to his new position: within 48 hours, the Slammer worm hit, compromising tens of thousands of servers and inundating Rains’ group with support calls.

The virulent worm spread between systems running Microsoft’s SQL Server as well as applications that used embedded versions of the software, exploiting a flaw that had been patched six months earlier. The threat moved fast, earning the title of the world’s first flash worm: The program — 376 bytes of computer code — spread to 90 percent of all vulnerable servers in the first 10 minutes, according to a report by security researchers and academic computer scientists.
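To see why a random-scanning worm can blanket its target population that quickly, it helps to work through a back-of-the-envelope model of the kind researchers used to analyze Slammer. The Python sketch below is purely illustrative and is not code from the worm or from the researchers’ report; the address space, scan rate and vulnerable-host count are placeholder values. The key dynamic it captures is that every newly infected host adds scanning capacity, so infections grow exponentially until nearly all vulnerable machines have been found.

    # Toy model of a random-scanning ("flash") worm. All numbers are
    # illustrative placeholders, not Slammer's actual parameters.
    ADDRESS_SPACE = 4_000_000_000   # roughly IPv4-sized address space
    VULNERABLE = 75_000             # unpatched hosts hidden in that space
    SCAN_RATE = 4_000               # probes each infected host sends per second

    infected = 1.0                  # start from a single "patient zero"
    for second in range(1, 601):    # simulate ten minutes
        probes = infected * SCAN_RATE
        # chance a given vulnerable host is hit at least once this second
        hit_prob = 1.0 - (1.0 - 1.0 / ADDRESS_SPACE) ** probes
        infected += (VULNERABLE - infected) * hit_prob
        if second % 60 == 0:
            share = 100 * infected / VULNERABLE
            print(f"minute {second // 60:2d}: {share:5.1f}% of vulnerable hosts infected")

Even with these toy numbers, the model saturates within a few simulated minutes. Slammer’s single UDP packet required no connection handshake, so its spread was limited mainly by available bandwidth, which helps explain how the real worm could move as fast as it did.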

For Microsoft in 2003, Slammer was a reminder that the company still had a long way to go if it wanted to see its nascent Trustworthy Computing effort bear fruit. In the year since Gates’ memo was sent, the software maker had pushed through major changes to its software development process.

Following the Code Red and Nimda worms, Microsoft had changed course, focusing on securing its products and making them easier for customers to secure, and had created the Strategic Technology Protection Program in October 2001.

“When Code Red hit, I remember helping customers with their IIS servers, and that seemed pretty serious,” Rains says. “But when Blaster hit and Slammer hit, they were orders of magnitude above that.”

Helping users secure the company’s difficult-to-secure products was not enough. Microsoft also had to change an internal development culture that prioritized features over security.

Announcing the Trustworthy Computing Initiative in January 2002, Gates wrote: “When we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box.”

In the following 12 months, the company halted much of its product development, trained nearly 8,500 developers in secure programming, and then put them to work reviewing Windows code for security errors. The total tally of the effort: About $100 million, according to Microsoft’s estimates.

But Microsoft still needed more time and effort to improve its software. The very same Thursday that Rains started in the incident response group, Gates sent out a company-wide e-mail celebrating Trustworthy Computing’s first birthday and highlighting how far the company’s engineers still had to go to secure its products.

“While we’ve accomplished a lot in the past year, there is still more to do – at Microsoft and across our industry,” Gates wrote.

The Slammer worm attack just days later was a timely reminder to Microsoft of its failings and a convincing argument for continuing its costly crusade, particularly the work of cajoling its massive customer base into applying the security fixes it issued.

SQL Slammer was based on proof-of-concept code that UK security researcher David Litchfield had privately disclosed to Microsoft months before, and that the company had quickly patched. A demonstration of an exploit for the hole at the Black Hat Briefings in the summer of 2002 also raised the profile of the SQL Server vulnerability, but to no avail: few SQL Server users had applied the company’s patch by the time January rolled around. (Litchfield estimated that fewer than 1 in 10 servers had been patched prior to the release of Slammer.) Once the SQL Slammer worm began jumping from SQL Server installation to SQL Server installation, circling the globe in just minutes, there was little time left to patch.

Slammer, like its predecessors, forced still more radical changes in Microsoft’s corporate culture and procedures. Development of Yukon (SQL Server 2005) was put on hold, and the company’s entire SQL team went back over the codebases from Yukon to SQL Server 2000 to look for flaws. As Litchfield wrote in a Threatpost editorial, the effort, though costly, paid dividends:

“The first major flaw to be found in SQL Server 2005 came over 3 years after its release… So far SQL Server 2008 has had zero issues. Not bad at all for a company long considered the whipping boy of the security world.”

Slammer also prompted big changes in how Microsoft distributed patches and updates. The company simplified its update infrastructure, worked to improve the quality of its patches, and embarked on a number of information-sharing initiatives with the security community.

“A turning point”

The MSBlast, or Blaster, worm started spreading in August 2003. Within days, Rains and the security team were buried under an avalanche of support calls. Microsoft halted its regular work and conscripted much of the company’s programming staff to help respond to the threat.

“It really stands out how Microsoft mobilized,” Rains says. “We stopped writing code, and programmers came over to call centers that we had. I remember being in large rooms and training people to help customers.”

The worm took advantage of a vulnerability in Windows XP’s remote procedure call (RPC) functionality, which security professionals at the time called the most widespread flaw ever. In its first few months, the worm infected about 10 million PCs, according to Microsoft data. Eighteen months later, the software giant had updated the figure to more than 25 million.

“It was the turning point for us,” says Microsoft’s Rains. “We had already started getting serious because of SQL Slammer, but Blaster was the one that really galvanized the entire company.”

Two months after the Blaster worm started spreading, Microsoft changed the focus of its second service pack for Windows XP, dedicating the entire update to improving the security of users’ systems. In addition, the company kicked off a campaign to educate users and created a bounty program offering rewards for information leading to the arrest of the perpetrators behind Blaster and the Sobig virus.

While the changes were painful, the results have been largely positive, security professionals say.

“Sadly, the only time when technology companies do things to improve security is when they have enough black eyes,” says eEye’s Maiffret. “That’s what happened with Microsoft.”

Other companies and their products are now undergoing the same scrutiny by attackers. Hopefully, they will learn the same lessons.

CORRECTION: The training of rooms full of programmers and testers that Microsoft’s Tim Rains recalled took place not after the Slammer attack, but following the spread of Blaster. We have moved his statement regarding the training to the appropriate section.
