Researchers To Debut Botnet-Resistant Coding Techniques

As the botnet epidemic continues to rage, researchers are expanding the scope of their search for new methods to prevent users from becoming unwitting victims of these massive malicious networks. One pair of researchers this week will unveil a new technique they’ve developed to help Web sites protect users whose machines already have been compromised by bots.

One of the main problems created by botnets is that many users whose PCs have been infected by a bot have no idea it’s happened. In most cases there are few outward signs noticeable to the average user, so victims go about their normal online business with no clue that their sensitive data is being packaged up and exfiltrated every day. Botnet traffic typically looks like normal port 80 Web traffic, so it’s extremely difficult for victims to identify it and trace it back to a bot infection.

Most anti-botnet efforts currently focus on finding the network’s command-and-control servers and either sinkholing them or working with hosting providers and law enforcement to take them offline. But this can be a long and laborious process, and it’s typically effective only on a temporary basis, as attackers often have layers of backup C&C servers ready to come online. The limitations of this approach led the Miami-based researchers to develop several methods that Web site operators can implement to limit the effectiveness of botnets’ data-extraction methods.

“Security infrastructure has matured and there’s been a lot of focus on that, but application security hasn’t been focused on as much,” said Peter Greko, a security researcher who, along with Fabian Rothschild, will talk about their new techniques at the OWASP AppSec DC conference this week. “A lot of security problems can be addressed in the application. If you go after the C&C, you only take out the bots connected to that server. That’s not an overall problem that can be solved.”

The methods that Greko and Rothschild developed are based on their analysis of the infamous Zeus Trojan and the way that it exfiltrates data and communicates with its C&C servers. The key concept behind their work is that they assume that all PCs are compromised, so their goal is to make whatever data the bot is trying to extract useless. In looking at the Zeus bot, the pair found that the bot uses HTTP POST request logging to gather data from Web sessions on compromised machines. It then sends the data to its remote C&C server via large POST requests, as well. The server on the back end logs the data in a large database, so Greko and Rothschild looked for ways to either prevent the data from reaching the C&C server or to make the data useless once it’s harvested.

One of the modules that Zeus uses to harvest sensitive data injects extra fields into the log-in pages of online banking and other high-value sites, asking users for account numbers or Social Security numbers. To inattentive users, these fields look just like legitimate Web form fields, so Greko and Rothschild developed a method that takes advantage of this behavior by injecting extraneous data into form fields that are hidden from the user. This has the effect of filling the C&C database with large amounts of junk data, making it more difficult for the botmaster to search the data store for the really valuable information, such as online banking credentials.

“We can use hidden parameters to bloat the code. We change the name of the parameter in the field and then use concatenation to create the extra data,” Greko said. “We use JavaScript to obfuscate it. The less useful data they have in their database, the more difficult it is to have a credible product to sell.”
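The article does not publish the researchers’ code, but the hidden-field idea can be sketched roughly as follows. This is a minimal illustration, not their implementation: the field names (`acct_no`, `ssn`, and so on) and the decoy-generation logic are assumptions. Because Zeus logs every field in a POST request, pre-filled hidden fields land in the C&C database alongside the real data.

```python
import secrets

# Hypothetical decoy field names chosen to resemble the high-value data
# a botmaster searches for; these names are illustrative assumptions.
DECOY_NAMES = ["acct_no", "ssn", "card_pin", "routing_no"]


def decoy_fields(n: int = 4) -> str:
    """Emit hidden <input> elements pre-filled with plausible junk.

    A bot that blindly logs every POSTed field captures these values
    alongside the real ones, bloating its back-end database with noise.
    """
    fields = []
    for name in DECOY_NAMES[:n]:
        # Nine random digits: shaped like an SSN or account number.
        junk = "".join(secrets.choice("0123456789") for _ in range(9))
        fields.append(f'<input type="hidden" name="{name}" value="{junk}">')
    return "\n".join(fields)


print(decoy_fields())
```

A site operator would render these alongside the legitimate form fields; to the server they are known noise, but to the bot’s POST logger they are indistinguishable from real input.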

The second method they developed uses the above technique, but adds the ability to prepend and append legitimate values with junk data in POST requests. Greko and Rothschild also combine that method with the replacement of certain values in the data with regular expressions in JavaScript. The goal is the same: poisoning the extracted data.
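A rough sketch of the padding idea, under stated assumptions (the fixed pad length, the helper names, and the example regex are all illustrative, not from the talk): the legitimate value is wrapped in junk before it travels in the POST request, so the copy the bot logs is poisoned, while the real server knows how to strip the padding.

```python
import re
import secrets

PAD_LEN = 6  # assumed fixed pad length, known only to the legitimate server


def pad_value(value: str) -> str:
    """Prepend and append random digits so a logged copy is useless."""
    pre = "".join(secrets.choice("0123456789") for _ in range(PAD_LEN))
    post = "".join(secrets.choice("0123456789") for _ in range(PAD_LEN))
    return pre + value + post


def unpad_value(padded: str) -> str:
    """Server side: strip the fixed-length junk to recover the real value."""
    return padded[PAD_LEN:-PAD_LEN]


def mask_account(text: str) -> str:
    """Illustration of the regex-replacement variant: rewrite anything
    shaped like an account number before it is ever logged."""
    return re.sub(r"\b\d{8,12}\b", "#########", text)
```

In the scheme the researchers describe, the padding and replacement run as obfuscated JavaScript in the browser; the Python above only shows the transformation itself.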

A third, more difficult and more resource-intensive method they developed involves using RC4 encryption with rotating keys to make the data sent to the C&C server unreadable by the botmaster. The encryption key is passed to the client in a GET request so that it’s not logged by the Zeus bot. Like the other techniques, this is designed to be implemented on the server side by site operators whose customers send valuable data through their servers: banks, credit card companies, e-commerce sites.
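For reference, RC4 itself is a small, well-known stream cipher; the sketch below shows it only because the talk names it, not as a recommendation for new designs (RC4 has known weaknesses). The per-session key rotation and GET-delivery framing here are assumptions about how such a scheme could be wired up, not the researchers’ published code.

```python
import secrets


def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4: key-scheduling (KSA) followed by the keystream
    generator (PRGA) XORed over the data. Encryption and decryption
    are the same operation."""
    # KSA: permute the state array S using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: generate keystream bytes and XOR them with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


# Rotating per-session key: in the scheme described, the server would
# hand this to the client in a GET response (which the Zeus module does
# not log), so only ciphertext ever appears in the POST the bot records.
session_key = secrets.token_bytes(16)
ciphertext = rc4(session_key, b"card=4111111111111111")
```

Because the bot logs only POST requests, it captures the ciphertext but never the key, leaving the botmaster with unreadable entries in the C&C database.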

“We’re not trying to take the bots out, we’re trying to undermine the credibility they have in each other,” said Rothschild, who, like Greko, is affiliated with the HackMiami hacker space. “The underground economy is based on trust and reputation. They can’t check public records on each other. If you get a guy who’s been trustworthy and now he’s selling bad data, you’ll wonder what’s going on.”

Greko said that although there are a number of different Zeus versions in circulation, their methods are effective against most of the known variants.

“This isn’t a black-and-white solution,” Greko said. “We’re just trying to damage the botmaster’s credibility and make it harder for him to find the usable data.”
