Phony Googlebots Becoming a Real DDoS Attack Tool

Phony Googlebots are being used with increasing frequency to carry out application-layer denial-of-service attacks.

Even an enterprise with the strictest blocking rules in place is likely to leave the door ajar for Google’s search crawler software, known as Googlebot.

Googlebots crawl websites, collecting data along the way to build a searchable index that ensures a site will be listed and ranked on the search engine.

Hackers have taken notice of the access afforded to these crawlers and are using spoofed Googlebots to launch application-layer distributed denial-of-service (DDoS) attacks with greater frequency.

Research released today from web security firm Incapsula identifies this as a growing trend among attackers: for every 25 genuine Googlebot visits, companies are likely to be visited by one fake. Almost a quarter of those phony Googlebots are used in DDoS attacks, making the fake Googlebot the third most popular DDoS bot in circulation, according to product evangelist Igal Zeifman.

Zeifman said Incapsula is able to identify Googlebot imposters because genuine Google crawlers come from a predetermined IP address range. All of the fakes are considered malicious and have been used for site scraping, spam and hacking attempts in addition to DDoS.
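Incapsula has not published its exact detection logic, but the cross-verification Zeifman describes can be illustrated with Google’s own documented method for confirming a crawler: a reverse DNS lookup on the claimed Googlebot’s IP address, followed by a forward lookup to confirm the hostname maps back to that same IP. The sketch below is a minimal illustration of that idea in Python, assuming the googlebot.com and google.com hostname suffixes; it is not Incapsula’s implementation.

```python
# Minimal sketch of Googlebot verification via reverse/forward DNS cross-checking.
# Hostname suffixes follow Google's documented crawler domains; everything else
# here is an illustrative assumption, not any vendor's product logic.
import socket

GOOGLE_CRAWLER_SUFFIXES = (".googlebot.com", ".google.com")


def is_real_googlebot(client_ip: str) -> bool:
    """Return True only if the IP reverse-resolves to a Google crawler
    hostname AND that hostname resolves back to the same IP address."""
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)        # reverse (PTR) lookup
    except OSError:
        return False                                            # no PTR record at all

    if not hostname.endswith(GOOGLE_CRAWLER_SUFFIXES):
        return False                                            # wrong domain -> imposter

    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)   # forward (A) lookup
    except OSError:
        return False

    return client_ip in forward_ips                             # must round-trip exactly


# A client claiming "Googlebot" in its User-Agent but failing this check
# would be treated as an imposter.
print(is_real_googlebot("66.249.66.1"))    # an address in Google's crawler range
print(is_real_googlebot("203.0.113.50"))   # a spoofed claim from elsewhere
```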

Zeifman said attackers’ success with this approach is due to a combination of two things.

“One is the assumption you can have indiscriminate protection. Even if you provide harsh blocking rules, and say block all traffic from x country, you still leave some way for Google to get in because you want to appear on Google,” Zeifman said. “Hackers are looking for a loophole. The more advanced [mitigation] tools are able to identify Googlebots, which is done by a cross-verification of IP addresses. But this also shows a low level of understanding by hackers of how modern DDoS protection works. They assume you can’t do IP cross verification.”

While network-layer DDoS attacks have reached enormous proportions, with amplification-driven floods pushing upwards of 400 Gbps of bad traffic, application-layer attacks don’t require nearly the same level of noise. Attackers can scout out a website’s resources and pinpoint their attacks, for example by continually requesting the download of a particular form hosted on a site, or by hammering other resource-heavy pages. Website designers tend to provision for the number of visitors per second or minute they anticipate, and even with some headroom that is rarely an outrageous number. It is therefore simple for an attacker to send more fake Googlebots at a resource than a page can handle.

“You don’t have to create a big flood to generate 5,000 visits per second,” Zeifman said. “It’s easy to generate 5,000 per second. Layer 7 attacks are more common for sure than Layer 3 or 4 events. The reason is that it’s easier to execute and more dangerous, even in low volumes.”
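One common countermeasure against this kind of low-volume flood is to cap how fast any single client can hit the expensive endpoints. The sketch below is a generic sliding-window rate limiter, not anything described by Incapsula; the paths, window size and threshold are illustrative assumptions.

```python
# Minimal sketch of a per-client sliding-window rate limiter for resource-heavy
# URLs. All endpoint paths and limits below are illustrative assumptions.
import time
from collections import defaultdict, deque

HEAVY_PATHS = {"/downloads/report-form", "/search"}   # expensive endpoints (assumed)
WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 10                          # per client, per window (assumed)

_request_log = defaultdict(deque)                     # client IP -> recent timestamps


def allow_request(client_ip, path, now=None):
    """Return False when a client exceeds the per-second cap on heavy paths."""
    if path not in HEAVY_PATHS:
        return True                                   # cheap pages are not limited here

    now = time.monotonic() if now is None else now
    window = _request_log[client_ip]

    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False                                  # likely part of a flood; throttle it
    window.append(now)
    return True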

There have been attacks, Zeifman said, in which hackers have used both network-layer and application-layer DDoS tactics simultaneously. In some of those attacks, he said, the hackers have also figured out how to beat application-layer DDoS mitigations that require the client to execute a JavaScript object in order to distinguish real browsers from bots.

“Can you execute the JavaScript? If not, then you are a bot posing as a browser,” Zeifman said. “They’ve figured out how to attack these resources to remove this tool from our arsenal.”
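The JavaScript challenge Zeifman refers to generally works by serving a small inline script that only a real, JavaScript-executing browser will run; the script returns a value (often via a cookie) that the server then validates before letting the original request through. The sketch below shows one plausible server-side design under that assumption; the token format, cookie name and signing scheme are illustrative and not a description of Incapsula’s product.

```python
# Minimal sketch of one common JavaScript-challenge design: the server embeds a
# signed token in an inline <script> that sets it as a cookie, then reloads.
# A plain bot that never executes JavaScript never sends the cookie back, so it
# can be blocked. Token format and secret handling here are assumptions.
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)      # per-deployment signing key (assumed)
TOKEN_TTL = 300              # seconds a token stays valid (assumed)


def make_challenge_page(client_ip: str) -> str:
    """Build the interstitial page: JavaScript sets the signed token as a
    cookie and reloads so the original request is retried with it."""
    issued = str(int(time.time()))
    payload = f"{client_ip}:{issued}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    token = f"{payload}:{sig}"
    return (
        "<html><body><script>"
        f'document.cookie = "js_ok={token}; path=/";'
        "location.reload();"
        "</script></body></html>"
    )


def cookie_is_valid(client_ip: str, token: str) -> bool:
    """A browser that executed the script returns a fresh, correctly signed token."""
    try:
        ip, issued, sig = token.rsplit(":", 2)
        if ip != client_ip or time.time() - int(issued) > TOKEN_TTL:
            return False                              # wrong client or stale token
    except ValueError:
        return False                                  # malformed cookie value
    expected = hmac.new(SECRET, f"{ip}:{issued}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)         # constant-time signature check
```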
