Microsoft Gets Second Shot at Banning hiQ from Scraping LinkedIn User Data

The decision throws out a previous ruling in favor of hiQ Labs that barred Microsoft’s business networking platform from forbidding the company to harvest public info from user profiles.

The U.S. Supreme Court has granted LinkedIn another legal option to try to prevent rival hiQ Labs from scraping public information from its user profiles, something the Microsoft-owned professional networking platform has claimed is a violation of user privacy and a misuse of its data.

The court has thrown out a case previously ruled in hiQ Labs’ favor and sent it back down to the lower court for further consideration. The court based its decision on its June 4 ruling in the case Van Buren v. United States that limited the type of conduct that can be prosecuted under the Computer Fraud and Abuse Act of 1986 (CFAA), which is also at the heart of the LinkedIn case.

The decision effectively vacates a 2019 ruling by the San Francisco-based U.S. 9th Circuit Court of Appeals that barred LinkedIn from blocking hiQ’s access to publicly available information on LinkedIn users’ profiles, sending the case back to the lower court to be heard again.

LinkedIn had claimed that hiQ’s actions violate the CFAA, a controversial anti-hacking law which, among other things, prohibits someone from accessing a computer without authorization. The company aimed to use the law to also prevent competitors from harvesting public data provided by users on its website and reusing it for their own gain.

HiQ Labs’ business model is based on using data science to create analytics tools that help companies make decisions about how to retain and train employees. To accomplish this, the company scrapes publicly available LinkedIn data to glean insights about employee behavior and skills. HiQ sued LinkedIn in 2017 for anti-competitive behavior after LinkedIn sent hiQ a cease-and-desist letter demanding that it stop scraping data.

In Van Buren v. United States, the court had limited the scope of the CFAA, ruling that a former police officer did not violate the law by accessing information for non-work purposes on a work computer he was authorized to use.

Ruling Reversal or Bot Dilemma?

At the time of the ruling in favor of hiQ, those supporting limits to the scope of the CFAA, such as the Electronic Frontier Foundation, called it a victory that clarified how the law could be used in court. The CFAA’s broad wording has worried ethical hackers and other industry watchers, who fear it could be interpreted in an over-reaching way that severely limits even well-intentioned internet activity.

Though it appears that giving LinkedIn another chance to argue its case could signal a reversal of opinion about the scope of the CFAA, it could instead reflect the court drawing a distinction between the actions of a single person and the power of the bots that companies like hiQ Labs use to scrape data at far higher volume than any human could, said one expert.

“The U.S. Supreme Court’s decision to allow LinkedIn Corp another chance to try to stop rival hiQ Labs Inc from harvesting personal user data from its platform is a groundbreaking case in the growing and evolving debate surrounding automated bot activity,” observed Edward Roberts, director of strategy for Application Security at Imperva, in an e-mail to Threatpost.

Companies use bots to scrape data because “humans are bad at repetitive tasks,” he said. “If you want to scrape millions of online profiles, you have to use a bot,” Roberts said. “If you want to scrape the prices of thousands of items in thousands of online stores, you have to use a bot.”
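Roberts’ point about automation can be sketched in a few lines. This is a hypothetical, self-contained illustration using Python’s standard-library HTML parser; the markup and the `profile-name` class are invented stand-ins, not LinkedIn’s actual page structure:

```python
from html.parser import HTMLParser

class ProfileScraper(HTMLParser):
    """Minimal scraper: collects the text inside elements tagged class="profile-name"."""
    def __init__(self):
        super().__init__()
        self._capture = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag being opened
        if ("class", "profile-name") in attrs:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.names.append(data.strip())
            self._capture = False

# Invented markup standing in for thousands of public profile pages.
pages = [
    '<div class="profile-name">Ada Lovelace</div>',
    '<div class="profile-name">Alan Turing</div>',
]

scraper = ProfileScraper()
for page in pages:  # a bot loops here where a human would click and read
    scraper.feed(page)

print(scraper.names)
```

A real scraping bot would fetch each page over HTTP and repeat this loop millions of times, which is exactly the scale Roberts argues no human workforce can match; it is also where questions of authorization and a site’s terms of service come into play.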

However, while scraping public data is not necessarily in and of itself a nefarious internet activity, the same technology enabling bots to do it “is also responsible for other automated threats like price scraping, account takeover, fraud, denial of service, and denial of inventory,” he noted.

The court, then, could be opening the door for the CFAA to be used to limit or prohibit these types of activities when they are malicious in intent, Roberts suggested.

“The bad bot problem is not only growing in volume; it’s also expanding in sophistication,” he observed. “The challenge is: the automation isn’t used exclusively by malicious actors; competitors also use bots to enable the collection of public data for market intelligence.”
