Facebook Removed Tens of Thousands of Apps Post-Cambridge Analytica

Facebook said it has suspended or banned tens of thousands of apps on its platform as a result of an investigation, launched in the wake of the Cambridge Analytica scandal, into how those apps collect and use data.

Facebook said it has suspended tens of thousands of apps as part of its ongoing investigation into how third-party apps on its platform collect, handle and utilize users’ personal data. The results of the investigation, launched in March 2018 in response to Facebook’s infamous Cambridge Analytica incident, sharpen the spotlight on the social media platform’s data collection policies.

The tens of thousands of apps, associated with about 400 developers, have either been suspended or banned completely. Facebook said that several of the apps in question inappropriately shared data obtained from its platform, made data publicly available without protecting people’s identities, or did something else “that was in clear violation of our policies.”

“We have not confirmed other instances of misuse to date other than those we have already notified the public about, but our investigation is not yet complete. We have been in touch with regulators and policymakers on these issues,” Ime Archibong, VP of product partnerships at Facebook, said in a Friday post. “We’ll continue working with them as our investigation continues.”

Facebook said that it initially identified apps for the investigation based on how many users they had and how much data they could access. It also identified apps based on “signals” associated with an app’s potential to abuse its policies. (Facebook did not expand on what those signals are; Threatpost has reached out for further clarification.)

“Where we have concerns, we conduct a more intensive examination,” said Facebook. “This includes a background investigation of the developer and a technical analysis of the app’s activity on the platform. Depending on the results, a range of actions could be taken, from requiring developers to submit to in-depth questioning, to conducting inspections or banning an app from the platform.”

Facebook also stressed that the suspension of an app did not necessarily mean it posed an actual privacy threat to users. Many apps, for instance, were not live but were still in their testing phase when they were suspended.

“It is not unusual for developers to have multiple test apps that never get rolled out,” said Facebook. “And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.”

One such banned app was myPersonality, which was mainly active prior to 2012 and allowed its users to participate in psychological research. Facebook said the app shared information with researchers and companies with only limited protections in place, and then refused Facebook’s request to participate in an audit. Facebook has notified the roughly 4 million impacted users that the app misused information collected from them.

Facebook said it has also taken legal action against various developers that did not comply with its privacy standards or cooperate with its investigations.

For instance, the company revealed that in May it had filed a lawsuit in California against South Korean data analytics company Rankwave for failing to cooperate with its investigation. It also filed suit against LionMobi and JediMobi, two companies that used their apps to infect users’ phones with malware in a profit-generating scheme.

“This lawsuit is one of the first of its kind against this practice,” said Facebook. “We detected the fraud, stopped the abuse and refunded advertisers. In another case, we sued two Ukrainian men, Gleb Sluchevsky and Andrey Gorbachov, for using quiz apps to scrape users’ data off our platform.”

The investigation came on the heels of the Cambridge Analytica scandal, when Facebook promised it would review all apps on its platform that had been able to access large amounts of data before 2014, when Facebook changed its platform policies to limit how much data apps could access.

Moving forward, Facebook on Friday said that its new agreement with the Federal Trade Commission (on the heels of the FTC’s investigation into, and subsequent penalty against, the social media platform) will introduce a new set of requirements for overseeing app developers. For instance, developers must annually certify compliance with Facebook’s policies.

“We won’t catch everything, and some of what we do catch will be with help from others outside Facebook,” said Facebook. “Our goal is to bring problems to light so we can address them quickly, stay ahead of bad actors and make sure that people can continue to enjoy engaging social experiences on Facebook while knowing their data will remain safe.”
