Facebook is dealing with another controversy related to the 2016 US presidential election, and the fallout could have implications for how the social network treats data and the community of software developers who have access to it.
On Saturday, the New York Times reported that a firm called Cambridge Analytica, which worked with the Donald Trump campaign, harvested information from 50 million Facebook accounts without the account holders' permission.
How Cambridge Analytica got that data is where it gets complicated. It allegedly leads back to a Cambridge professor named Aleksandr Kogan, who created an app called “thisisyourdigitallife,” a personality quiz that was billed as “a research app used by psychologists.”
He legitimately gained access to information on 270,000 accounts through Facebook's Login feature, which lets people use their Facebook account to log into outside apps so they don't have to create new user names and passwords. At the time Kogan allegedly collected the information, Facebook allowed developers to access not only information from the people who opted into the feature, but also some data about those people's networks of friends. That added up to info from 50 million accounts, according to the New York Times. (Facebook changed its rules three years ago to stop developers from seeing information about people's friends.)
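The amplification at the heart of the story, from 270,000 opt-ins to 50 million accounts, follows from that friends-data permission. A minimal sketch in Python illustrates the idea; the toy social graph, names, and the `profiles_visible_to_app` function are invented for illustration and are not Facebook's actual API:

```python
# Hypothetical sketch of the pre-2015 permission model described above:
# an app could read data on users who opted in AND on their friends.
# The graph, names, and function below are illustrative only.

# A toy social graph: each user maps to the set of their friends.
social_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "eve"},
    "carol": {"alice", "frank"},
}

def profiles_visible_to_app(opted_in_users, graph):
    """Accounts the app could read under the old model:
    the opt-ins themselves plus all of their friends."""
    visible = set(opted_in_users)
    for user in opted_in_users:
        visible |= graph.get(user, set())
    return visible

# A single user installs the quiz app, yet the app can see four profiles.
reach = profiles_visible_to_app({"alice"}, social_graph)
print(sorted(reach))  # ['alice', 'bob', 'carol', 'dave']
```

With one opted-in user exposing each of her friends, a few hundred thousand installs can surface tens of millions of profiles, which is the multiplier the Times report describes.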
Up to that point, Kogan's data-gathering was above board and in compliance with Facebook's rules.
But in 2015, Facebook learned that Kogan had passed on that data to Cambridge Analytica, Paul Grewal, Facebook VP and deputy general counsel, said in a blog post. When Facebook found out, the social network demanded Cambridge Analytica and Kogan destroy the data. Facebook said it received certifications they had complied.
Now, there are allegations not all the data had been destroyed.
In reaction to the controversy, Facebook’s most senior executives have been trying to stress that Facebook was deceived. Kogan shouldn’t have passed on the data, they said, and those involved shouldn’t have lied about deleting all of it.
“[Kogan] lied to those users and he lied to Facebook about what he was using the data for,” Facebook’s chief security officer Alex Stamos wrote in a now-deleted tweet.
But the dustup raises several questions about Facebook’s responsibility in all of this. Facebook’s trove of data on its 2 billion monthly users is a powerful tool. It helps millions of people connect with one another, while also helping advertisers send targeted ads based on what people put in their profiles.
To what extent is Facebook responsible for bad actors who abuse developer agreements? Could Facebook have done more to ensure the data had been destroyed — beyond accepting certifications it was deleted? How can the social network try to make sure other developers don’t pass on the data they collect to other parties?
Facebook didn’t respond to a request for comment asking the company these questions.
Still, critics put the blame squarely on the social network. “Facebook leaders think we are questioning their motives and sincerity. Hardly,” tweeted Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia. “We are calling them out on their policies, actions, and their consequences.”
For Facebook's part, Grewal said in his blog post that Facebook has made changes to how it deals with developers in the last five years. For example, developers must now undergo an app review that requires them to justify collecting certain types of data.
A broader problem
What happened with Cambridge Analytica is just the latest in a growing list of ways Facebook’s platform has been reportedly misused.
Malicious actors, from propagandists to trolls, have abused its systems, sometimes for nefarious ends. The social network is still under fire for Russian trolls abusing Facebook with ads and organic posts to sow discord among Americans and influence the 2016 election.
Because of that, Facebook has drawn ire from lawmakers threatening to impose regulations on the social network and its advertising business. Last November, Facebook, along with Twitter and Google, sent their general counsels to hearings on Capitol Hill.
The Cambridge Analytica incident has also raised the specter of regulation. “We continue to keep an eye on the louder chatter coming out of regulators and politicians around Facebook’s business model being used for improper means post the Russian meddling situation,” Daniel Ives, an analyst at GBH Insights, wrote in a note to investors Sunday.
Indeed, Congress has been quick to criticize Facebook. But this time lawmakers want answers straight from the top.
"It's clear these platforms can't police themselves," Amy Klobuchar, the Democratic senator from Minnesota, tweeted on Saturday night. "They say 'trust us.' Mark Zuckerberg needs to testify before Senate Judiciary."
But perhaps the most concerning part is that the Russians — and now Cambridge Analytica — didn’t abuse Facebook by breaching its security. There were no passwords stolen, no systems hacked. Instead, they used Facebook’s products exactly as they were built (though in the case of Kogan and Cambridge Analytica, the rule-breaking allegedly came later).
Stamos and Andrew Bosworth, another top exec at Facebook who used to run its ads business, have been quick to point out the difference between a data breach and a violation of the company's terms of service. Still, Bosworth acknowledged Facebook's responsibility to protect people from "predatory behavior."
“The distinction you’re drawing between this infraction and a data breach is meaningful from your perspective, but not from ours,” a Twitter user named Evan Baily replied to a Bosworth tweet. “Facebook’s platform must protect us from predatory behavior or we can’t and shouldn’t trust the platform.”
In response, Bosworth quoted the tweet, and added, “I agree with this.”
Updated, 3:48 p.m. PT: Adds comment from analyst note about regulation.