Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes — suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ to Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General of allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Facebook ignored staff warnings about “sketchy” Cambridge Analytica in September 2015

Facebook employees tried to alert the company about the activity of Cambridge Analytica as early as September 2015, per the SEC’s complaint against the company which was published yesterday.

This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a UK parliamentary committee last year.

Facebook only finally kicked the controversial data firm off its ad platform in March 2018 when investigative journalists had blown the lid off the story.

In a section on “red flags” raised about scandal-hit Cambridge Analytica’s potential misuse of Facebook user data, the SEC complaint reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply”.

Amid a flurry of major headlines for the company yesterday, including a $5BN FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100M to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.

This tidbit was slipped out towards the end of a lengthy blog post by Facebook general counsel Colin Stretch which focused on responding to the FTC order with promises to turn over a new leaf on privacy.

CEO Mark Zuckerberg also made no mention of the SEC settlement in his own Facebook note about what he dubbed a “historic fine”.

As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘get out of jail’ card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.

‘Historic fine’ is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.

And by nesting its disclosure of the SEC settlement inside effusive privacy-washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to distract attention from some really awkward details in its narrative about the Cambridge Analytica scandal — details which highlight ongoing inconsistencies and contradictions, to put it politely.

The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.

Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service”, and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015”, adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”

Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — which, as Facebook has sought to paint it, is the ‘rogue’ developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.

This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.

Fast forward to March 2018 and press reports revealing the scale and intent of the Cambridge Analytica data heist blew up into a global data scandal for Facebook, wiping billions off its share price.

The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is why it employed GSR co-founder Chancellor in the first place?

Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume because neither Facebook nor Chancellor have explained how he came to be hired.)

At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.

Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).

What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know because again Facebook isn’t saying anything.

More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it an “historic fine”.

We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”

We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns about the company his own firm was set up to sell Facebook data to; and how, of all the possible professionals it could have hired, Facebook identified Chancellor in the first place — and will update this post with any response. (A search for ‘quantitative researcher’ on LinkedIn’s platform returns more than 177,000 results for professionals using the descriptor in their profiles.)

Earlier this month a UK parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.

The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging enquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.

A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.

Under questioning in Congress last year, Facebook founder Zuckerberg also personally told congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.

Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.

There are more awkward details in the SEC complaint that Facebook seems keen to bury too — including that as part of a signed settlement agreement, GSR’s other co-founder Aleksandr Kogan told it in June 2016 that he had, in addition to transferring modelled personality profile data on 30M Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.

This US Facebook user data included personal information such as names, location, birthdays, gender and a sub-set of page likes.

Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…

The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.

The UK’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook users’ data, though it has not made public the names of the staff in question.

The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company narrative has implied up to now — although the exact timeline of when each staffer knew is not clear from the document, which covers the period from September 2015 to April 2017.

Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data — by April 2017 at the latest — the company’s leaders had put no reporting structures in place for them to be able to pass the information to regulators.

“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.

The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that the company was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.

“Despite Facebook’s suspicions about Cambridge and the red flags raised after the Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.

UK parliament’s call for Zuckerberg to testify goes next level

The UK parliament has issued an impressive ultimatum to Facebook in a last-ditch attempt to get Mark Zuckerberg to take its questions: Come and give evidence voluntarily or next time you fly to the UK you’ll get a formal summons to appear.

“Following reports that he will be giving evidence to the European Parliament in May, we would like Mr Zuckerberg to come to London during his European trip. We would like the session here to take place by 24 May,” the committee writes in its latest letter to the company, signed by its chair, Conservative MP Damian Collins.

“It is worth noting that, while Mr Zuckerberg does not normally come under the jurisdiction of the UK Parliament, he will do so the next time he enters the country,” he adds. “We hope that he will respond positively to our request, but if not the Committee will resolve to issue a formal summons for him to appear when he is next in the UK.”

Facebook has repeatedly ignored the DCMS committee’s requests that its CEO and founder appear before it — preferring to send various minions to answer questions related to its enquiry into online disinformation and the role of social media in politics and democracy.

The most recent Zuckerberg alternative to appear before it was also the most senior: Facebook’s CTO, Mike Schroepfer, who claimed he had personally volunteered to make the trip to London to give evidence.

However, for all Schroepfer’s sweating toil to try to stand in for the company’s chief exec, his answers failed to impress UK parliamentarians. And immediately following the hearing the committee issued a press release repeating its call for Zuckerberg to testify, noting that Schroepfer had failed to provide adequate answers to as many as 40 of its questions.

Schroepfer did sit through around five hours of grilling on a wide range of topics with the Cambridge Analytica data misuse scandal front and center — the story having morphed into a major global scandal for the company after fresh revelations were published by the Guardian in March (although the newspaper actually published its first story about Facebook data misuse by the company all the way back in December 2015) — though in last week’s hearing Schroepfer frequently fell back on claiming he didn’t know the answer and would have to “follow up”.

Yet the committee has been asking Facebook for straight answers for months. So you can see why it’s really mad now.

We reached out to Facebook to ask whether its CEO will now agree to personally testify in front of the committee by May 24, per its request, but the company declined to provide a public statement on the issue.

A company spokesperson did say it would be following up with the committee to answer any outstanding questions it had after Schroepfer’s session.

It’s fair to say Facebook has handled this issue exceptionally badly — leaving Collins to express public frustration about the lack of co-operation when, for example, he had asked it for help and information related to the UK’s Brexit referendum — turning what could have been a fairly easy-to-manage process into a major media circus-cum-PR nightmare.

Last week Schroepfer was on the sharp end of lots of awkward questions from visibly outraged committee members, with Collins pointing to what he dubbed a “pattern of behavior” by Facebook that he said suggested an “unwillingness to engage, and a desire to hold onto information and not disclose it”.

Committee members also interrogated Schroepfer about why another Facebook employee who appeared before it in February had not disclosed an existing agreement between Facebook and Cambridge Analytica.

“I remain to be convinced that your company has integrity,” he was told bluntly at one point during the hearing.

If Zuckerberg does agree to testify he’ll be in for an even bumpier ride. And, well, if he doesn’t it looks pretty clear the Facebook CEO won’t be making any personal trips to the UK for a while.

Cambridge Analytica has been shut out of Twitter’s ad platform too

It has emerged that Cambridge Analytica, the political consultancy firm at the center of a data misuse storm involving Facebook user data, has also been banned from advertising on Twitter’s platform.

Facebook suspended the company’s account in March after fresh revelations were published about how user data had been passed to the company by a developer on its platform — although the Guardian newspaper originally linked the firm to Facebook data in a story published in December 2015.

A Twitter spokesperson confirmed to us what the company describes as a “policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica”, adding the decision was taken “weeks” ago.

“This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules,” the company spokesperson added.

The move is unrelated to reports yesterday that Twitter had sold public user data to Dr Aleksandr Kogan — the Cambridge University academic who sold Facebook data to Cambridge Analytica in 2014, after harvesting it via an app that drew on Facebook’s APIs to pull information on users and their friends.

Last month Kogan told a UK parliamentary committee he had subsequently used some of the money Cambridge Analytica had paid him for gathering and processing the Facebook data to buy some Twitter data, though he said he had intended to use that for his own purposes, not for selling to others.

On this, Twitter’s spokesperson also told us: “Based on the recent reports, we conducted our own internal review and did not find any access to any private data about people who use Twitter. Unlike many other services, Twitter is public by its nature. People come to Twitter to speak publicly, and public Tweets are viewable and searchable by anyone. In 2015, GSR [Kogan’s company] did have one-time API access to a random sample of public Tweets from a five-month period from December 2014 to April 2015.”

Cambridge Analytica has also denied undertaking a project with Kogan’s company that used Twitter data.

The company has also continued to deny it received Facebook data — despite the existence of a 2014 contract between the company and Kogan to gather data; and despite Kogan’s own insistence that his app harvested Facebook user data.

Facebook has also said as many as 87 million users could have had some of their information harvested by Kogan and passed to Cambridge Analytica.

In a blog post late last month Twitter reiterated some of the policies it has in place to limit access to public Twitter data — even when a developer is paying for it, as Kogan was.

“We prohibit developers from inferring or deriving sensitive information like race or political affiliation, or attempts to match a user’s Twitter information with other personal identifiers in unexpected ways,” it wrote, flagging the Restricted Uses page for more info on types of behaviors it said are not tolerated, and adding: “Developers who are found to be in violation of our policies are subject to enforcement actions, including immediate termination.”

Despite barring Cambridge Analytica from running ads on its platform, Twitter has not suspended the company’s verified Twitter account — which the company continues to use to tweet denials related to the Facebook data misuse scandal.

Cambridge Analytica’s ex-CEO backs out of giving evidence to UK parliament

Alexander Nix, the former CEO of the political consultancy firm at the center of a storm about mishandled Facebook users’ data, has backed out of re-appearing in front of the UK parliament for a second time.

Nix had been scheduled to take questions from the DCMS committee that’s probing online misinformation tomorrow afternoon.

In a press notice today, the committee said: “The former CEO of Cambridge Analytica, Alexander Nix, is now refusing to appear before the Digital, Culture, Media and Sport Committee at a public session tomorrow, Wednesday 18th April, at 2.15pm. He cites the Information Commissioner’s Office’s ongoing investigation as a reason not to appear.”

Nix has already given evidence to the committee — in February — but last month it recalled him, saying it has fresh questions for him in light of revelations that millions of Facebook users had their data passed to CA in violation of Facebook’s policies.

It has also said it’s keen to press him on some of his previous answers, as a result of evidence it has heard since — including detailed testimony from CA whistleblower Chris Wylie late last month.

In a statement today about Nix’s refusal to appear, committee chair Damian Collins said it might issue a formal summons.

“We do not accept Mr Nix’s reason for not appearing in a public session before the Committee. We have taken advice and he has not been charged with any criminal offence and there are no active legal proceedings, and we plan to raise this with the Information Commissioner when we meet her this week. There is therefore no legal reason why Mr Nix cannot appear,” he said.

“The Committee is minded to issue a formal summons for him to appear on a named day in the very near future. We’ll make a further statement about this next week.”

When Nix attended the hearing on February 27 he claimed Cambridge Analytica does not “work with Facebook data”, also telling the committee: “We do not have Facebook data” — though he said the company uses the social media platform to advertise, and also “as a means to gather data”, adding: “We roll out surveys on Facebook that the public can engage with if they elect to.”

Since then Facebook has said information on as many as 87 million users of its platform could have been passed to CA, via a quiz app that was able to exploit its friends API to pull data on Facebook users’ friends.

The Facebook CEO, Mark Zuckerberg, has also been asked to give evidence to the committee — but has declined repeat requests to appear.

Today the committee heard from a former CA director, Brittany Kaiser, who suggested CA had in fact been able to obtain information on far more than 87M Facebook users — by the use of a series of additional quiz apps designed to be deployed on Facebook’s platform.

She claimed viral tactics were used to harvest Facebookers’ data, naming two additional survey apps it had deployed on Facebook’s platform as a ‘sex compass’ app and a music quiz app claiming to determine your personality. She said she believed the point of the quizzes was to harvest Facebook user data.

Facebook finally suspended Cambridge Analytica from its platform last month — although the company has admitted it had been aware of the allegations linking the firm with a quiz app that harvested Facebook users’ data since at least December 2015, when the Guardian published its first article on the story.

Last month the UK’s data protection agency obtained a warrant to enter and search the offices of Cambridge Analytica — as part of an ongoing investigation into the use of data analytics for political purposes which it kicked off in May 2017.

The information commissioner said the warrant had been necessary as CA failed to meet an earlier deadline to hand over information that it had requested.

Meanwhile, Nix himself was suspended as CEO by CA last month, after a Channel 4 News investigation broadcast video footage of Nix talking to an undercover reporter and appearing to suggest the firm uses a range of dubious tactics — including front companies and subcontractors — to secretly engage in political campaigns.

In a statement at the time, CA said the secretly recorded comments — and “other allegations” — “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation”.

It has since been reported that Julian Wheatland, the chair of the company’s UK counterpart, SCL Group, will be taking over as CA’s CEO — though this has not yet been publicly confirmed. The company has said that the acting CEO, Dr Alexander Taylor, who took over from Nix last month, has returned to his former role as chief data officer.

Cambridge University hits back at Zuckerberg’s shade

Facebook’s CEO Mark Zuckerberg’s testimony to the House yesterday was a mostly bland performance, punctuated by frequent claims not to know or remember certain fundamental aspects of his own business. But he gave a curiously specific and aggressive response to a question from congressman Eliot Engel.

Starting from the premise that Facebook had been “deceived” by other players in the data misuse scandal it’s embroiled in, the congressman wondered whether Facebook intends to sue Cambridge Analytica, professor Aleksandr Kogan and Cambridge University — perhaps for unauthorized access to computer networks or breach of contract?

“It’s something that we’re looking into,” replied Zuckerberg. “We already took action by banning [Kogan] from the platform and we’re going to be doing a full audit to make sure he gets rid of all the data that he has as well.”

But the Facebook founder also seized on the opportunity to indulge in a little suggestive shade throwing which looked very much like an attempt to blame-shift responsibility for the massive data scandal embroiling his company onto, of all things, one of the UK’s most prestigious universities. (Which, full disclosure, is my own alma mater.)

“To your point about Cambridge University what we’ve found now is that there’s a whole program associated with Cambridge University where a number of researchers — not just Aleksandr Kogan, although to our current knowledge he’s the only one who sold the data to Cambridge Analytica — there are a number of the researchers who are building similar apps,” said Zuckerberg.

“So we do need to understand whether there is something bad going on at Cambridge University overall that will require a stronger action from us.”

What’s curious about this response is that Zuckerberg fails to mention how Facebook’s own staff have worked with the program he suggests his company has only “found now” — as if it had only discovered the existence of the Cambridge University Psychometrics Centre, whose researchers have in fact been working with Facebook data since at least 2007, once the Cambridge Analytica story snowballed into a major public scandal last month.

A Facebook data-related project that the center is involved with, called the myPersonality Project — which started as a student side project of the now deputy director of the Psychometrics Centre, David Stillwell — was essentially the accidental inspiration for Kogan’s thisismydigitallife quiz app, according to testimony given to the UK parliament by former Cambridge Analytica employee Chris Wylie last month.

Here’s how the project is described on the Centre’s website:

myPersonality was a popular Facebook application that allowed users to take real psychometric tests, and allowed us to record (with consent!) their psychological and Facebook profiles. Currently, our database contains more than 6,000,000 test results, together with more than 4,000,000 individual Facebook profiles. Our respondents come from various age groups, backgrounds, and cultures. They are highly motivated to answer honestly and carefully, as the only gratification that they receive for their participation is feedback on their results.

The center itself has been active within Cambridge University since 2005, conducting research, teaching and product development in pure and applied psychological assessment — and claiming to have seen “significant growth in the past twelve years as a consequence of the explosion of activity in online communication and social networks”. 

And while it’s of course possible that Zuckerberg and his staff might not have been aware of the myPersonality Facebook app project — after all, 4M Facebook profiles harvested is rather less than the up to 87M Kogan was able to extract, also apparently without Facebook noticing — what’s rather harder for Zuckerberg to deny knowledge of is the fact that his company’s own staff have worked with Cambridge University researchers on projects analyzing Facebook data for psychological profiling purposes for years. Since at least 2015.

In a statement provided to TechCrunch yesterday, the University expressed surprise at Zuckerberg’s remarks to the House.

“We would be surprised if Mr Zuckerberg was only now aware of research at the University of Cambridge looking at what an individual’s Facebook data says about them,” a spokesperson told us. “Our researchers have been publishing such research since 2013 in major peer-reviewed scientific journals, and these studies have been reported widely in international media. These have included one study in 2015 led by Dr Aleksandr Spectre (Kogan) and co-authored by two Facebook employees.”

The two Facebook employees who worked alongside Kogan (who was using the surname Spectre at the time) on that 2015 study — which looked at international friendships as a class marker by examining Facebook users’ friend networks — are named in the paper as Charles Gronin and Pete Fleming.

It’s not clear whether Gronin still works for Facebook. But a LinkedIn search suggests Fleming is now head of research for Facebook-owned Instagram.

We’ve asked Facebook to confirm whether the two researchers are still on its payroll and will update this story with any response.

In its statement, Cambridge University also said it’s still waiting for Facebook to provide it with evidence regarding Kogan’s activities. “We wrote to Facebook on 21 March to ask it to provide evidence to support its allegations about Dr Kogan. We have yet to receive a response,” it told us.

For his part Kogan has maintained he did nothing illegal — telling the Guardian last month that he’s being used as a scapegoat by Facebook.

We’ve asked Facebook to confirm what steps it’s taken so far to investigate Kogan’s actions regarding the Cambridge Analytica misuse of Facebook data — and will update this story with any response.

During his testimony to the House yesterday Zuckerberg was asked by congressman Mike Doyle when exactly Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article.

In his testimony to the UK parliament last month, Wylie suggested Facebook might have known about the app as early as July 2014 because he said Kogan had told him he’d been in touch with some Facebook engineers to try to resolve problems with the rate that data could be pulled off the platform by his app.

But giving a “yes” response to Doyle, Zuckerberg reiterated Facebook’s claim that the company first learned about the issue at the end of 2015, when the Guardian broke the story.

At another point during this week’s testimony Zuckerberg was also asked whether any Facebook staff had worked alongside Cambridge Analytica when they were embedded with the Trump campaign in 2016. On that he responded that he didn’t know.

Yet another curious aspect to this story is that Facebook hired the co-director of GSR, the company Kogan set up to license data to Cambridge Analytica — as the Guardian reported last month.

According to the report, which cited Chancellor’s LinkedIn profile (since deleted), Joseph Chancellor was hired by Facebook around November 2015, about two months after he had left GSR.

Chancellor remains listed as an employee at Facebook research, working on human computer interaction & UX, where his biography confirms he also used to be a researcher at the University of Cambridge…

I am a quantitative social psychologist on the User Experience Research team at Facebook. Before joining Facebook, I was a postdoctoral researcher at the University of Cambridge, and I received my Ph.D. in social and personality psychology from the University of California, Riverside. My research examines happiness, emotions, social influences, and positive character traits.

We’ve asked Facebook when exactly it hired Chancellor; for what purposes; and whether it had any concerns about employing someone who had worked for a company that had misused its own users’ data.

At the time of writing the company had not responded to these questions either.

How Facebook has reacted since the data misuse scandal broke

Facebook founder Mark Zuckerberg will be questioned by US lawmakers today about the “use and abuse of data” — following weeks of breaking news about a data misuse scandal dating back to 2014.

The Guardian published its first story linking Cambridge Analytica and Facebook user data in December 2015. The newspaper reported that the Ted Cruz campaign had paid UK academics to gather psychological profiles about the US electorate using “a massive pool of mainly unwitting US Facebook users built with an online survey”.

Post-publication, Facebook released just a few words to the newspaper — claiming it was “carefully investigating this situation”.

Yet more than a year passed with Facebook seemingly doing nothing to limit third party access to user data or to offer more transparent signposting on how its platform could be — and was being — used for political campaigns.

Through 2015 Facebook had actually been ramping up its internal focus on elections as a revenue generating opportunity — growing the headcount of staff working directly with politicians to encourage them to use its platform and tools for campaigning. So it can hardly claim it wasn’t aware of the value of user data for political targeting.

Yet in November 2016 Zuckerberg publicly rubbished the idea that fake news spread via Facebook could influence political views — calling it a “pretty crazy idea”. This at the very time the company was embedding its own staff with political campaigns to help them spread election messages.

Another company was also involved in the political ad targeting business. In 2016 Cambridge Analytica signed a contract with the Trump campaign. According to former employee Chris Wylie — who last month supplied documentary evidence to the UK parliament — it licensed Facebook users’ data for this purpose.

The data was acquired and processed by Cambridge University professor Aleksandr Kogan whose personality quiz app, running on Facebook’s platform in 2014, was able to harvest personal data on tens of millions of users (a subset of which Kogan turned into psychological profiles for CA to use for targeting political messaging at US voters).

Cambridge Analytica has claimed it licensed data on no more than 30M Facebook users — and has also claimed it didn’t actually use any of the data for the Trump campaign.

But this month Facebook confirmed that data on as many as 87M users was pulled via Kogan’s app.

What’s curious is that since March 17, 2018 — when the Guardian and New York Times published fresh revelations about the Cambridge Analytica scandal, estimating that around 50M Facebook users could have been affected — Facebook has released a steady stream of statements and updates, including committing to a raft of changes to tighten app permissions and privacy controls on its platform.

The timing of this deluge is not accidental. Facebook itself admits that many of the changes it’s announced since mid March were already in train — long planned compliance measures to respond to an incoming update to the European Union’s data protection framework, the GDPR.

If GDPR has a silver lining for Facebook — and a privacy regime which finally has teeth that can bite is not something you’d imagine the company would welcome — it’s that it can spin steps it’s having to make to comply with EU regulations as an alacritous and fine-grained response to a US political data scandal, and try to generate the impression it’s hyper-sensitive to (now highly politicized) data privacy concerns.

Reader, the truth is far less glamorous. GDPR has been in the works for years and — like the Guardian’s original Cambridge Analytica scoop — its final text also arrived in December 2015.

On the GDPR prep front, in 2016 — during Facebook’s Cambridge Analytica ‘quiet period’ — the company itself told us it had assembled “the largest cross functional team” in the history of its family of companies to support compliance.

Facebook and Zuckerberg really have EU regulators to thank for forcing the company to do so much of the groundwork now underpinning its response to its largest ever data scandal.

Below is a quick timeline of how Facebook has reacted since mid March — when the story morphed into a major public scandal…

March 16, 2018: Just before the Guardian and New York Times publish fresh revelations about the Cambridge Analytica scandal, Facebook quietly drops the news that it has finally suspended CA/SCL. Why it didn’t do this years earlier remains a key question

March 17: In an update on the CA suspension Facebook makes a big show of rejecting the notion that any user data was ‘breached’. “People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked,” it writes

March 19: Facebook says it has hired digital forensics firm Stroz Friedberg to perform an audit on the political consulting and marketing firm Cambridge Analytica. It subsequently confirms its investigators have left the company’s UK offices at the request of the national data watchdog which is running its own investigation into use of data analytics for political purposes. The UK’s information commissioner publicly warns the company its staff could compromise her investigation

March 21: Zuckerberg announces further measures relating to the scandal — including a historical audit, saying apps and developers that do not agree to a “thorough audit” will be banned, and committing to tell all users whose data was misused. “We will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well,” he writes on Facebook.

He also says developers’ access to user data will be removed if people haven’t used the app in three months. And says Facebook will also reduce the data users give to an app when they sign in — to just “your name, profile photo, and email address”.

Facebook will also require developers to not only get approval but also “sign a contract in order to ask anyone for access to their posts or other private data”, he says.

Another change he announces in the post: Facebook will start showing users a tool at the top of the News Feed “to make sure you understand which apps you’ve allowed to access your data” and with “an easy way to revoke those apps’ permissions to your data”.

He concedes that while Facebook already had a tool to do this in its privacy settings people may not have seen or known that it existed.

These sorts of changes are very likely related to GDPR compliance.

Another change the company announces on this day is that it will expand its bug bounty program to enable people to report misuse of data.

It confirms that some of the changes it’s announced were already in the works as a result of the EU’s GDPR privacy framework — but adds: “This week’s events have accelerated our efforts”

March 25: Facebook apologizes for the data scandal with a full page ad in newspapers in the US and UK

March 28: Facebook announces changes to privacy settings to make them easier to find and use. It also says terms of services changes aimed at improving transparency are on the way — also all likely to be related to GDPR compliance

March 29: Facebook says it will close down a 2013 feature called Partner Categories — ending the background linking of its user data holdings with third party data held by major data brokers. Also very likely related to GDPR compliance

At the same time, in an update on parallel measures it’s taking to fight election interference, Facebook says it will launch a public archive in the summer showing “all ads that ran with a political label”. It specifies this will show the ad creative itself; how much money was spent on each ad; the number of impressions it received; and the demographic information about the audience reached. Ads will be displayed in the archive for four years after they ran

April 1: Facebook confirms to us that it is working on a certification tool that requires marketers using its Custom Audience ad targeting platform to guarantee email addresses were rightfully attained and that users consented to their data being used for marketing purposes — apparently attempting to tighten up its ad targeting system (again, GDPR is the likely driver for that)

April 3: Facebook releases the bulk app deletion tool Zuckerberg trailed as coming in the wake of the scandal — though this still doesn’t give users a select-all option, it makes the process a lot less tedious than it was.

It also announces it has culled a swathe of IRA Russian troll farm pages and accounts on Facebook and Instagram. It adds that it will be updating its help center tool “in the next few weeks” to enable people to check whether they liked or followed one of these pages. It’s not clear whether it will also proactively push notifications to affected users

April 4: Facebook outs a rewrite of its T&Cs — again, likely a compliance measure to try to meet GDPR’s transparency requirements — making it clearer to users what information it collects and why. It doesn’t say why it took some 14 years to come up with a plain English explainer of the user data it collects

April 4: Buried in an update on a range of measures to reduce data access on its platform — such as deleting Messenger users’ call and SMS metadata after a year, rather than retaining it — Facebook reveals it has disabled a search and account recovery tool after “malicious actors” abused the feature — warning that “most” Facebook users will have had their public info scraped by unknown entities.

The company also reveals a breakdown of the top ten countries affected by the Cambridge Analytica data leakage, and subsequently reveals 2.7M of the affected users are EU citizens

April 6: Facebook says it will require admins of popular pages and advertisers buying political or “issue” ads on “debated topics of national legislative importance” like education or abortion to verify their identity and location — in an effort to fight disinformation on its platform. Those that refuse, are found to be fraudulent or are trying to influence foreign elections will have their Pages prevented from posting to the News Feed or their ads blocked

April 9: Facebook says it will begin informing users if their data was passed to Cambridge Analytica from today by dropping a notification into the News Feed.

It also offers a tool where people can do a manual check

April 9: Facebook also announces an initiative aimed at helping social science researchers gauge the product’s impact on elections and political events.

The initiative is funded by the Laura and John Arnold Foundation, Democracy Fund, the William and Flora Hewlett Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Omidyar Network, and the Alfred P. Sloan Foundation.

Facebook says the researchers will be given access to “privacy-protected datasets” — though it does not detail how people’s data will be robustly anonymized — and says it will not have any right of review or approval over research findings prior to publication.

Zuckerberg claims the election research commission will be “independent” of Facebook and will define the research agenda, soliciting research on the effects of social media on elections and democracy

April 10: Per its earlier announcement, Facebook begins blocking apps from accessing user data 90 days after non-use. It also rolls out the earlier trailed updates to its bug bounty program

A brief history of Facebook’s privacy hostility ahead of Zuckerberg’s testimony

The Facebook founder will be questioned by the Senate Judiciary and Senate Commerce Committees later today — in a session entitled “Facebook, Social Media Privacy, and the Use and Abuse of Data.”

Mark Zuckerberg is also due to testify before Congress on Wednesday — to be asked about the company’s use and protection of user data.

As we’ve pointed out already, his written testimony is pretty selective and self-serving in terms of what he does and doesn’t include in his version of events.

Indeed, in the face of the snowballing Cambridge Analytica data misuse scandal, the company’s leadership (see also: Sheryl Sandberg) has been quick to try to spin an idea that it was simply too “idealistic and optimistic” — and that ‘bad actors’ exploited its surfeit of goodwill.

This of course is pure fiction.

Facebook’s long history of privacy hostility should make that plain to any thinking person. As former FTC director David Vladeck wrote earlier this month: “Facebook can’t claim to be clueless about how this happened. The FTC consent decree put Facebook on notice.”

To be clear, that’s the 2011 FTC consent decree — ergo, a major regulatory privacy sanction that Facebook incurred well over six years ago.

Every Facebook privacy screw-up since has been either careless or intentional.

Vladeck’s view is that Facebook’s actions were indeed calculated. “All of Facebook’s actions were calculated and deliberate, integral to the company’s business model, and at odds with the company’s claims about privacy and its corporate values,” he argues.

So we thought it would be helpful to compile an alternative timeline ahead of Zuckerberg’s verbal testimony, highlighting some curious details related to the Cambridge Analytica data misuse scandal — such as why Facebook hired (and apparently still employs) the co-director of the company that built the personality quiz app that “improperly shared” so much Facebook data with the controversial company — as well as detailing some of its other major privacy missteps over the years.

There are A LOT of these so forgive us if we’ve missed anything — and feel free to put any additions in the comments.

 

Facebook: An alternative timeline

February 2004 — Facebook is launched by Harvard College student Mark Zuckerberg

September 2006 — Facebook launches News Feed, broadcasting the personal details of Facebook users — including relationship changes — without their knowledge or consent. Scores of users protest at the sudden privacy intrusion. Facebook goes on to concede: “We really messed this one up… we did a bad job of explaining what the new features were and an even worse job of giving you control of them.”

November 2007 — Facebook launches a program called Beacon, injecting personal information such as users’ online purchases and video rentals on third party sites into the News Feed without their knowledge or consent. There’s another massive outcry — and a class action lawsuit is filed. Facebook eventually pays $9.5M to settle the lawsuit. It finally shutters the controversial program in 2009

May 2008 — a complaint is filed with the Privacy Commissioner of Canada concerning the “unnecessary and non-consensual collection and use of personal information by Facebook”. The following year the company is found to be “in contravention” of the country’s Personal Information Protection and Electronic Documents Act. Facebook is told to make changes to its privacy policy and tools — but the Commissioner is still expressing concerns at the end of 2009

February 2009 — Facebook revises its terms of service to state that users can’t delete their data when they leave the service and there’s another outcry. Backpedaling furiously in a subsequent conference call, Zuckerberg says: “We do not own user data, they own their data. We never intended to give that impression and we feel bad that we did”

November & December 2009 — Facebook again revises its privacy policy and the privacy settings for users and now, in one fell swoop, it makes a range of personal information public by default — available for indexing on the public web. We describe this as a privacy fiasco. Blogging critically about the company’s actions, the EFF also warns: “Major privacy settings are now set to share with everyone by default, in some cases without any user choice”

December 2009 — a complaint (and supplementary complaint) is filed by EPIC with the FTC about Facebook’s privacy settings and privacy policy, with the coalition of privacy groups asserting these are inconsistent with the site’s information sharing practices, and that Facebook is misleading users into believing they can still maintain control over their personal information. The FTC later writes a letter saying the complaint “raises issues of particular interest for us at this time”

April 2010 — four senators call on Facebook to change its policies after it announces a product called Instant Personalization — which automatically hands over some user data to certain third-party sites as soon as a person visits them. The feature has an opt-out but Facebook users are default opted in. “[T]his class of information now includes significant and personal data points that should be kept private unless the user chooses to share them,” the senators warn

May 2010 — following another user backlash against settings changes Facebook makes changes to its privacy controls yet again. “We’re really going to try not to have another backlash,” says Facebook’s VP of product Chris Cox. “If people say they want their stuff to be visible to friends only, it will apply to that stuff going forward”

May 2010 — EPIC complains again to the FTC, requesting an investigation. The watchdog quietly begins an investigation the following year

May 2010 — Facebook along with games developer Zynga is reported to the Norwegian data protection agency. The complaint focuses on app permissions, with the Consumer Council warning about “unreasonable and unbalanced terms and conditions”, and how Facebook users are unwittingly granting permission for personal data and content to be sold on

June 2011 — EPIC files another complaint to the FTC, focused on Facebook’s use of facial recognition technology to automatically tag users in photos uploaded to its platform

August 2011 — lawyer and privacy campaigner Max Schrems files a complaint against Facebook Ireland flagging its app permissions data sinkhole. “Facebook Ireland could not answer me which applications have accessed my personal data and which of my friends have allowed them to do so,” he writes. “Therefore there is practically no way how I could ever find out if a developer of an application has misused data it got from Facebook Ireland in some way”

November 2011 — Facebook settles an eight-count FTC complaint over deceptive privacy practices, agreeing to make changes opt-in going forward and to gain express consent from users to any future changes. It must also submit to privacy audits every two years for the next 20 years; bar access to content on deactivated accounts; and avoid misrepresenting the privacy or security of user data. The settlement with the FTC is finalized the following year. Facebook is not fined

December 2011 — Facebook agrees to make some changes to how it operates internationally following Schrems’ complaint leading to an audit of its operations by the Irish Data Protection Commission

September 2012 — Facebook turns off an automatic facial recognition feature in Europe following another audit by Ireland’s Data Protection Commission. The privacy watchdog also recommends Facebook tightens app permissions on its platform, including to close down developers’ access to friends data

April 2013 — Facebook launches Partner Categories: Further enriching the capabilities of its ad targeting platform by linking up with major data broker companies which hold aggregate pools of third party data, including information on people’s offline purchases. Five years later Facebook announces it’s ending this access, likely as one of the measures needed to comply with the EU’s updated privacy framework, GDPR

May 2014 — Facebook finally announces at its developer conference that it will be shutting down an API that let developers harvest users’ friends data without their knowledge or consent, initially for new developers — giving existing developers a year-long window to continue sucking this data

May 2014 — Facebook only now switches off the public default for users’ photos and status updates, setting default visibility to ‘friends’

May 2014 — Cambridge University professor Aleksandr Kogan runs a pilot of a personality test app (called thisisyourdigitallife) on Facebook’s platform with around 10,000 users. His company, GSR, then signs a data-licensing contract with political consultancy Cambridge Analytica, in June 2014, to supply it with psychological profiles linked to US voters. Over the summer of 2014 the app is downloaded by around 270,000 Facebook users and ends up harvesting personal information on as many as 87 million people — the vast majority of whom would not have known about or consented to their data being passed on

June 2014 — Facebook data scientists publish a study detailing the results of an experiment on nearly 700,000 users to determine whether showing them more positive or negative sentiment posts in the News Feed would affect their happiness levels (as deduced by what they posted). Consent had not been obtained from the Facebook users whose emotions were being experimented on

February 2015 — a highly critical report by Belgium’s data watchdog examining another updated Facebook privacy policy asserts the company is breaching EU privacy law including by failing to obtain valid consent from users for processing their data

May 2015 — Facebook finally shutters its friends API for existing developers such as Kogan — but he has already been able to use this to suck out and pass on a massive cache of Facebook data to Cambridge Analytica

June 2015 — the Belgian privacy watchdog files a lawsuit against Facebook over the tracking of non-users via social plugins. Months later the court agrees. Facebook says it will appeal

November 2015 — Facebook hires Joseph Chancellor, the other founding director of GSR, to work as a quantitative social psychologist. Chancellor is still listed as a UX researcher at Facebook Research

December 2015 — the Guardian publishes a story detailing how the Ted Cruz campaign had paid UK academics to gather psychological profiles about the US electorate using “a massive pool of mainly unwitting US Facebook users built with an online survey”. After the story is published Facebook tells the newspaper it is “carefully investigating this situation” regarding the Cruz campaign

February 2016 — the French data watchdog files a formal order against Facebook, including for tracking web browsing habits and collecting sensitive user data such as political views without explicit consent

August 2016 — Facebook-owned WhatsApp announces a major privacy U-turn, saying it will start sharing user data with its parent company — including for marketing and ad targeting purposes. It offers a time-bound opt-out for the data-sharing but pushes a pre-ticked opt-in consent screen to users

November 2016 — facing the ire of regulators in Europe Facebook agrees to suspend some of the data-sharing between WhatsApp and Facebook (this regional ‘pause’ continues to this day). The following year the French data watchdog also puts the company on formal warning that data transfers it is nonetheless carrying out — for ‘business intelligence’ purposes — still lack a legal basis

November 2016 — Zuckerberg describes the idea that fake news on Facebook’s platform could have influenced the outcome of the US election as “a pretty crazy idea” — a comment he later says he regrets making, saying it was “too flippant” and a mistake

May 2017 — Facebook is fined $122M in Europe for providing “incorrect or misleading” information to competition regulators who cleared its 2014 acquisition of WhatsApp. It had told them it could not automatically match user accounts between the two platforms, but two years later announced it would indeed be linking accounts

September 2017 — Facebook is fined $1.4M by Spain’s data watchdog, including for collecting data on users’ ideology and tracking web browsing habits without obtaining adequate consent. Facebook says it will appeal

October 2017 — Facebook says Russian disinformation distributed via its platform may have reached as many as 126 million Facebook users — upping previous estimates of the reach of ‘fake news’. It also agrees to release the Russian ads to Congress, but refuses to make them public

February 2018 — Belgian courts again rule Facebook’s tracking of non-users is illegal. The company keeps appealing

March 2018 — the Guardian and New York Times publish fresh revelations, based on interviews with former Cambridge Analytica employee Chris Wylie, suggesting as many as 50M Facebook users might have had their information passed to Cambridge Analytica without their knowledge or consent. Facebook confirms 270,000 people downloaded Kogan’s app. It also finally suspends the account of Cambridge Analytica and its affiliate, SCL, as well as the accounts of Kogan and Wylie

March 21, 2018 — Zuckerberg gives his first response to the revelations about how much Facebook user data was passed to Cambridge Analytica — but omits to explain why the company delayed investigating

March 2018 — the FTC confirms it is (re)investigating Facebook’s privacy practices in light of the Cambridge Analytica scandal and the company’s prior settlement. Facebook also faces a growing number of lawsuits

March 2018 — Facebook outs new privacy controls, as part of its compliance with the EU’s incoming GDPR framework, consolidating settings from 20 screens to just one. However it will not confirm whether all privacy changes will apply for all Facebook users — prompting a coalition of consumer groups to call for a firm commitment from the company to make the new standard its baseline for all services

April 2018 — Facebook also reveals that somewhere between 1BN and 2BN users have had their public Facebook information scraped via a now disabled feature which allowed people to look up users by inputting a phone number or email. The company says it discovered the feature was abused by “malicious actors”, writing: “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way”

April 2018 — the UK’s data watchdog confirms Facebook is one of 30 companies it’s investigating as part of an almost year-long probe into the use of personal data and analytics for political targeting

April 2018 — Facebook announces it has shut down a swathe of Russian troll farm accounts

April 2018 — Zuckerberg agrees to give testimony in front of US politicians — but continues to ignore calls to appear before UK politicians to answer questions about the role of fake news on its platform and the potential use of Facebook data in the UK’s Brexit referendum

April 2018 — the Canadian and British Columbian privacy watchdogs announce they are combining existing investigations into Facebook and a local data firm, AggregateIQ, which has been linked to Cambridge Analytica. The next day Facebook reportedly suspends AggregateIQ‘s account on its platform

April 2018 — Facebook says it has started telling affected users whether their information was improperly shared with Cambridge Analytica

Facebook data misuse scandal affects “substantially” more than 50M, claims Wylie

Chris Wylie, the former Cambridge Analytica employee turned whistleblower whose revelations about Facebook data being misused for political campaigning have wiped billions off the company’s share price in recent days and led to the FTC opening a fresh investigation, has suggested the scale of the data leak is substantially larger than has been reported so far.

Giving evidence today, to a UK parliamentary select committee that’s investigating the use of disinformation in political campaigning, Wylie said: “The 50 million number is what the media has felt safest to report — because of the documentation that they can rely on — but my recollection is that it was substantially higher than that. So my own view is it was much more than 50M.”

We’ve reached out to Facebook about Wylie’s claim — but at the time of writing the company had not provided a response.

“There were several iterations of the Facebook harvesting project,” Wylie also told the committee, fleshing out the process through which he says users’ data was obtained by CA. “It first started as a very small pilot — firstly to see, most simply, is this data matchable to an electoral register… We then scaled out slightly to make sure that [Cambridge University professor Aleksandr Kogan] could acquire data in the speed that he said he could [via a personality test app called thisisyourdigitallife deployed via Facebook’s platform]. So the first real pilot of it was a sample of 10,000 people who joined the app — that was in late May 2014.

“That project went really well and that’s when we signed a much larger contract with GSR [Kogan’s company] in the first week of June… 2014. Where the app went out and collected surveys and people joined the app throughout the summer of 2014.”

The personal information the app was able to obtain via Facebook formed the “foundational dataset” underpinning both CA and its targeting models, according to Wylie.

“This is what built the company,” he claimed. “This was the foundational dataset that then was modeled to create the algorithms.”

Facebook has previously confirmed 270,000 people downloaded Kogan’s app — a data harvesting route which, thanks to the lax structure of Facebook’s APIs at the time, enabled the foreign political consultancy firm to acquire information on more than 50 million Facebook users, according to the Observer, the vast majority of whom would have had no idea their data had been passed to CA because they were never personally asked to consent to it.

Instead, their friends were ‘consenting’ on their behalf — likely also without realizing.

Earlier this month, after the latest CA revelations broke, the DCMS committee asked Facebook founder Mark Zuckerberg to answer their questions in person but he has so far declined their summons. Though it has just been reported that he may finally appear before Congress to face questions about how users’ data has been so widely misused via his platform.

In a letter to the DCMS committee, dated yesterday, Facebook said it is working with regulators in different countries to confirm exactly how many local users have been affected by the data leak.

It adds that around 1 per cent of the users whose data was illicitly obtained by CA were European Union users. This small proportion seems unsurprising, given CA was working for the Trump campaign — and therefore aiming to gather data on Americans for 2016 presidential campaign targeting purposes. EU citizens’ data wouldn’t have had any relevance to that.

“There will be two sets of data,” Facebook writes in its letter to the committee discussing the data passed to CA. “The first is people who downloaded the app, and the second is the number of friends of those people who have their privacy settings set in such a way that the app could see some of their data. This second figure will be much higher than the first and we will look to provide both broken down by country as soon as we can.”

Facebook’s privacy settings have caused major regulatory and legal headaches for the company over the years. In 2012, for example, Facebook settled with the FTC over charges it had deceived users by “telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public”.

And in 2011 and 2012, following a legal complaint by European privacy campaigner and lawyer Max Schrems, Facebook was urged by the Irish Data Protection Commissioner to tighten app permissions to avoid exactly the kind of friends data leakage that has now scaled into this major privacy scandal.

Instead, Facebook put off tightening up API permissions until as late as mid 2015 — thereby giving CA a window of opportunity to pull massive amounts of Facebook user data ahead of the 2016 US presidential election.

When CA’s (currently suspended) CEO, Alexander Nix, appeared before the DCMS committee in February he was asked whether the company worked with GSR and what use it made of GSR data. At that time Nix claimed CA had not used any GSR data.

The company is continuing to push this line, claiming in a series of tweets today that while it paid $500k for GSR data it subsequently “deleted the data”. It further claims it used alternative data sources and data sets to build its models. “Our algorithms and models bear no trace of it,” it has also tweeted re: the GSR data.

(Following the session, CA has also now put out a longer response statement, disputing multiple parts of Wylie’s testimony and claiming he has “misrepresented himself and the company”. In this it also claims: “Cambridge Analytica does not hold any GSR data or any data derived from GSR data. We have never shared the GSR data with Aggregate IQ [another alleged affiliate company], Palantir or any other entity. Cambridge Analytica did not use any GSR data in the work that we did for the Donald J. Trump for President campaign.”)

Asked by the committee about Nix’s earlier, contradictory testimony, Wylie wondered out loud why CA spent “the better part of $1M on GSR” — pointing also to “copious amounts of email” and other documents he says he has provided to the committee as additional evidence, including invoicing and “match rates on the data”.

“That’s just not true,” he asserted of CA’s claim not to have used GSR (and therefore Facebook) data.

Kogan himself has previously claimed he was unaware exactly what CA wanted to use the data for. “I knew it was for political consulting but beyond that no idea,” he told Anderson Cooper in a TV interview broadcast on March 21, claiming also that he did not know that CA was working for Trump or whether they even used the data his app had gathered.

Kogan also suggested the data he had been able to gather was not very accurate at an individual level — claiming it would only be useful in aggregate to, for example, “understand the personality of New Yorkers”.

Wylie was asked by the committee how the data was used by CA. Giving an example, he said the company’s approach was to target different people for advertising based on their “dispositional attributes and personality traits” — traits it sought to predict via patterns in the data.

He said:

For example, if you are able to create profiling algorithms that can predict certain traits — so let’s say a high degree of openness and a high degree of neuroticism — and when you look at that profiles that’s the profile of a person who’s more prone towards conspiratorial thinking, for example, they’re open enough to kind of connect to things that may not really seem reasonable to your average person. And they’re anxious enough and impulse enough to start clicking and reading and looking at things — and so if you can create a psychological profile of a type of person who is more prone to adopting certain forms of ideas, conspiracies for example, you can identify what that person looks like in data terms. You can then go out and predict how likely somebody is going to be to adopt more conspiratorial messaging. And then advertise or target them with blogs or websites or various — what everyone now calls fake news — so that they start seeing all of these ideas, or all of these stories around them in their digital environment. They don’t see it when they watch CNN or NBC or BBC. And they start to go well why is that everyone’s talking about this online? Why is it that I’m seeing everything here but the mainstream media isn’t talking about [it]… Not everyone’s going to adopt that — so that advantage of using profiling is you can find the specific group of people who are more prone to adopting that idea as your early adopters… So if you can find those people in your datasets because you know what they look like in terms of data you can catalyze a trend over time. But you first need to find what those people look like.

“That was the basis of a lot of our research [at CA and sister company SCL],” he added. “How far can we go with certain types of people. And who is it that we would need to target with what types of messaging.”

Wylie told the committee that Kogan’s company was set up exclusively for the purposes of obtaining data for CA, and said the firm chose to work with Kogan because another professor it had approached first had asked for a substantial payment up front and a 50% equity share — whereas he had agreed to work on the project to obtain the data first, and consider commercial terms later.

“The deal was that [Kogan] could keep all the data and do research or whatever he wanted to do with it and so for him it was appealing because you had a company that was the equivalent of no academic grant could compete with the amount of money that we could spend on it, and also we didn’t have to go through all the compliance stuff,” added Wylie. “So we could literally just start next week and pay for whatever you want. So my impression at the time was that for an academic that would be quite appealing.”

“All kinds of people [had] access to the data”

Another claim made by Wylie during the session was that the secretive US big data firm Palantir helped CA build models off of the Facebook data — although he also said there was no formal contract in place between the two firms.

Wylie said Palantir was introduced to CA’s Nix by Sophie Schmidt, Google chairman Eric Schmidt’s daughter, during an internship at CA.

“We actually had several meetings with Palantir whilst I was there,” claimed Wylie. “And some of the documentation that I’ve also provided to the committee… [shows] there were senior Palantir employees that were also working on the Facebook data.”

The VC-backed firm is known for providing government, finance, healthcare and other organizations with analytics, security and other data management solutions.

“That was not an official contract between Palantir and Cambridge Analytica but there were Palantir staff who would come into the office and work on the data,” Wylie added. “And we would go and meet with Palantir staff at Palantir. So, just to clarify, Palantir didn’t officially contract with Cambridge Analytica. But there were Palantir staff who helped build the models that we were working on.”

Contacted for comment on this allegation, a Palantir spokesperson denied it entirely — providing TechCrunch with this emailed statement: “Palantir has never had a relationship with Cambridge Analytica nor have we ever worked on any Cambridge Analytica data.”

The committee went on to ask Wylie why he was coming forward to tell this story now, given his involvement in building the targeting technologies — and therefore also his interests in the related political campaigns.

Wylie responded by saying that he had grown increasingly uncomfortable with CA during his time working there and with the methods being used.

“Nothing good has come from Cambridge Analytica,” he added. “It’s not a legitimate business.”

In a statement put out on Twitter yesterday, CA’s acting CEO Alex Tayler sought to distance the firm from Wylie and play down his role there, claiming: “The source of allegations is not a whistleblower or a founder of the company. He was at the company for less than a year, after which he was made the subject of restraining undertakings to prevent his misuse of the company’s intellectual property.”

Asked whether he’s received any legal threats since making his allegations public, Wylie said the most legal pushback he’s received so far has come from Facebook, rather than CA.

“It’s Facebook who’s most upset about this story,” he told the committee. “They’ve sent some fairly intimidating legal correspondence. They haven’t actually taken action on that… They’ve gone silent, they won’t talk to me anymore.

“But I do anticipate some robust pushback from Cambridge Analytica because this is sort of an existential crisis for them,” he added. “But I think that I have a fairly robust public interest defense to breaking that NDA and that undertaking of confidentiality [that he previously signed with CA].”

The committee also pressed Wylie on whether he himself had had access to the Facebook data he claims CA used to build its targeting models. Wylie said that he had, though he claims he deleted his copy of the data “some time in 2015”.

During the testimony Wylie also suggested Facebook might have found out about the GSR data harvesting project as early as July 2014 — because he says Kogan told him, around that time, that he had spoken to Facebook engineers after his app’s data collection rate had been throttled by the platform.

“He told me that he had a conversation with some engineers at Facebook,” said Wylie. “So Facebook would have known from that moment about the project because he had a conversation with Facebook’s engineers — or at least that’s what he told me… Facebook’s account of it is that they had no idea until the Guardian first reported it at the end of 2015 — and then they decided to send out letters. They sent letters to me in August 2016 asking do you know where this data might be, or was it deleted?

“It’s interesting that… the date of the letter is the same month that Cambridge Analytica officially joined the Trump campaign. So I’m not sure if Facebook was genuinely concerned about the data or just the optics of y’know now this firm is not just some random firm in Britain, it’s now working for a presidential campaign.”

We also asked Facebook if it had any general response to Wylie’s testimony but at the time of writing the company had not responded to this request for comment either.

Did Facebook make any efforts to retrieve or delete the data, the committee also asked Wylie. “No they didn’t,” he replied. “Not to my knowledge. They certainly didn’t with me — until after I went public and then they made me suspect number one despite the fact the ICO [UK’s Information Commissioner’s Office] wrote to me and to Facebook saying that no I’ve actually given over everything to the authorities.”

“I suspect that when Facebook looked at what happened in 2016… they went if we make a big deal of this this might be optically not the best thing to make a big fuss about,” he said. “So I don’t think they pushed it in part because if you want to really investigate a large data breach that’s going to get out and that might cause problems. So my impression was they wanted to push it under the rug.”

“All kinds of people [had] access to the data,” he added. “It was everywhere.”