Tag Archives: Cambridge Analytica

Facebook really doesn’t want you to read these emails

Oh hey, y’all, it’s Friday! It’s August! Which means it’s a great day for Facebook to drop a little news it would prefer you don’t notice. News that you won’t find a link to on the homepage of Facebook’s Newsroom — which is replete with colorfully illustrated items it does want you to read (like the puffed up claim that “Now You Can See and Control the Data That Apps and Websites Share With Facebook”).

The blog post Facebook would really prefer you didn’t notice is tucked away in a News sub-section of this website — where it’s been confusingly entitled: Document Holds the Potential for Confusion. And has an unenticing grey image of a document icon to further put you off — just in case you happened to stumble on it after all. It’s almost as if Facebook is saying “definitely don’t click here“…


So what is Facebook trying to bury in the horse latitudes of summer?

An internal email chain, starting September 2015, which shows a glimpse of what Facebook’s own staff knew about the activity of Cambridge Analytica prior to The Guardian’s December 2015 scoop — when the newspaper broke the story that the controversial (and now defunct) data analytics firm, then working for Ted Cruz’s presidential campaign, had harvested data on millions of Facebook users without their knowledge and/or consent, and was using psychological insights gleaned from the data to target voters.

Facebook founder Mark Zuckerberg’s official timeline of events about what he knew when vis-à-vis the Cambridge Analytica story has always been that his knowledge of the matter dates to December 2015 — when the Guardian published its story.

But the email thread Facebook is now releasing shows internal concerns being raised almost two months earlier.

This chimes with previous (more partial) releases of internal correspondence pertaining to Cambridge Analytica — which have also come out as a result of legal actions (and which we’ve reported on previously here and here).

If you click to download the latest release, which Facebook suggests it ‘agreed’ with the District of Columbia Attorney General to “jointly make public”, you’ll find a redacted thread of emails in which Facebook staffers raise a number of platform policy violation concerns related to the “political partner space”, writing September 29, 2015, that “many companies seem to be on the edge- possibly over”.

Cambridge Analytica is first identified by name — when it’s described by a Facebook employee as “a sketchy (to say the least) data modelling company that has penetrated our market deeply” — on September 22, 2015, per this email thread. It is one of many companies the staffer writes are suspected of scraping user data — but is also described as “the largest and most aggressive on the conservative side”.


On September 30, 2015, a Facebook staffer responds to this, asking for App IDs and app names for the apps engaging in scraping user data — before writing: “My hunch is that these apps’ data-scraping is likely non-compliant”.

“It would be very difficult to engage in data-scraping activity as you described while still being compliant with FPPs [Facebook Platform Policies],” this person adds.

Cambridge Analytica gets another direct mention (“the Cambridge app”) on the same day. A different Facebook staffer then chips in with a view that “it’s very likely these companies are not in violation of any of our terms” — before asking for “concrete examples” and warning against calling them to ask questions unless “red flags” have been confirmed.

On October 13, a Facebook employee chips back into the thread with the view that “there are likely a few data policy violations here”.

The email thread goes on to discuss concerns related to additional political partners and agencies using Facebook’s platform at that point, including ForAmerica, Creative Response Concepts, NationBuilder and Strategic Media 21. Which perhaps explains Facebook’s lack of focus on CA — if potentially “sketchy” political activity was apparently widespread.

On December 11 another Facebook staffer writes to ask for an expedited review of Cambridge Analytica — saying it’s “unfortunately… now a PR issue”, i.e. as a result of the Guardian publishing its article.

The same day a Facebook employee emails to say Cambridge Analytica “is hi pri at this point”, adding: “We need to sort this out ASAP” — a month and a half after the initial concern was raised.

Also on December 11 a staffer writes that they had not heard of GSR, the Cambridge-based developer CA hired to extract Facebook user data, before the Guardian article named it. But other Facebook staffers chip in to reveal personal knowledge of the psychographic profiling techniques deployed by Cambridge Analytica and GSR’s Dr Aleksandr Kogan, with one writing that Kogan was their postdoc supervisor at Cambridge University.

Another says they are friends with Michal Kosinski, the lead author of a personality modelling paper that underpins the technique used by CA to try to manipulate voters — which they described as “solid science”.

A different staffer also flags the possibility that Facebook has worked with Kogan — ironically enough “on research on the Protect & Care team” — citing the “Wait, What thread” and another email, neither of which appear to have been released by Facebook in this ‘Exhibit 1’ bundle.

So we can only speculate on whether Facebook’s decision — around September 2015 — to hire Kogan’s GSR co-founder, Joseph Chancellor, appears as a discussion item in the ‘Wait, What’ thread…

Putting its own spin on the release of these internal emails in a blog post, Facebook sticks to its prior line that “unconfirmed reports of scraping” and “policy violations by Aleksandr Kogan” are two separate issues, writing:

We believe this document has the potential to confuse two different events surrounding our knowledge of Cambridge Analytica. There is no substantively new information in this document and the issues have been previously reported. As we have said many times, including last week to a British parliamentary committee, these are two distinct issues. One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica. This document proves the issues are separate; conflating them has the potential to mislead people.

It has previously also referred to the internal concerns raised about CA as “rumors”.

“Facebook was not aware that Kogan sold data to Cambridge Analytica until December 2015. That is a fact that we have testified to under oath, that we have described to our core regulators, and that we stand by today,” it adds now.

It also claims that after an engineer responded to concerns that CA was scraping data and looked into it they were not able to find any such evidence. “Even if such a report had been confirmed, such incidents would not naturally indicate the scale of the misconduct that Kogan had engaged in,” Facebook adds.

The company has sought to dismiss the privacy litigation brought against it by the District of Columbia which is related to the Cambridge Analytica scandal — but has been unsuccessful in derailing the case thus far.

The DC complaint alleges that Facebook allowed third-party developers to access consumers’ personal data, including information on their online behavior, in order to offer apps on its platform, and that it failed to effectively oversee and enforce its platform policies by not taking reasonable steps to protect consumer data and privacy. It also alleges Facebook failed to inform users of the CA breach.

Facebook has also failed to block another similar lawsuit that’s been filed in Washington, DC by Attorney General Karl Racine — which has alleged lax oversight and misleading privacy standards.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns about the scope of data processing set out in the Conservative Party’s Privacy Policy being flagged to it by a Twitter user, the Information Commissioner’s Office replied: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently being attached to an online call to action that asks Brits to tell the party what the most “important issue” to them and their family is, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick from a pre-populated list of 18 issues the three most important to them. The list runs the gamut from the National Health Service to brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His government’s working majority has been reduced to just one MP after the party lost a by-election to the Liberal Democrats last week, even as an October 31 brexit-related deadline fast approaches.

People who submit their personal data to the Conservative’s online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you will find a laundry list of examples of types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages contained within it. 

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which it launched in response to concerns about the lack of transparency around political ads on its platform, saying it would retain imprints of ads sent by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS committee later urged the government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, gave little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.


The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm, Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent — in that case using a quiz app wrapper and Facebook’s lack of any enforcement of its platform terms to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian and Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’ — and do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service, letting advertisers pay to identify and target other people with a similar social media usage profile to those whose contact details they already hold, by uploading their details to Facebook.

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile to others in the party’s target database.

Facebook users currently have no way of blocking being targeted by political advertisers on Facebook, nor indeed any way to generally switch off microtargeted ads which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder: campaign management software developed in the US a decade ago — which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder to link social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information and collation of social media with membership lists without adequate privacy assessments

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

Former Cambridge Analytica director, Brittany Kaiser, dumps more evidence of Brexit’s democratic trainwreck

A UK parliamentary committee has published new evidence fleshing out how membership data was passed from UKIP, a pro-Brexit political party, to Leave.EU, a Brexit supporting campaign active in the 2016 EU referendum — via the disgraced and now defunct data company, Cambridge Analytica.

In evidence sessions last year, during the DCMS committee’s enquiry into online disinformation, it was told by both the former CEO of Cambridge Analytica, and the main financial backer of the Leave.EU campaign, the businessman Arron Banks, that Cambridge Analytica did no work for the Leave.EU campaign.

Documents published today by the committee clearly contradict that narrative — revealing internal correspondence about the use of a UKIP dataset to create voter profiles to carry out “national microtargeting” for Leave.EU.

They also show CA staff raising concerns about the legality of the plan to model UKIP data to enable Leave.EU to identify and target receptive voters with pro-Brexit messaging.

The UK’s 2016 in-out EU referendum saw the voting public narrowly vote to leave — by 52:48.

New evidence from Brittany Kaiser

The evidence, which includes emails between key Cambridge Analytica employees and employees of Leave.EU and UKIP, has been submitted to the DCMS committee by Brittany Kaiser — a former director of CA (who you may just have seen occupying a central role in Netflix’s The Great Hack documentary, which digs into links between the Trump campaign and the Brexit campaign).

“As you can see with the evidence… chargeable work was completed for UKIP and Leave.EU, and I have strong reasons to believe that those datasets and analysed data processed by Cambridge Analytica as part of a Phase 1 payable work engagement… were later used by the Leave.EU campaign without Cambridge Analytica’s further assistance,” writes Kaiser in a covering letter to committee chair, Damian Collins, summarizing the submissions.

Kaiser gave oral evidence to the committee at a public hearing in April last year.

At the time she said CA had been undertaking parallel pitches for Leave.EU and UKIP — as well as for two insurance brands owned by Banks — and had used membership survey data provided by UKIP to build a model for pro-brexit voter personality types, with the intention of it being used “to benefit Leave.EU”.

“We never had a contract with Leave.EU. The contract was with the UK Independence party for the analysis of this data, but it was meant to benefit Leave.EU,” she said then.

The new emails submitted by Kaiser back up her earlier evidence. They also show there was discussion of drawing up a contract between CA, UKIP and Leave.EU in the fall before the referendum vote.

In one email — dated November 10, 2015 — CA’s COO & CFO, Julian Wheatland, writes that: “I had a call with [Leave.EU’s] Andy Wigmore today (Arron’s right hand man) and he confirmed that, even though we haven’t got the contract with the Leave written up, it’s all under control and it will happen just as soon as [UKIP-linked lawyer] Matthew Richardson has finished working out the correct contract structure between UKIP, CA and Leave.”

Another item Kaiser has submitted to the committee is a separate November email from Wigmore, inviting press to a briefing by Leave.EU — entitled “how to win the EU referendum” — an event at which Kaiser gave a pitch on CA’s work. In this email Wigmore describes the firm as “the worlds leading target voter messaging campaigners”.

In another document, CA’s Wheatland is shown in an email thread ahead of that presentation telling Wigmore and Richardson “we need to agree the line in the presentations next week with regards the origin of the data we have analysed”.

“We have generated some interesting findings that we can share in the presentation, but we are certain to be asked where the data came from. Can we declare that we have analysed UKIP membership and survey data?” he then asks.

UKIP’s Richardson replies with a negative, saying: “I would rather we didn’t, to be honest” — adding that he has a meeting with Wigmore to discuss “all of this”, and ending with: “We will have a plan by the end of that lunch, I think”.

In another email, dated November 10, sent to multiple recipients ahead of the presentation, Wheatland writes: “We need to start preparing Brittany’s presentation, which will involve working with some of the insights David [Wilkinson, CA’s chief data scientist] has been able to glean from the UKIP membership data.”

He also asks Wilkinson if he can start to “share insights from the UKIP data” — as well as asking “when are we getting the rest of the data?”. (In a later email, dated November 16, Wilkinson shares plots of modelled data with Kaiser — apparently showing the UKIP data now segmented into four blocks of brexit supporters, which have been named: ‘Eager activist’; ‘Young reformer’; ‘Disaffected Tories’; and ‘Left behinds’.)

In the same email Wheatland instructs Jordanna Zetter, an employee of CA’s parent company SCL, to brief Kaiser on “how to field a variety of questions about CA and our methodology, but also SCL. Rest of the world, SCL Defence etc” — asking her to liaise with other key SCL/CA staff to “produce some ‘line to take’ notes”.

Another document in the bundle appears to show Kaiser’s talking points for the briefing. These make no mention of CA’s intention to carry out “national microtargeting” for Leave.EU — merely saying it will conduct “message testing and audience segmentation”.

“We will be working with the campaign’s pollsters and other vendors to compile all the data we have available to us,” is another of the bland talking points Kaiser was instructed to feed to the press.

“Our team of data scientists will conduct deep-dive analysis that will enable us to understand the electorate better than the rival campaigns,” is one more unenlightening line intended for public consumption.

But while CA was preparing to present the UK media with a sanitized false narrative to gloss over the individual voter targeting work it actually intended to carry out for Leave.EU, behind the scenes concerns were being raised about how “national microtargeting” would conflict with UK data protection law.

Another email thread, started November 19, highlights internal discussion about the legality of the plan — with Wheatland sharing “written advice from Queen’s Counsel on the question of how we can legally process data in the UK, specifically UKIP’s data for Leave.eu and also more generally”. (Although Kaiser has not shared the legal advice itself.)

Wilkinson replies to this email with what he couches as “some concerns” regarding shortfalls in the advice, before going into detail on how CA is intending to further process the modelled UKIP data in order to individually microtarget brexit voters — which he suggests would not be legal under UK data protection law “as the identification of these people would constitute personal data”.

He writes:

I have some concerns about what this document says is our “output” – points 22 to 24. Whilst it includes what we have already done on their data (clustering and initial profiling of their members, and providing this to them as summary information), it does not say anything about using the models of the clusters that we create to extrapolate to new individuals and infer their profile. In fact it says that our output does not identify individuals. Thus it says nothing about our microtargeting approach typical in the US, which I believe was something that we wanted to do with leave eu data to identify how each their supporters should be contacted according to their inferred profile.

For example, we wouldn’t be able to show which members are likely to belong to group A and thus should be messaged in this particular way – as the identification of these people would constitute personal data. We could only say “group A typically looks like this summary profile”.

Wilkinson ends by asking for clarification ahead of a looming meeting with Leave.EU, saying: “It would be really useful to have this clarified early on tomorrow, because I was under the impression it would be a large part of our product offering to our UK clients.” [emphasis ours]

Wheatland follows up with a one-line email asking Richardson to “comment on David’s concern”. Richardson then chips into the discussion, saying there’s “some confusion at our end about where this data is coming from and going to”.

He goes on to summarize the “premises” of the advice he says UKIP was given regarding sharing the data with CA (and afterwards the modelled data with Leave.EU, as he implies is the plan) — writing that his understanding is that CA will return: “Analysed Data to UKIP”, and then: “As the Analysed Dataset contains no personal data UKIP are free to give that Analysed Dataset to anyone else to do with what they wish. UKIP will give the Analysed Dataset to Leave.EU”.

“Could you please confirm that the above is correct?” Richardson goes on. “Do I also understand correctly that CA then intend to use the Analysed Dataset and overlay it on Leave.EU’s legitimately acquired data to infer (interpolate) profiles for each of their supporters so as to better control the messaging that leave.eu sends out to those supporters?

“Is it also correct that CA then intend to use the Analysed Dataset and overlay it on publicly available data to infer (interpolate) which members of the public are most likely to become Leave.EU supporters and what messages would encourage them to do so?

“If these understandings are not correct please let me know and I will give you a call to discuss this.”

About half an hour later another SCL Group employee, Peregrine Willoughby-Brown, joins the discussion to back up Wilkinson’s legal concerns.

“The [Queen’s Counsel] opinion only seems to be an analysis of the legality of the work we have already done for UKIP, rather than any judgement on whether or not we can do microtargeting. As such, whilst it is helpful to know that we haven’t already broken the law, it doesn’t offer clear guidance on how we can proceed with reference to a larger scope of work,” she writes without apparent alarm at the possibility that the entire campaign plan might be illegal under UK privacy law.

“I haven’t read it in sufficient depth to know whether or not it offers indirect insight into how we could proceed with national microtargeting, which it may do,” she adds — ending by saying she and a colleague will discuss it further “later today”.

It’s not clear whether concerns about the legality of the microtargeting plan derailed the signing of any formal contract between Leave.EU and CA — though the documents imply data was shared, if only during the scoping stage of the work.

“The fact remains that chargeable work was done by Cambridge Analytica, at the direction of Leave.EU and UKIP executives, despite a contract never being signed,” writes Kaiser in her cover letter to the committee on this. “Despite having no signed contract, the invoice was still paid, not to Cambridge Analytica but instead paid by Arron Banks to UKIP directly. This payment was then not passed onto Cambridge Analytica for the work completed, as an internal decision in UKIP, as their party was not the beneficiary of the work, but Leave.EU was.”

Kaiser has also shared a presentation of the UKIP survey data, which bears the names of three academics: Harold Clarke (University of Texas at Dallas & University of Essex); Matthew Goodwin (University of Kent); and Paul Whiteley (University of Essex). It details results from the online portion of the membership survey — aka the core dataset CA modelled for targeting Brexit voters with the intention of helping the Leave.EU campaign.

(At a glance, the survey suggests there’s an interesting analysis waiting to be done comparing the core UKIP demographic it reveals with the target demographics chosen for the current blitz of campaign message-testing ads being run on Facebook by the new (pro-Brexit) UK prime minister, Boris Johnson… )

Call for Leave.EU probe to be reopened

Ian Lucas, an MP and member of the DCMS committee, has called for the UK’s Electoral Commission to re-open its investigation into Leave.EU in view of “additional evidence” from Kaiser.

We reached out to the Electoral Commission to ask if it will be revisiting the matter.

An Electoral Commission spokesperson told us: “We are considering this new information in relation to our role regulating campaigner activity at the EU referendum. This relates to the 10 week period leading up to the referendum and to campaigning activity specifically aimed at persuading people to vote for a particular outcome.

“Last July we did impose significant penalties on Leave.EU for committing multiple offences under electoral law at the EU Referendum, including for submitting an incomplete spending return.”

Last year the Electoral Commission also found that the official Vote Leave Brexit campaign broke the law by breaching election campaign spending limits. It channelled money to a Canadian data firm linked to Cambridge Analytica to target political ads on Facebook’s platform, via undeclared joint working with a youth-focused Brexit campaign, BeLeave.

Six months ago the UK’s data watchdog also issued fines against Leave.EU and Banks’ insurance company, Eldon Insurance — having found what it dubbed “serious” breaches of electronic marketing laws, including the campaign unlawfully using insurance customers’ details to send almost 300,000 political marketing messages.

A spokeswoman for the ICO told us it does not have a statement on Kaiser’s latest evidence but added that its enforcement team “will be reviewing the documents released by DCMS”.

The regulator has been running a wider enquiry into use of personal data for social media political campaigning. And last year the information commissioner called for an ethical pause on its use — warning that trust in democracy risked being undermined.

And while Facebook has since applied a thin film of ‘political ads’ transparency to its platform (which researchers continue to warn is not nearly transparent enough to quantify political use of its ads platform), UK election campaign laws have yet to be updated to take account of the digital firehoses now (il)liberally shaping political debate and public opinion at scale.

It’s now more than three years since the UK’s shock vote to leave the European Union — a vote that has so far delivered three years of divisive political chaos, despatching two prime ministers and derailing politics and policymaking as usual.


Many questions remain over a referendum that continues to be dogged by scandals — from breaches of campaign spending; to breaches of data protection and privacy law; and indeed the use of unregulated social media — principally Facebook’s ad platform — as the willing conduit for distributing racist dogwhistle attack ads and political misinformation to whip up anti-EU sentiment among UK voters.

Dark money, dark ads — and the importing of US-style campaign tactics into the UK, circumventing election and data protection laws by the digital platform backdoor.

This is why the DCMS committee’s preliminary report last year called on the government to take “urgent action” to “build resilience against misinformation and disinformation into our democratic system”.

The very same minority government, struggling to hold itself together in the face of Brexit chaos, failed to respond to the committee’s concerns — and has now been replaced by a cadre of the most militant Brexit backers, who are applying their hands to the cheap and plentiful digital campaign levers.

The UK’s new prime minister, Boris Johnson, is demonstrably doubling down on political microtargeting: appointing no less a figure than Dominic Cummings, the campaign director of the official Vote Leave campaign, as a special advisor.

At the same time Johnson’s team is firing out a flotilla of Facebook ads — including ads that appear intended to gather voter sentiment for the purpose of crafting individually targeted political messages for any future election campaign.

So it’s full steam ahead with the Facebook ads…


Yet this ‘democratic reset’ is laid right atop the Brexit trainwreck. It’s coupled to it, in fact.

Cummings worked for the selfsame Vote Leave campaign that the Electoral Commission found illegally funnelled money — via Cambridge Analytica-linked Canadian data firm AggregateIQ — into a blitz of microtargeted Facebook ads intended to sway voter opinion.

Vote Leave also faced questions over its use of a Facebook-run football competition promising a £50M prize-pot to fans in exchange for handing over a bunch of personal data ahead of the referendum, including how they planned to vote. Another data grab wrapped in fancy dress — much like GSR’s thisisyourdigitallife quiz app, which provided the foundational dataset for CA’s psychological voter profiling work on the Trump campaign.

The elevating of Cummings to be special adviser to the UK PM represents the polar opposite of an ‘ethical pause’ in political microtargeting.

Make no mistake, this is the Brexit campaign playbook — back in operation, now with full-bore pedal to the metal. (With his hands now on the public purse, Johnson has pledged to spend £100M on marketing to sell a ‘no deal Brexit’ to the UK public.)

Kaiser’s latest evidence may not contain a smoking bomb big enough to blast the issue of data-driven and tech giant-enabled voter manipulation into a mainstream consciousness, where it might have the chance to reset the political conscience of a nation — but it puts more flesh on the bones of how the self-styled ‘bad boys of Brexit’ pulled off their shock win.

In The Great Hack the Brexit campaign is couched as the ‘petri dish’ for the data-fuelled targeting deployed by the firm in the 2016 US presidential election — which delivered a similarly shock victory for Trump.

If that’s so, these latest pieces of evidence imply a suggestively close link between CA’s experimental modelling of UKIP supporter data, as it shifted gears to apply its dark arts closer to home than usual, and the models it subsequently built off US citizens’ data sucked out of Facebook. And that in turn goes some way to explaining the cosiness between Trump and UKIP founder Nigel Farage…

 

Kaiser ends her letter to DCMS writing: “Given the enormity of the implications of earlier inaccurate conclusions by different investigations, I would hope that Parliament reconsiders the evidence submitted here in good faith. I hope that these ten documents are helpful to your research and furthering the transparency and truth that your investigations are seeking, and that the people of the UK and EU deserve”.

Banks and Wigmore have responded to the publication in their usual style, with a pair of dismissive tweets — questioning Kaiser’s motives for wanting the data to be published and throwing shade on how the evidence was obtained in the first place.

The Great Hack tells us data corrupts 

This week professor David Carroll — whose dogged search for answers to how his personal data was misused plays a focal role in The Great Hack, Netflix’s documentary tackling the Facebook-Cambridge Analytica data scandal — quipped that perhaps a follow-up would prove more punitive for the company than the $5BN FTC fine announced the same day.

The documentary — which we previewed ahead of its general release Wednesday — does an impressive job of articulating for a mainstream audience the risks for individuals and society of unregulated surveillance capitalism, despite the complexities involved in the invisible data ‘supply chain’ that feeds the beast. Most obviously by trying to make these digital social emissions visible to the viewer — as mushrooming pop-ups overlaid on shots of smartphone users going about their everyday business, largely unaware of the pervasive tracking it enables.

Facebook is unlikely to be a fan of the treatment. In its own crisis PR around the Cambridge Analytica scandal it has sought to achieve the opposite effect; making it harder to join the data-dots embedded in its ad platform by seeking to deflect blame, bury key details and bore reporters and policymakers to death with reams of irrelevant detail — in the hope they might shift their attention elsewhere.

Data protection itself isn’t a topic that naturally lends itself to glamorous thriller treatment, of course. No amount of slick editing can transform the close and careful scrutiny of political committees into seat-of-the-pants viewing for anyone not already intimately familiar with the intricacies being picked over. And yet it’s exactly such thoughtful attention to detail that democracy demands. Without it we are all, to put it proverbially, screwed.

The Great Hack shows what happens when vital detail and context are cheaply ripped away at scale, via socially sticky content delivery platforms run by tech giants that never bothered to sweat the ethical detail of how their ad targeting tools could be repurposed by malign interests to sow social discord and/or manipulate voter opinion en masse.

Or indeed used by an official candidate for high office in a democratic society that lacks legal safeguards against data misuse.

But while the documentary packs in a lot over an almost two-hour span, retelling the story of Cambridge Analytica’s role in the 2016 Trump presidential election campaign; exploring links to the UK’s Brexit leave vote; and zooming out to show a little of the wider impact of social media disinformation campaigns on various elections around the world, the viewer is left with plenty of questions. Not least the ones Carroll repeats towards the end of the film: What information had Cambridge Analytica amassed on him? Where did they get it from? What did they use it for? He apparently resigns himself to never knowing; the disgraced data firm chose to declare bankruptcy and fold back into its shell rather than hand over the stolen goods and its algorithmic secrets.

There’s no doubt over the other question Carroll poses early on in the film — could he delete his information? The lack of control over what’s done with people’s information is the central point around which the documentary pivots. The key warning being there’s no magical cleansing fire that can purge every digitally copied personal thing that’s put out there.

And while Carroll is shown able to tap into European data rights — purely by virtue of Cambridge Analytica having processed his data in the UK — to try and get answers, the lack of control holds true in the US. Here, the absence of a legal framework to protect privacy is shown as the catalyzing fuel for the ‘great hack’ — and also shown enabling the ongoing data free-for-all that underpins almost all ad-supported, Internet-delivered services. tl;dr: Your phone doesn’t need to listen to you if it’s tracking everything else you do with it.

The film’s other obsession is the breathtaking scale of the thing. One focal moment is when we hear another central character, Cambridge Analytica’s Brittany Kaiser, dispassionately recounting how data surpassed oil in value last year — as if that’s all the explanation needed for the terrible behavior on show.

“Data’s the most valuable asset on Earth,” she monotones. The staggering value of digital stuff is thus fingered as an irresistible, manipulative force also sucking in bright minds to work at data firms like Cambridge Analytica — even at the expense of their own claimed political allegiances, in the conflicted case of Kaiser.

If knowledge is power and power corrupts, the construction can be refined further to ‘data corrupts’, is the suggestion.

The filmmakers linger long on Kaiser, which can seem to humanize her — as they show what appear to be vulnerable or intimate moments. Yet they do this without ever entirely getting under her skin or allowing her role in the scandal to be fully resolved.

She’s often allowed to tell her narrative from behind dark glasses and a hat — which has the opposite effect on how we’re invited to perceive her. Questions about her motivations are never far away. It’s a human mystery linked to Cambridge Analytica’s money-minting algorithmic blackbox.

Nor is there any attempt by the filmmakers to mine Kaiser for answers themselves. It’s a documentary that spotlights mysteries and leaves questions hanging up there intact. From a journalist’s perspective that’s an inevitable frustration — even as the story itself is much bigger than any one of its constituent parts.

It’s hard to imagine how Netflix could commission a straight-up sequel to The Great Hack, given its central framing combines Carroll’s data quest with key moments of the Cambridge Analytica scandal. Large chunks of the film consist of capturing scrutiny and reactions to the story as it unfolded in real time.

But in displaying the ruthlessly transactional underpinnings of social platforms where the world’s smartphone users go to kill time, unwittingly trading away their agency in the process, Netflix has really just begun to open up the defining story of our time.

Facebook ignored staff warnings about “sketchy” Cambridge Analytica in September 2015

Facebook employees tried to alert the company to the activity of Cambridge Analytica as early as September 2015, per the SEC’s complaint against the company, which was published yesterday.

This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a UK parliamentary committee last year.

Facebook only finally kicked the controversial data firm off its ad platform in March 2018, after investigative journalists had blown the lid off the story.

In a section on “red flags” raised over scandal-hit Cambridge Analytica’s potential misuse of Facebook user data, the SEC complaint reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply”.


Amid a flurry of major headlines for the company yesterday, including a $5BN FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100M to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.

This tidbit was slipped out towards the end of a lengthy blog post by Facebook general counsel Colin Stretch which focused on responding to the FTC order with promises to turn over a new leaf on privacy.

CEO Mark Zuckerberg also made no mention of the SEC settlement in his own Facebook note about what he dubbed a “historic fine”.

As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘get out of jail’ card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.

‘Historic fine’ is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.

And by nesting its disclosure of the SEC settlement inside effusive privacy-washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to deflect attention from some really awkward details in its narrative about the Cambridge Analytica scandal — details which highlight ongoing inconsistencies and contradictions, to put it politely.

The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.

Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service”, and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015”, adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”

Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — which, as Facebook has sought to paint it, is the ‘rogue’ developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.

This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.

Fast forward to March 2018, when press reports revealing the scale and intent of the Cambridge Analytica data heist blew the story up into a global data scandal for Facebook, wiping billions off its share price.

The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is why it employed GSR co-founder Chancellor in the first place.

Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume, because neither Facebook nor Chancellor has explained how he came to be hired.)

At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.

Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).


What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know because again Facebook isn’t saying anything.

More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it an “historic fine”.

We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”

We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns about the company GSR was set up to sell Facebook data to; and how, of all the professionals it could have hired, Facebook identified Chancellor in the first place — and will update this post with any response. (A search for ‘quantitative researcher’ on LinkedIn’s platform returns more than 177,000 results for professionals using the descriptor in their profiles.)

Earlier this month a UK parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.

The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging enquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.

A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.

Under questioning in Congress last year, Facebook founder Zuckerberg also personally told congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.

Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.

There are more awkward details in the SEC complaint that Facebook seems keen to bury too — including that as part of a signed settlement agreement, GSR’s other co-founder Aleksandr Kogan told it in June 2016 that he had, in addition to transferring modelled personality profile data on 30M Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.

This US Facebook user data included personal information such as names, location, birthdays, gender and a sub-set of page likes.

Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…


The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.

The UK’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook users’ data, though it has not made public the names of the staff in question.

The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company narrative has implied up to now. Although the exact timeline of when all the staffers knew is not clear from the document — with the discussed period being September 2015 to April 2017.

Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data — by April 2017 at the latest — the company’s leaders had put no reporting structures in place for that information to reach the staff responsible for its public disclosures.

“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.

The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that the company was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.

“Despite Facebook’s suspicions about Cambridge and the red flags raised after the Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.

‘The Great Hack’: Netflix doc unpacks Cambridge Analytica, Trump, Brexit and democracy’s death

It’s perhaps not for nothing that The Great Hack – the new Netflix documentary about the connections between Cambridge Analytica, the US election and Brexit, out on July 23 – opens with a scene from Burning Man. There, Brittany Kaiser, a former employee of Cambridge Analytica, scrawls the name of the company onto a strut of ‘the temple’ that will eventually get burned in that fiery annual ritual. It’s an apt opening.

There are probably many of us who’d wish quite a lot of the last couple of years could be thrown into that temple fire, but this documentary is the first I’ve seen to expertly unpick the real-world dumpster fire in which social media, dark advertising and global politics have become inextricably, and often fatally, combined.

The documentary is also the first that you could plausibly recommend to those of your relatives and friends who don’t work in tech, as it explains how social media – specifically Facebook – is now manipulating our lives and society, whether we like it or not.

As New York Professor David Carroll puts it at the beginning, Facebook gives “any buyer direct access to my emotional pulse” – and that included political campaigns during the Brexit referendum and the Trump election. Privacy campaigner Carroll is pivotal to the film’s story of how our data is being manipulated and essentially kept from us by Facebook.

The UK’s referendum decision to leave the European Union, in fact, became “the petri dish” for a Cambridge Analytica experiment, says Guardian journalist Carole Cadwalladr. She broke the story of how the political consultancy, led by Eton-educated CEO Alexander Nix, applied techniques normally used by ‘psyops’ operatives in Afghanistan to the democratic processes of the US, the UK and many other countries, over a chilling 20+ year history. Watching this film, you start to wonder whether history has been warped towards a sickening dystopia.


The petri dish of Brexit worked. Millions of adverts, the documentary explains, targeted individuals, exploiting fear and anger to switch them from ‘persuadables’, as CA called them, into passionate advocates: first for Brexit in the UK, and later for Trump.

Switching to the US, the filmmakers show how CA worked directly with Trump’s “Project Alamo” campaign, spending a million dollars a day on Facebook ads ahead of the 2016 election.

The film expertly explains the timeline of how CA first worked on Ted Cruz’s campaign, nearly propelling that lackluster candidate into first place in the Republican nominations. It was then that the Trump campaign picked up on CA’s military-like operation.

After loading up the psychographic survey information CA had obtained from Aleksandr Kogan, the Cambridge University academic who orchestrated the harvesting of Facebook data, the world had become their oyster. Or, perhaps more accurately, their oyster farm.

Back in London, Cadwalladr notices triumphant Brexit campaigners fraternizing with Trump and starts digging. There is a thread connecting them to Breitbart owner Steve Bannon. There is a thread connecting them to Cambridge Analytica. She tugs on those threads and, like that iconic scene in ‘The Hurt Locker’ where the threads pull up unexploded mines, she starts to realize that Cambridge Analytica links them all. She needs a source though. That came in the form of former employee Chris Wylie, a brave young man who was able to unravel many of the CA threads.

But the film’s attention is often drawn back to Kaiser, who had worked first on US political campaigns and then on Brexit for CA. She had been drawn to the company by smooth-talking CEO Nix, who begged: “Let me get you drunk and steal all of your secrets.”

But was she a real whistleblower? Or was she trying to cover her tracks? How could someone who’d worked on the Obama campaign switch to Trump? Was she a victim of Cambridge Analytica, or one of its villains?

British political analyst Paul Hilder manages to get her to come to the UK to testify before a parliamentary inquiry. There is high drama as her part in the story unfolds.

Kaiser appears in various guises, from idealistically naive to stupid, from knowing to manipulative. It’s almost impossible to know which. But hearing her explanation of why she made the choices she did… well, it’s an eye-opener.


Both she and Wylie have complex stories in this tale, where not everything is as it seems, reflecting our new world, where truth is increasingly hard to determine.

Other characters come and go in this story. Zuckerberg makes an appearance in Congress, and we learn how casually Facebook treated its complicity in these political earthquakes. Although if you’re reading TechCrunch, then you will probably know at least part of this story.

The film was created for Netflix by Jehane Noujaim and Karim Amer, the Egyptian-American filmmakers behind “The Square”, about the Egyptian revolution of 2011. To them, the way Cambridge Analytica applied its methods to online campaigning was just as much a revolution as Egyptians toppling a dictator from Cairo’s iconic Tahrir Square.

For them, the huge irony is that “psyops”, the psychological operations used on Muslim populations in Iraq and Afghanistan after the 9/11 terrorist attacks, ended up being used to influence Western elections.

Cadwalladr stands head and shoulders above all as a bastion of dogged journalism, even as she is attacked from all quarters, and still is to this day.

What you won’t find out from this film is what happens next. For many, questions remain on the table: What will happen now that Facebook is entering cryptocurrency? Will that mean it could be used for dark election campaigning? Will people be paid for their votes next time, not just in Likes? Kaiser has a bitcoin logo on the back of her phone. Is that connected? The film doesn’t comment.

But it certainly unfolds like a slow-motion car crash, where democracy is the car and you’re inside it.

Facebook accused of contradicting itself on claims about platform policy violations

Prepare your best * unsurprised face *: Facebook is being accused of contradicting itself in separate testimonies made on both sides of the Atlantic.

The chair of a UK parliamentary committee which spent the lion’s share of last year investigating online disinformation — grilling multiple Facebook execs as part of an inquiry that coincided with the global spotlight cast on Facebook by the Cambridge Analytica data misuse scandal — has penned another letter to the company, this time asking which versions of the claims it has made regarding policy-violating access to data by third party apps on its platform are actually true.

In the letter, which is addressed to Facebook global spin chief and former UK deputy prime minister Nick Clegg, Damian Collins cites paragraph 43 of the Washington DC Attorney General’s complaint against the company — which asserts that the company “knew of other third party applications [i.e. in addition to the quiz app used to siphon data off to Cambridge Analytica] that similarly violated its Platform Policy through selling or improperly using consumer data”, and also failed to take “reasonable measures” to enforce its policy.

The Washington, D.C. Attorney General, Karl Racine, is suing Facebook for failing to protect user data — per allegations filed last December.

Collins’ letter notes Facebook’s denial of the allegations in paragraph 43 — before raising apparently contradictory evidence the company gave the committee last year on multiple occasions, such as the testimony of its CTO Mike Schroepfer, who confirmed it is reviewing whether Palantir improperly used Facebook data, among “lots” of other apps of concern; and testimony by Facebook’s Richard Allan to an international grand committee last November, when the VP of EMEA public policy claimed the company had “taken action against a number of applications that failed to meet our policies”.

The letter also cites evidence contained in documents the DCMS committee seized from Six4Three, pertaining to a separate lawsuit against Facebook, which Collins asserts demonstrate “the lax treatment of abusive apps and their developers by Facebook”.

He also writes that these documents show Facebook had special agreements with a number of app developers — that allowed some preinstalled apps to “circumvent users’ privacy settings or platform settings, and to access friends’ information”, as well as noting that Facebook whitelisted some 5,200 apps “according to our evidence”.

“The evidence provided by representatives of Facebook to this Select committee and the International Grand Committee as well as the Six4Three files directly contradict with Facebook’s answer to Paragraph 43 of the complaint filed against Facebook by the Washington, D.C. Attorney General,” he writes.

“If the version of events presented in the answer to the lawsuit is correct, this means the evidence given to this Committee and the International Grand Committee was inaccurate.”

Collins goes on to ask Facebook to “confirm the truthfulness” of the evidence given by its reps last year, and to provide the list of applications removed from its platform in response to policy violations — a list Allan promised the committee in November but has so far failed to deliver.

We’ve also reached out to Facebook to ask which of its competing versions of events it is standing by at this time.

Facebook reportedly gets a $5 billion slap on the wrist from the FTC

The U.S. Federal Trade Commission has reportedly agreed to end its latest probe into Facebook’s privacy problems with a $5 billion payout.

According to The Wall Street Journal, the 3-2, party-line vote by FTC commissioners was carried by the Republican majority and will be moved to the Justice Department’s civil division to be finalized.

A $5 billion payout seems like a significant sum, but Facebook had already set aside $3 billion to cover the cost of the settlement and the company could likely make up the figure in less than a quarter of revenue (the company’s revenue for the last fiscal quarter was roughly $15 billion). Indeed, Facebook said in April that it expected to pay up to $5 billion to end the government’s probe.

The settlement will also include government restrictions on how Facebook treats user privacy, according to the Journal.

We have reached out to the FTC and Facebook for comment and will update this story when we hear back.

Ultimately, the partisan divide which held up the settlement broke down with Republican members of the commission overriding Democratic concerns for greater oversight of the social media giant.

Lawmakers have been calling consistently for greater regulatory oversight of Facebook — and even a legislative push to break up the company — since the revelation of the company’s mishandling of the private data of millions of Facebook users during the run up to the 2016 presidential election, which wound up being collected improperly by Cambridge Analytica.

Specifically the FTC was examining whether the data breach violated a 2012 consent decree which saw Facebook committing to engage in better privacy protection of user data.

Facebook’s woes didn’t end with Cambridge Analytica. The company has since been on the receiving end of a number of exposés around the use and abuse of its customers’ information, even as calls to break up the big tech companies have only grown louder.

The settlement could also be a way for the company to buy its way out of more strict oversight as it faces investigations into its potentially anti-competitive business practices and inquiries into its launch of a new cryptocurrency — Libra — which is being touted as an electronic currency for Facebook users largely divorced from governmental monetary policy.

Potential sanctions proposed by lawmakers for the FTC settlement were reported to include elevating privacy oversight to the company’s board of directors; the deletion of tracking data; restricting certain information collection; limiting ad targeting; and restricting the flow of user data among different Facebook business units.

Italy stings Facebook with $1.1M fine for Cambridge Analytica data misuse

Italy’s data protection watchdog has issued Facebook with a €1 million (~$1.1M) fine for violations of local privacy law attached to the Cambridge Analytica data misuse scandal.

Last year it emerged that up to 87 million Facebook users had had their data siphoned out of the social media giant’s platform by an app developer working for the controversial (and now defunct) political data company, Cambridge Analytica.

The offences in question occurred prior to Europe’s tough new data protection framework, GDPR, coming into force — hence the relatively small size of the fine in this case, which has been calculated under Italy’s prior data protection regime. (Whereas fines under GDPR can scale as high as 4% of a company’s annual global turnover.)

We’ve reached out to Facebook for comment.

Last year the UK’s DPA similarly issued Facebook with a £500k penalty for the Cambridge Analytica breach, although Facebook is appealing.

The Italian regulator says 57 Italian Facebook users downloaded Dr Aleksandr Kogan’s Thisisyourdigitallife quiz app — the vehicle used to scoop up Facebook user data en masse — with a further 214,077 Italian users also having their personal information processed without their consent as a result of how the app could access data on each user’s Facebook friends.

In an earlier intervention in March, the Italian regulator challenged Facebook over the misuse of the data — and the company opted to pay a reduced amount of €52,000 in the hopes of settling the matter.

However the Italian DPA has decided that the scale of the violation of personal data and consent disqualifies the case for a reduced payment — so it has now issued Facebook with a €1M fine.

“The sum takes into account, in addition to the size of the database, also the economic conditions of Facebook and the number of global and Italian users of the company,” it writes in a press release on its website [translated by Google Translate].

At the time of writing its full decision on the case was not available.

Late last year the Italian regulator fined Facebook €10M for misleading users over its sign in practices.

And in 2017 it slapped the company with a €3M penalty for a controversial decision to begin helping itself to WhatsApp users’ data — despite the latter’s prior claims that user data would never be shared with Facebook.

Going forward, where Facebook’s use (and potential misuse) of Europeans’ data is concerned, all eyes are on the Irish Data Protection Commission; aka its lead regulator in the region on account of the location of Facebook’s international HQ.

The Irish DPC has a full suite of open investigations into Facebook and Facebook-owned companies — covering major issues such as security breaches and questions over the legal basis it claims to process people’s data, among a number of other big tech related probes.

The watchdog has suggested decisions on some of this tech giant-related case-load could land this summer.