Tag Archives: Damian Collins

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and their friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Facebook accused of contradicting itself on claims about platform policy violations

Prepare your best * unsurprised face *: Facebook is being accused of contradicting itself in separate testimonies made on both sides of the Atlantic.

The chair of a UK parliamentary committee which spent the lion’s share of last year investigating online disinformation, going on to grill multiple Facebook execs as part of an enquiry that coincided with a global spotlight being cast on Facebook as a result of the Cambridge Analytica data misuse scandal, has penned another letter to the company — this time asking which versions of claims it has made regarding policy-violating access to data by third party apps on its platform are actually true.

In the letter, which is addressed to Facebook global spin chief and former UK deputy prime minister Nick Clegg, Damian Collins cites paragraph 43 of the Washington DC Attorney General’s complaint against the company — which asserts that the company “knew of other third party applications [i.e. in addition to the quiz app used to siphon data off to Cambridge Analytica] that similarly violated its Platform Policy through selling or improperly using consumer data”, and also failed to take “reasonable measures” to enforce its policy.

The Washington, D.C. Attorney General, Karl Racine, is suing Facebook for failing to protect user data — per allegations filed last December.

Collins’ letter notes Facebook’s denial of the allegations in paragraph 43 — before raising apparently contradictory evidence the company gave the committee last year on multiple occasions, such as the testimony of its CTO Mike Schroepfer, who confirmed it is reviewing whether Palantir improperly used Facebook data, among “lots” of other apps of concern; and testimony by Facebook’s Richard Allan to an international grand committee last November when the VP of EMEA public policy claimed the company has “taken action against a number of applications that failed to meet our policies”.

The letter also cites evidence contained in documents the DCMS committee seized from Six4Three, pertaining to a separate lawsuit against Facebook, which Collins asserts demonstrate “the lax treatment of abusive apps and their developments by Facebook”.

He also writes that these documents show Facebook had special agreements with a number of app developers — that allowed some preinstalled apps to “circumvent users’ privacy settings or platform settings, and to access friends’ information”, as well as noting that Facebook whitelisted some 5,200 apps “according to our evidence”.

“The evidence provided by representatives of Facebook to this Select committee and the International Grand Committee as well as the Six4Three files directly contradict with Facebook’s answer to Paragraph 43 of the complaint filed against Facebook by the Washington, D.C. Attorney General,” he writes.

“If the version of events presented in the answer to the lawsuit is correct, this means the evidence given to this Committee and the International Grand Committee was inaccurate.”

Collins goes on to ask Facebook to “confirm the truthfulness” of the evidence given by its reps last year, and to provide the list of applications removed from its platform in response to policy violations — which Allan promised in November to provide to the committee but has so far failed to deliver.

We’ve also reached out to Facebook to ask which of the versions of events it’s claimed are true is the one it’s standing by at this time.

Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Albeit, it would be pretty awkward if he came on stage claiming Facebook is flawless and humanity needs to take a really long hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder, Mark Zuckerberg. Though it raises way more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; or how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board can reflect the full spectrum diversity of humanity that’s now using Facebook’s 2BN+ user global platform?

None of these questions were raised let alone addressed in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate, which, under this board that Facebook is going to set up, this will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations. Where we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes too stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” if there’s not enough of a data free-for-all, and the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability —  making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which it’s here, via Clegg (a former European politician), leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s senate pitch last year when the Facebook founder urged a trade off of privacy rights to allow Western companies to process people’s facial biometrics to not fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why (and he avoided this point entirely) Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies.

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t even use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its enquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the EU referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used/misused; ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because the key factors — like the use of tools like groups on Facebook, the use of inauthentic fake accounts boosting Russian content, there have been studies showing that’s still going on and was going on during the [US] parliamentary elections, there’s been no proper audit done during the referendum, and in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016, did this happen during the referendum as well, they said to us well we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office. The paper can be read in full here (PDF).

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist and child sexual exploitation and abuse (which will be covered by further stringent requirements under the plan).

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

However, the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services, and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although the newspaper reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover, speaking during an interview on Sky News…

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today DCMS chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online by being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Although such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, and/or place an impossible burden on smaller firms with less resource to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group, the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content, that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry trade association techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, ending July 1, after which it says it will set out the action it will take in developing its final proposals for legislation.

“Following the publication of the Government Response to the consultation, we will bring forward legislation when parliamentary time allows,” it adds.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any legislative gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own — at least, for now.

The House of Lords committee was another parliamentary body that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”.

And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle. But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consent.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company working first for the Ted Cruz and later the Donald Trump presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath.

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to argue that Facebook’s motivation for sealing the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

Facebook refuses to disclose ‘chuck Chequers’ Brexit advertiser to UK parliament

Facebook has refused to provide the British parliament with the names of individuals behind a shadowy network backing an extreme ‘no deal’ Brexit outcome over a government-negotiated compromise.

Since the June 2016 EU referendum vote, politics in the UK has been consumed by the question of how to implement a close vote to leave.

And last year the UK’s Electoral Commission confirmed the vote was tarnished by an influx of dark money ploughed into social media ads — with platforms such as Facebook offering an unregulated route for circumventing democratic norms.

Nor have the Brexit ads stopped since the referendum.

An unknown group, called ‘Mainstream Network’, ran a series of political ads on Facebook’s platform last year which targeted voters in key leave voting constituencies, urging them to put pressure on their member of parliament not to support the prime minister’s approach of seeking a withdrawal deal from the EU.

Such a deal would allow the UK to leave the bloc more smoothly, with more contingencies in place to cover the exit. But legally, if no deal (and/or no extension to Article 50) is agreed before the end of this month the UK could just ‘crash out’ of the EU without any such safety net.

Unknown entities have been using Facebook’s platform to push for exactly that to happen — by paying Facebook to target leave voters with anti-Brexit-deal ads (which included the line “chuck Chequers”, a reference to the prime minister’s Brexit deal).

Last year research commissioned by a UK parliamentary committee as part of an enquiry into political advertising online spotlit the existence of Mainstream Network, estimating the unknown Facebook advertiser had spent ~£257,000 in just over 10 months.

Its Facebook pages were said to have reached between 10M and 11M people on the platform.

Mainstream Network also operated a ‘news’ website, where whoever was behind it curated pro-Brexit content and advocated for no deal being “better than partition or permanent vassalage”.

Last November Facebook policy VP Richard Allan faced questions from the DCMS committee about who is behind the ‘Mainstream Network’ Brexit ads running on Facebook.

DCMS chair Damian Collins asked Facebook to provide details of the accounts behind Mainstream Network or, if it would not, to give a reason for not disclosing the information.

Yesterday the committee published Facebook’s letter refusing to provide the information. The company did say it has passed some information to the UK’s data watchdog, the Information Commissioner’s Office (ICO).

“The Committee asked about an advertiser on our platform called Mainstream Network. As I noted at the time, in the event that Facebook receives a request for personal data from an entity which can legally require such information, Facebook will provide information in line with normal procedures. You will appreciate that it would be inappropriate to provide personal data of our users to any third party absent a lawful basis for such disclosure,” writes Allan.

He goes on to say that Facebook has provided “information” about Mainstream Network to the UK’s data watchdog “on a private and confidential basis”.

The ICO is investigating the advertiser as part of a wider probe into the use of social media for political campaigning.

“It is now a matter for ICO (acting in accordance with its statutory duties) to determine what they will do with the data provided to them,” Allan adds.

We reached out to the ICO to ask whether it intends to disclose the names.

“We received a response from Facebook to an Information Notice issued by the ICO. The information is under review and forms part of our ongoing investigation into the use of data analytics for political purposes,” a spokeswoman told us.

Last summer information commissioner Elizabeth Denham called for an ethical pause of the use of social media tools for political ads — saying she was concerned about the lack of transparency and the knock-on impact that could have on democracy.

In recent years Facebook has been busy making loud crisis PR noises about how it’s ‘increasing the transparency’ around advertisers on its platform — ever since the 2016 US presidential election disinformation scandal blew up, and it emerged quite how many Roubles Facebook had been accepting to allow divisive Kremlin ads to target US voters.

The company launched ‘political ad transparency’ measures in the U.S. initially, including a requirement for election advertisers to verify they are US-based.

It has also since rolled out some similar measures in some international markets — including in the U.K., where it introduced a system for disclosing political ads last fall. (Though it quickly had to rework the system after it was shown to be trivially easy to spoof.)

Allan appears to intend to reference the latter measures in the concluding portion of his letter, albeit he gets the date wrong by a full year.

“I further note that as of 29 November 2019 [sic], we have required political advertisers to consent to the publication of additional information in the form of a disclaimer that they create when they go through the authorisation process. All political advertisements along with these disclaimers are made available to the public in our Ad Library,” he writes, without making it clear why that should mean Facebook can’t disclose the identities behind Mainstream Network.

The company has claimed to be working towards having a “global system” for political ad transparency.

But the reality on the ground remains highly variable and piecemeal — very far from full transparency.

Nor will Facebook even come clean with the public when specifically asked to do so by policymakers, as its refusal to the DCMS shows.

Responding to the company’s letter in a series of tweets, Collins writes: “I believe there is a strong public interest in understanding who is behind the Mainstream Network, and that this information should be published. People have a right to know how is targeting them with political advertisements and why.”

It remains to be seen whether the ICO will release the information Facebook has provided it.

The watchdog has also so far declined to disclose the identities of several senior Facebook executives who knew about another political ad scandal — the Cambridge Analytica data breach — earlier than the company had publicly claimed it knew.

The DCMS committee published its final report into online disinformation last month, setting out a laundry list of recommendations for cleaning up political campaigning in the digital era.

The report also includes the tidbit about the trio of senior Facebook managers who knew about the Cambridge Analytica breach sooner than Zuckerberg himself (but apparently did not think to share the incident with the CEO), as well as singling out Facebook for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

The committee’s report also calls for privacy and antitrust regulators to investigate the company.

One more thing re: “privacy concerns” raised by the DCMS fake news report…

A meaty first report by the UK parliamentary committee that’s been running an inquiry into online disinformation since fall 2017 — including scrutinizing how people’s personal information was harvested from social media services like Facebook and used for voter profiling and the targeting of campaign ads — contains one curious omission. (Its chair, Damian Collins, is a member of the UK’s governing Conservative Party.)

Among the many issues the report raises are privacy concerns related to a campaign app developed by a company called uCampaign — which, much like the scandal-hit (and now seemingly defunct) Cambridge Analytica, worked for both the Ted Cruz for President and the Donald J Trump for President campaigns. In uCampaign’s case, it developed apps for campaigns to distribute to supporters to gamify digital campaigning — a tool that makes it easy for them to ‘socialize’ (i.e. share with contacts) campaign messaging and materials.

The committee makes a passing reference to uCampaign in a section of its report which deals with “data targeting” and the Cambridge Analytica Facebook scandal, specifically — where it writes [emphasis ours]:

There have been data privacy concerns raised about another campaign tool used, but not developed, by AIQ [Aggregate IQ: Aka, a Canadian data firm which worked for Cambridge Analytica and which remains under investigation by privacy watchdogs in the UK, Canada and British Columbia]. A company called uCampaign has a mobile App that employs gamification strategy to political campaigns. Users can win points for campaign activity, like sending text messages and emails to their contacts and friends. The App was used in Donald Trump’s presidential campaign, and by Vote Leave during the Brexit Referendum.

The developer of the uCampaign app, Vladyslav Seryakov, is an Eastern Ukrainian military veteran who trained in computer programming at two elite Soviet universities in the late 1980s. The main investor in uCampaign is the American hedge fund magnate Sean Fieler, who is a close associate of the billionaire backer of SCL and Cambridge Analytica, Robert Mercer. An article published by Business Insider on 7 November 2016 states: “If users download the App and agree to share their address books, including phone numbers and emails, the App then shoots the data [to] a third-party vendor, which looks for matches to existing voter file information that could give clues as to what may motivate that specific voter. Thomas Peters, whose company uCampaign created Trump’s app, said the App is “going absolutely granular”, and will—with permission—send different A/B tested messages to users’ contacts based on existing information.”

What’s curious is that Collins’ Conservative Party also has a campaign app built by — you guessed it! — uCampaign, which the party launched in September 2017.

While there is nothing on the iOS and Android app store listings for the Conservative Campaigner app to identify uCampaign as its developer, if you go directly to uCampaign’s website the company lists the UK Conservative Party as one of its clients — alongside other rightwing political parties and organizations such as the (pro-gun) National Rifle Association; the (anti-abortion) SBA List; and indeed the UK’s Vote Leave (Brexit) campaign, (the latter) as the DCMS report highlights.

uCampaign’s involvement as the developer of the Conservative Campaigner app was also confirmed to us (in June) by the (now former) deputy director & head of digital strategy for The Conservative Party, Anthony Hind, who — according to his LinkedIn profile — also headed up the party’s online marketing, between mid 2015 and, well, the middle of this month.

But while, in his initial response to us, Hind readily confirmed he was personally involved in the procurement of uCampaign as the developer of the Conservative Campaigner app, he failed to respond to any of our subsequent questions — including when we raised specific concerns about the privacy policy that the app had been using prior to May 23 (just before the EU’s new GDPR data protection framework came into force on May 25 — a time when many apps updated their privacy policies as a compliance precaution related to the new data protection standard).

Since May 23 the privacy policy for the Conservative Campaigner app has pointed to the Conservative Party’s own privacy policy. However prior to May 23 the privacy policy was a literal (branded) copy-paste of uCampaign’s own privacy policy. (We know because we were tipped to it by a source — and verified this for ourselves.)

Here’s a screengrab of the exchange we had with Hind over LinkedIn — including his sole reply:

What looks rather awkward for the Conservative Party — and indeed for Collins, as DCMS committee chair, given the valid “privacy concerns” his report has raised around the use (and misuse/abuse) of data for political targeting — is that uCampaign’s privacy policy has, shall we say, a verrrrry ‘liberal’ attitude to sharing the personal data of app users (and indeed of any of their contacts it would have been able to harvest from their devices).

Here’s a taster of the data-sharing permissions this U.S. company affords itself over its clients’ users’ data [emphasis ours] — according to its own privacy policy:

CAMPAIGNS YOU SUPPORT AND ALIGNED ORGANIZATIONS

We will share your Personal Information with third party campaigns selected by you via the Platform. In addition, we may share your Personal Information with other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us.

UCAMPAIGN FRIENDS

We may share your Personal Information with other users of the Platform, for example if they connect their address book to our services, or if they invite you to use our services via the Platform.

BUSINESS TRANSFERS

We may share your Personal Information with other entities affiliated with us for internal reasons, primarily for business and operational purposes. uCampaign, or any of its assets, including the Platform, may be sold, or other transactions may occur in which your Personal Information is one of the business assets of the transaction. In such case, your Personal Information may be transferred.

To spell it out, the Conservative Party paid for a campaign app that could, according to the privacy policy it had in place prior to May 23, have shared supporters’ personal data with organizations that uCampaign’s owners — who the DCMS committee states have close links to “the billionaire backer of SCL and Cambridge Analytica, Robert Mercer” — view as ideologically affiliated with their objectives, whatever those entities might be.

Funnily enough, the Conservative Party appears to have tried to scrub out some of its own public links to uCampaign — such as changing the developer website link on the app listing page for the Conservative Campaigner app to the Conservative Party’s own website (whereas before it linked through to uCampaign’s own website).

As the veteran UK satirical magazine Private Eye might well say — just fancy that! 

One of the listed “features” of the Conservative Campaigner app urges Tory supporters to: “Invite your friends to join you on the app!”. If any did, their friends’ data would have been sucked up by uCampaign too, to further causes of its choosing.

The version of the Campaigner app listed on Google Play is reported to have 1,000+ installs (iOS does not offer any download ranges for apps) — which, while not in itself a very large number, could represent far larger amounts of personal data should users’ contacts have been synced with the app, where they would have been harvested by uCampaign.

We did flag the link between uCampaign and the Conservative Campaigner app directly to the DCMS committee’s press office — ahead of the publication of its report, on June 12, when we wrote:

The matter of concern here is that the Conservative party could itself be an unwitting source of targeting data for rival political organizations, via an app that appears to offer almost no limits on what can be done with personal data.
Prior to the last update of the Conservative Campaigner app the privacy policy was simply the boilerplate uCampaign T&Cs — which allow the developer to share app users’ personal info (and phone book contacts) with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.
That’s incredibly wide-ranging.
So every user’s phone book contacts (potentially hundreds of individuals per user) could have been passed to multiple unidentified organizations without people’s knowledge or consent. (Other uCampaign apps have been built for the NRA, and for anti-abortion organizations, for example.)
uCampaign‘s T&Cs are here: https://ucampaignapp.com/privacy.html
Even the current T&Cs allow for sharing with US suppliers.
Given the committee’s very public concerns about access to people’s data for political targeting purposes I am keen to know whether Mr Collins has any concerns about the use of uCampaign‘s app infrastructure by the Conservative party?
And also whether he is concerned about the lack of a robust data protection policy by his own party to ensure that valuable membership data is not simply passed around to unknown and unconnected entities — perhaps abroad, perhaps not — with zero regard for or accountability to the individuals in question.

Unfortunately this email (and a follow up) to the DCMS committee, asking for a response from Collins to our privacy concerns, went unanswered.

It’s also worth noting that the Conservative Party’s own privacy policy (which it’s now using for its Campaigner app) is pretty generous vis-a-vis the permissions it’s granting itself over sharing supporters’ data — including stating that it shares data with:

  • The wider Conservative Party
  • Business associates and professional advisers
  • Suppliers
  • Service providers
  • Financial organisations – such as credit card payment providers
  • Political organisations
  • Elected representatives
  • Regulatory bodies
  • Market researchers
  • Healthcare and welfare organisations
  • Law enforcement agencies

The UK’s data watchdog recently found fault with pretty much all of the UK’s political parties when it comes to their handling of voter data — saying it had sent warning letters to 11 political parties and also issued notices compelling them to agree to audits of their data protection practices.

Safe to say, it’s not just private companies that have been sticking their hand in the personal data cookie jar in recent years — the political establishment is facing plenty of awkward questions as regulators unpick where and how data has been flowing.

This is also not the only awkward story re: data privacy concerns related to a Tory political app. Earlier this year the then-minister in charge of the digital brief, Matt Hancock, launched a self-promotional, self-branded app intended for his constituents to keep up with news about Matt Hancock MP.

However the developers of the app (Disciple Media) initially uploaded the wrong privacy policy — and were forced to issue an amended version which did not grant the minister such non-specific and oddly toned rights to users’ data — such as that the app “may disclose your personal information to the Publisher, the Publisher’s management company, agent, rights image company, the Publisher’s record label or publisher (as applicable) and any other third parties, for use in conjunction with additional user promotions or offers they may run from time to time or in relation to the sale of other goods and services”.

Of course the Matt Hancock App was a PR initiative of (and funded by) an individual Conservative MP — rather than a formal campaign tool paid for by the Conservative Party and intended for use by hundreds (or even thousands) of Party activists for use during election campaigns.

So while there are two issues of Tory-related privacy concern here, only one loops back to the Conservative Party political organization itself.

Facebook finally hands over leave campaign Brexit ads

The UK parliament has provided another telling glimpse behind the curtain of Facebook’s unregulated ad platform by publishing data on scores of pro-Brexit adverts which it distributed to UK voters during the 2016 referendum on European Union membership. The ads were run on behalf of several vote leave campaigns, which paid a third company to use Facebook’s ad targeting tools.

The ads were run prior to Facebook having any disclosure rules for political ads. So there was no way for anyone other than each target recipient to know a particular ad existed or who it was being targeted at.

The targeting of the ads was carried out on Facebook’s platform by AggregateIQ, a Canadian data firm that has been linked to Cambridge Analytica/SCL — aka the political consultancy at the center of a massive Facebook data misuse storm, including by Facebook itself, which earlier this year told the UK parliament it had found billing and administration connections between the two.

Aggregate IQ is now under joint investigation by Canadian data watchdogs. But in 2016 the data firm was paid £3.5M by a number of Brexit-supporting campaigns to spend on targeted social media advertising, using Facebook as the primary conduit.

Facebook was asked by the UK parliament’s DCMS committee to disclose the Brexit ads — as part of its multi-month enquiry investigating fake news and the impact of online disinformation on democratic processes. The company eventually did so, releasing ads run by AIQ for the official Vote Leave campaign, BeLeave/Brexit Central, and DUP Vote Leave.

Several of the Brexit campaigns whose ads have now been made public were also recently found to have broken UK election law by breaching campaign spending limits. Most notably the Electoral Commission found that the youth-focused campaign, BeLeave, had been joint-working with the official Vote Leave campaign — yet the pair had not jointly declared spending, thereby enabling the official campaign to overspend by almost half a million pounds. And that overspend went straight to Aggregate IQ to run targeted Facebook ads.

The committee has now published the Brexit ads that Facebook disclosed to it, more than two years after the referendum vote took place. Facebook also provided the committee with ad impression ranges and some targeting data, which have likewise been published. The committee’s enquiry remains ongoing.

In a letter to the committee, Facebook says it’s unable to disclose ads run by AIQ for another Brexit campaign, Veterans for Britain, saying that campaign “has not permitted us to disclose that information to you”. So the view of the Brexit political ads we’re finally getting is by no means complete. Facebook’s platform also essentially enables anyone to be an advertiser — so it’s entirely possible other Brexit related messages were distributed using its ad tools.

In the case of the Brexit ads run by AIQ specifically, it’s not clear how many ad impressions they racked up in all. But total impressions look very sizable.

While some of the many thousands of distinctly targeted ads AIQ distributed via Facebook’s platform are listed as garnering between 0 and 999 impressions apiece, according to Facebook’s data, others racked up far more views. Commonly listed ranges include 50,000 to 99,999 and 100,000 to 199,999 — with even higher ranges like 2M-4.9M and 5M-9.9M also listed.

One ad that generated ad impressions of between 2M-4.9M was targeted almost exclusively (99%) at English Facebook users — and included the claim that: “EU protectionism has prevented our generation from benefiting from key global trade deals. It is time we unite to give our country the freedom to be a prosperous and competitive nation!”

A spokesperson for the DCMS committee told us it hadn’t had a chance to compile the thousands of ad impression ranges into a total ad impression range — but had rather published the data as it had received it from Facebook. We’ve also asked the company to provide an estimate of the total ad impressions and will update this story with any response.

The ad creative used by these campaigns has been published as well and — across all of them — the adverts display a mixture of (roundly debunked) claims about suddenly being able to spend ‘£350M a week on the NHS’, rousing calls to ‘take back control’ (including a bunch of ‘hero’ shots of Boris Johnson), and ample fearmongering about EU regulations ‘holding the UK back’ or posing a risk to UK jobs and wages. There’s also a lot of out-and-out ‘project fear’ messaging, with the official Vote Leave campaign especially deploying direct dogwhistle racism to stir up fear among voters about foreigners coming to the UK if it can’t control its own border or if the EU expands to add more countries…
At a glance, the Brexit ad creative is not as ‘out there’ as some of the wilder stuff the Kremlin was pumping onto Facebook’s platform to try to skew the 2016 U.S. election.

But the blatant xenophobia leaves a very bad taste.

In the case of Brexit Central/BeLeave, their ad creative was more subtle in its xenophobia — urging target recipients to back a “fair immigration system” or an “Australian-style points based system” but without making any direct references to any specific non-EU countries.

The youth campaign also created a couple of ads (below) which invoked consumer technology as a reason to back Brexit — with one appealing to users of ride-hailing apps and another to users of video-streaming apps to reject the EU, by suggesting its regulations might interfere with access to the services they use.

Ironically enough, it was London’s transport authority that withdrew Uber’s license to operate last year — in a regulatory decision that had absolutely nothing to do with the EU. (Uber has since appealed and got a 15-month reprieve.)

Though, also last year, the EU’s top court judged that a spade is a spade — and Uber is a transport company, not a mere technology platform. Yet the ruling has not prevented Uber from continuing to operate, and even expand ride-hailing services, in Europe. Sure, it has to work more closely with city authorities now, but that means meshing with local priorities rather than seeking to override what local people want.

In a further irony, the EU also took steps to liberalize passenger transport services, back in 2007, issuing a directive that makes it harder for city authorities to place their own controls on ride-hailing services. Evidently, though, facts didn’t get a starring role in the vote leave Brexit ads.

As for quotas on streaming services, it’s a curious thing to complain about — especially to a youth-focused audience which you’re also targeting with ads claiming they’ll have better job prospects outside the EU.

The EU has merely suggested online streaming services should ensure up to a third of the films and TV shows they offer are made in Europe, and contribute to subsidizing that local production.

Which seems unlikely to have a deleterious impact on European creative industries, given platforms would be contributing to the development of local audiovisual production. So — in plainer English — it should mean more money to support more creative jobs in Europe which many young people would probably love to have a crack at…

The publication of the Brexit ads is, above all, a reminder that online political advertising has been allowed to be a black hole — and at times a cesspit — because cash-rich entities have been able to unaccountably exploit the obscurity of Facebook’s systemically dark ad targeting tools for their own ends, leaving no right of public objection let alone reply for the people as a whole whose lives are affected by political outcomes such as referendums.

Facebook has been making some voluntary changes to offer a degree of political ad disclosure, as it seeks to stave off regulation. Whether its changes — which at best offer partial visibility — will go far enough remains to be seen. And of course they come too late to change the conversation around Brexit.

Which is why, earlier this month, the UK’s data watchdog called for an ethical pause on the use of political ads — saying there’s a risk of democracy being digitally undermined.

“It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld,” it wrote. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default.”

Cambridge Analytica’s Nix said it licensed ‘millions of data points’ from Acxiom, Experian, Infogroup to target US voters

The repeat grilling today by the UK parliament’s DCMS committee of Alexander Nix, the former CEO of the now-defunct Cambridge Analytica — aka the controversial political and commercial ad agency at the center of a Facebook data misuse scandal — did not shed much new light on what may or may not have been going on inside the company.

But one nugget of information Nix let slip was the names of specific data aggregators he said Cambridge Analytica had bought “consumer and lifestyle” information on US voters from, to link to voter registration data it also paid to acquire — apparently using that combined database to build models to target American voters in the 2016 presidential election, rather than using data improperly obtained from Facebook.

This is more information than Cambridge Analytica has thus far disclosed to one US voter, professor David Carroll, who in January last year lodged a subject access request with the UK-based company after learning it had processed his personal information — only to be fobbed off with a partial disclosure.

Carroll persisted, and made a complaint to the UK’s data protection watchdog, and last month the ICO ordered Cambridge Analytica to provide him with all the data it held on him. The deadline for that passed yesterday — with no response.

The committee questioned Nix closely over responses he had given it at his earlier appearance in February, when he denied that Cambridge Analytica used Facebook data as the foundational data-set for its political ad targeting business.

He had instead said that the work Dr Aleksandr Kogan did for the company was “fruitless” and thus that the Facebook data Kogan had harvested and supplied to it had not been used.

“It wasn’t the foundational data-set on which we built our company,” said Nix today. “Because we went out and we licensed millions of data points on American individuals from very large reputable data aggregators and data vendors such as Acxiom, Experian, Infogroup. That was the cornerstone of our database together with political data — voter file data, I beg your pardon — which again is commercially available in the United States. That was the cornerstone of our company and on which we continued to build the company after we realized that the GSR data was fruitless.”

“The data that Dr Kogan gave to us was modeled data and building a model on top of a model proved to be less statistically accurate… than actually just using Facebook’s own algorithms for placing advertising communications. And that was what we found out,” he added. “So I stand by that statement that I made to you before — and that was echoed and amplified in much more technical detail by Dr Kogan.”

And Kogan did indeed play down the utility of the work he did for Cambridge Analytica — claiming it was essentially useless when he appeared before the committee back in April.

Asked about the exact type of data Cambridge Analytica/SCL acquired and processed from data brokers, Nix told the committee: “This is largely — largely — consumer and lifestyle data. So this is data on, for instance, loyalty card data, transaction data, this is data that pertains to lifestyle choices, such as what car you drive or what magazines you read. It could be data on consumer habits. And together with some demographic and geographic data — and obviously the voter data, which is very important for US politics.”

We’ve asked the three data brokers named by Nix to confirm Cambridge Analytica was a client of theirs, and the types of data it licensed from them, and will update this report with any response.

Fake news committee told it’s been told fake news

What was most notable about Nix’s second appearance in front of the DCMS committee — which is investigating the role and impact of fake news/online disinformation on the political process — were his attempts to shift the spotlight via a string of defiant denials that there was much of a scandal to see here.

He followed a Trumpian strategy of trying to cast himself (and his former company) as victims — framing the story as a liberal media conspiracy and claiming no evidence of wrongdoing or unethical behavior had been produced.

Cambridge Analytica whistleblower Chris Wylie, who Nix had almost certainly caught sight of sitting in the public gallery, was described as a “bitter and jealous” individual who had acted out of resentment and spite on account of the company’s success.

Though the committee pushed back against that characterization, pointing out that Wylie has provided ample documents backing up his testimony, and that it has also taken evidence from multiple sources — not just from one former employee.

Nix did not dispute that the Facebook data-harvesting element of the scandal had been a “debacle”, as he put it.

Though he reiterated Cambridge Analytica’s previous denial that it was ever the recipient of the full data-set Kogan acquired from Facebook — which Facebook confirmed in April consisted of information on as many as 87M of its users — saying it “only received data on about 26M-27M individuals in the USA”.

He also admitted to personally being “foolish” in what he had been caught saying to an undercover Channel 4 reporter — when he had appeared to suggest Cambridge Analytica used tactics such as honeytraps and infiltration to gain leverage against clients’ political opponents (comments that got him suspended as CEO), saying he had only been talking in hypotheticals in his “overzealousness to secure a contract” — and once again painting himself as the victim of the “skillful manipulation of a journalist”.

He also claimed the broadcaster had taken his remarks out of context, claiming too that they had heavily edited the footage to make it look worse (a claim Channel 4 phoned in to the committee to “heavily” refute during the session).

But those few apologetic notes did little to soften the tone of profound indignation Nix struck throughout almost the entire session.

He came across as poised and well-versed in his channeled outrage. Though he has of course had plenty of time since his earlier appearance — when the story had not yet become a major scandal — to construct a version of events that could best serve to set the dial to maximum outrage.

Nix also shut down several lines of the committee’s questions, refusing to answer whether Cambridge Analytica/SCL had gone on to repeat the Facebook data-harvesting method at the heart of the scandal themselves, for example.

Nor would he disclose who the owners and shareholders of Cambridge Analytica and SCL Group are — claiming in both cases that ongoing investigations prevented him from doing so.

Though, in the case of the Information Commissioner’s Office’s ongoing investigation into social media analytics and political campaigning — which resulted in the watchdog raiding the offices of Cambridge Analytica in March — committee chair Damian Collins made a point of stating the ICO had assured it that it had no objection to Nix answering its questions.

Nonetheless Nix declined.

He also refused to comment on fresh allegations printed in the FT suggesting he had personally withdrawn $8M from Cambridge Analytica before the company collapsed into administration.

Some answers were forthcoming when the committee pressed him on whether Aggregate IQ, a Canadian data company that has been linked to Cambridge Analytica, and which Nix described today as a “subcontractor” for certain pieces of work, had ever had access to raw data or modeled data that Cambridge Analytica held.

The committee’s likely interest in pursuing that line of questioning was to try to determine whether AIQ could have gained access to the cache of Facebook user data that found its way (via Kogan) to Cambridge Analytica — and thus whether it could have used it for its own political ad targeting purposes.

AIQ received £3.5M from leave campaign groups in the run up to the UK’s 2016 EU referendum campaign, and has been described by leave campaigners as instrumental in securing their win, though exactly where it obtained data for targeting referendum ads has been a key question for the enquiry.

On this Nix said: “It wouldn’t be unusual for AIQ or Cambridge Analytica to work on a client’s data-sets… And to have access to the data whilst we were working on them. But that didn’t entitle us to have any privileges over that data or any wherewithal to make a copy or retain any of that data ourselves.

“The relationship with AIQ would not have been dissimilar to that — as a subcontractor who was brought in to assist us on projects, they would have had, possibly, access to some of the data… whether that was modeled data or otherwise. But again that would be covered by the contract relationship that we have with them.”

Though he also said he couldn’t give a concrete answer on whether or not AIQ had had access to any raw data, adding: “I did speak to my data team prior to this hearing and they assured me there was no raw data that went into the Rippon platform [voter engagement platform AIQ built for Cambridge Analytica]. I can only defer to their expertise.”

Also on this, in prior evidence to the committee Facebook said it did not believe AIQ had used the Facebook user data obtained via Kogan’s apps for targeting referendum ads because the company had used email address uploads to Facebook’s ad platform for targeting “many” of its ads during the referendum — and it said Kogan’s app had not gathered the email addresses of app installers or their friends.

(And in its evidence to the committee AIQ’s COO Jeff Silvester also claimed: “The only personal information we use in our work is that which is provided to us by our clients for specific purposes. In doing so, we believe we comply with all applicable privacy laws in each jurisdiction where we work.”)

Today Nix flatly denied that Cambridge Analytica had played any role in the UK’s referendum campaign, despite the fact it was already known to have done some “scoping work” for UKIP — work it invoiced the party for (but claims it was never paid for), which Nix did not deny had taken place but which he downplayed.

“We undertook some scoping work to look at these data. Unfortunately, whilst this work was being undertaken, we did not agree on the terms of a contract, as a consequence the deliverables from this work were not handed over, and the invoice was not paid. And therefore the Electoral Commission was absolutely satisfied that we did not do any work for Leave.EU and that includes for UKIP,” he said.

“At times we undertake eight, nine, ten national elections a year somewhere around the world. We’ve never undertaken an election in the UK so I stand by my statement that the UK was not a target country of interest to us. Obviously the referendum was a unique moment in international campaigning and for that reason it was more significant than perhaps other opportunities to work on political campaigns might have been which was why we explored it. But we didn’t work on that campaign either.”

In a less comfortable moment for Nix, committee member Christian Matheson referred to a Cambridge Analytica document that the committee had obtained — described as a “digital overview” — and which listed “denial of service attacks” among the “digital interventions” apparently being offered by it as services.

Did you ever undertake any denial of service attacks, Nix was asked?

“So this was a company that we looked at forming, and we never formed. And that company never undertook any work whatsoever,” he responded. “In answer to your question, no we didn’t.”

Why did you consider it, wondered Matheson?

“Uh, at the time we were looking at, uh, different technologies, expanding into different technological areas and, uh, this seemed like, uh, an interesting, uh, uh, business, but we didn’t have the capability was probably the truth to be able to deliver meaningfully in this business,” said Nix. “So.”

Matheson: “Was it illegal at that time?”

Nix: “I really don’t know. I can’t speak to technology like that.”

Matheson: “Right. Because it’s illegal now.”

Nix: “Right. I don’t know. It’s not something that we ever built. It’s not something that we ever undertook. Uh, it’s a company that was never realized.”

Matheson: “The only reason I ask is because it would give me concern that you have the mens rea to undertake activities which are, perhaps, outside the law. But if you never went ahead and did it, fair enough.”

Another moment of discomfort for Nix was when the committee pressed him about money transfers between Cambridge Analytica/SCL’s various entities in the US and UK — pointing out that if funds were being shifted across the Atlantic for political work and not being declared that could be legally problematic.

Though he fended this off by declining to answer — again citing ongoing investigations.

He was also asked where the various people had been based when Cambridge Analytica had been doing work for US campaigns and processing US voters’ data — with Collins pointing out that if that had been taking place outside the US it could be illegal under US law. But again he declined to answer.

“I’d love to explain this to you. But this again touches on some of these investigations — I simply can’t do that,” he said.

Zuckerberg didn’t make any friends in Europe today

Speaking in front of EU lawmakers today, Facebook’s founder Mark Zuckerberg namechecked the GDPR’s core principles of “control, transparency and accountability” — claiming his company will deliver on all three come Friday, when the European Union’s new data protection framework, the GDPR, starts being applied, finally with penalties weighty enough to make enforcement count.

However there was little transparency or accountability on show during the session, given the upfront questions format which saw Zuckerberg cherry-picking a few comfy themes to riff on after silently absorbing an hour of MEPs’ highly specific questions with barely a facial twitch in response.

The questions MEPs asked of Zuckerberg were wide ranging and often drilled deep into key pressure points around the ethics of Facebook’s business — ranging from how deep the app data misuse privacy scandal rabbithole goes; to whether the company is a monopoly that needs breaking up; to how users should be compensated for misuse of their data.

Is Facebook genuinely complying with GDPR, he was asked several times (unsurprisingly, given the scepticism of data protection experts on that front). Why did it choose to shift ~1.5BN users out of reach of the GDPR? Will it offer a version of its platform that lets people completely opt out of targeted advertising — something it has so far studiously avoided doing?

Why did it refuse a public meeting with the EU parliament? Why has it spent “millions” lobbying against EU privacy rules? Will the company commit to paying taxes in the markets where it operates? What’s it doing to prevent fake accounts? What’s it doing to prevent bullying? Does it regulate content or is it a neutral platform?

Zuckerberg made like a sponge and absorbed all this fine-grained flak. But when the time came for responses the data flow was not reciprocal: self-serving talking points on self-selected “themes” were all he had come prepared to serve up.

Yet — and here the irony is very rich indeed — people’s personal data flows liberally into Facebook, via all sorts of tracking technologies and techniques.

And as the Cambridge Analytica data misuse scandal has now made amply clear, people’s personal information has also very liberally leaked out of Facebook — oftentimes without their knowledge or consent.

But when it comes to Facebook’s own operations, the company maintains a highly filtered, extremely partial ‘newsfeed’ on its business empire — keeping a tight grip on the details of what data it collects and why.

Only last month Zuckerberg sat in Congress avoiding giving straight answers to basic operational questions. So if any EU parliamentarians had been hoping for actual transparency and genuine accountability from today’s session they would have been sorely disappointed.

Yes, you can download the data you’ve willingly uploaded to Facebook. Just don’t expect Facebook to give you a download of all the information it’s gathered and inferred about you.

The EU parliament’s political group leaders seemed well tuned to the myriad concerns now flocking around Facebook’s business. And were quick to seize on Zuckerberg’s dumbshow as further evidence that Facebook needs to be ruled.

Thing is, in Europe regulation is not a dirty word. And GDPR’s extraterritorial reach and weighty public profile looks to be further whetting political appetites.

So if Facebook was hoping the mere appearance of its CEO sitting in a chair in Brussels, going through the motions of listening before reading from his usual talking points, would be enough to placate EU lawmakers, that looks to have been a major miscalculation.

“It was a disappointing appearance by Zuckerberg. By not answering the very detailed questions by the MEPs he didn’t use the chance to restore trust of European consumers but in contrary showed to the political leaders in the European Parliament that stronger regulation and oversight is needed,” Green MEP and GDPR rapporteur Jan Philipp Albrecht told us after the meeting.

Albrecht had pressed Zuckerberg about how Facebook shares data between Facebook and WhatsApp — an issue that has raised the ire of regional data protection agencies. And while DPAs forced the company to turn off some of these data flows, Facebook continues to share other data.

The MEP had also asked Zuckerberg to commit to no exchange of data between the two apps. Zuckerberg determinedly made no such commitment.

Claude Moraes, chair of the EU parliament’s civil liberties, justice and home affairs (Libe) committee, issued a slightly more diplomatic reaction statement after the meeting — yet also with a steely undertone.

“Trust in Facebook has suffered as a result of the data breach and it is clear that Mr. Zuckerberg and Facebook will have to make serious efforts to reverse the situation and to convince individuals that Facebook fully complies with European Data Protection law. General statements like ‘We take privacy of our customers very seriously’ are not sufficient, Facebook has to comply and demonstrate it, and for the time being this is far from being the case,” he said.

“The Cambridge Analytica scandal was already in breach of the current Data Protection Directive, and would also be contrary to the GDPR, which is soon to be implemented. I expect the EU Data Protection Authorities to take appropriate action to enforce the law.”

Damian Collins, chair of the UK parliament’s DCMS committee, which has thrice tried and failed to get Zuckerberg to appear before it, did not mince his words at all. Though he has little reason to: having been so thoroughly rejected by the Facebook founder — and having accused the company of a pattern of evasive behavior to its CTO’s face — there’s clearly not much left to hold out for now.

“What a missed opportunity for proper scrutiny on many crucial questions raised by the MEPs. Questions were blatantly dodged on shadow profiles, sharing data between WhatsApp and Facebook, the ability to opt out of political advertising and the true scale of data abuse on the platform,” said Collins in another reaction statement after the meeting. “Unfortunately the format of questioning allowed Mr Zuckerberg to cherry-pick his responses and not respond to each individual point.

“I echo the clear frustration of colleagues in the room who felt the discussion was shut down,” he added, ending with a fourth (doubtless equally forlorn) request for Zuckerberg to appear in front of the DCMS Committee to “provide Facebook users the answers they deserve”.

In the latter stages of today’s EU parliament session several MEPs — clearly very exasperated by the straitjacketed format — resorted to heckling Zuckerberg to press for answers he had not given them.

“Shadow profiles,” interjected one, seizing on a moment’s hesitation as Zuckerberg sifted his notes for the next talking point. “Compensation,” shouted another, earning a snort of laughter from the CEO and some more theatrical note flipping to buy himself time.

Then, appearing slightly flustered, Zuckerberg looked up at one of the hecklers and said he would engage with his question — about shadow profiles (though Zuckerberg dare not speak that name, of course, given he claims not to recognize it) — arguing Facebook needs to hold onto such data for security purposes.

Zuckerberg did not specify, as MEPs had asked him to, whether Facebook uses data about non-users for any purposes other than the security scenario he chose to flesh out (aka “keeping bad content out”, as he put it).

He also ignored a second follow-up pressing him on how non-users can “stop that data being transferred”.

“On the security side we think it’s important to keep it to protect people in our community,” Zuckerberg said curtly, before turning to his lawyer for a talking point prompt (couched as an ask if there are “any other themes we wanted to get through”).

His lawyer hissed to steer the conversation back to Cambridge Analytica — to Facebook’s well-trodden PR about how they’re “locking down the platform” to stop any future data heists — and the Zuckbot was immediately back in action regurgitating his now well-practiced crisis PR around the scandal.

What was very clearly demonstrated during today’s session was the Facebook founder’s preference for control — that’s to say control which he is exercising.

Hence the fixed format of the meeting, which had been negotiated prior to Facebook agreeing to meet with EU politicians, and which clearly favored the company by allowing no formal opportunity for follow ups from MEPs.

Zuckerberg also tried several times to wrap up the meeting — by insinuating and then announcing time was up. MEPs ignored these attempts, and Zuckerberg seemed most uncomfortable at not having his orders instantly carried out.

Instead he had to sit and watch a micro negotiation between the EU parliament’s president and the political groups over whether they would accept written answers to all their specific questions from Facebook — before he was publicly put on the spot by president Antonio Tajani to agree to provide the answers in writing.

Although, as Collins has already warned MEPs, Facebook has had plenty of practice at generating wordy but empty responses to politicians’ questions about its business processes — responses which evade the spirit and specifics of what’s being asked.

The self-control on show from Zuckerberg today is certainly not the kind of guardrail that European politicians increasingly believe social media needs. Self-regulation, observed several MEPs to Zuckerberg’s face, hasn’t worked out so well, has it?

The first MEP to lay out his questions warned Zuckerberg that apologizing is not enough. Another pointed out he’s been on a contrition tour for about 15 years now.

Facebook needs to make a “legal and moral commitment” to the EU’s fundamental values, he was told by Moraes. “Remember that you’re here in the European Union where we created GDPR so we ask you to make a legal and moral commitment, if you can, to uphold EU data protection law, to think about ePrivacy, to protect the privacy of European users and the many millions of European citizens and non-Facebook users as well,” said the Libe committee chair.

But self-regulation — or, the next best thing in Zuckerberg’s eyes: ‘Facebook-shaped regulation’ — was what he had come to advocate for, picking up on the MEPs’ regulation “theme” to respond with the same line he fed to Congress: “I don’t think the question here is whether or not there should be regulation. I think the question is what is the right regulation.”

“The Internet is becoming increasingly important in people’s lives. Some sort of regulation is important and inevitable. And the important thing is to get this right,” he continued. “To make sure that we have regulatory frameworks that help protect people, that are flexible so that they allow for innovation, that don’t inadvertently prevent new technologies like AI from being able to develop.”

He even brought up startups — claiming ‘bad regulation’ (I paraphrase) could present a barrier to the rise of future dormroom Zuckerbergs.

Of course he failed to mention how his own dominant platform is the attention-sapping, app-gobbling elephant in the room crowding out the next generation of would-be entrepreneurs. But MEPs’ concerns about competition were clear.

Instead of making friends and influencing people in Brussels, Zuckerberg looks to have delivered less than if he’d stayed away — angering and alienating the very people whose job it will be to amend the EU legislation that’s coming down the pipe for his platform.

Ironically one of the few specific questions Zuckerberg chose to answer was a false claim by MEP Nigel Farage — who had wondered whether Facebook is still a “neutral political platform”, griping about drops in engagement for rightwing entities ever since Facebook’s algorithmic changes in January, before claiming, erroneously, that Facebook does not disclose the names of the third party fact checkers it uses to help it police fake news.

So — significantly, and as was also evident in the US Senate and Congress — Facebook was taking flak from both the left and the right of the political spectrum, implying broad, cross-party support for regulating these algorithmic platforms.

Actually Facebook does disclose those fact checking partnerships. But it’s pretty telling that Zuckerberg chose to expend some of his oh-so-slender speaking time to debunk something that really didn’t merit the breath.

Farage had also claimed, during his three minutes, that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”. 

Funnily enough Zuckerberg didn’t make time to comment on that.