Tag Archives: app developer

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns about the scope of data processing set out in the Conservative Party’s Privacy Policy being flagged to it by a Twitter user, the Information Commissioner’s Office replied: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently being attached to an online call to action that asks Brits to tell the party what the most “important issue” is to them and their family, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick from a pre-populated list of 18 issues the three most important to them. The list runs the gamut from the National Health Service to Brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His government has been reduced to a working majority of just one MP after the party lost a by-election to the Liberal Democrats last week, even as the October 31 Brexit deadline fast approaches.

People who submit their personal data to the Conservative’s online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you will find a laundry list of examples of types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages contained within it. 

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which it launched in response to concerns about the lack of transparency around political ads on its platform, saying it will retain imprints of ads sent by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS later urged government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, gave little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.


The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm, Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent — in that case using a quiz app wrapper and Facebook’s lack of any enforcement of its platform terms to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian, Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’ — and do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service, letting advertisers pay to identify and target other people with a similar social media usage profile to those whose contact details they already hold, by uploading their details to Facebook.

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile to others in the party’s target database.
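For the “uploading their details” step, Facebook’s Custom Audiences tooling (the seed lists from which lookalike audiences are built) matches on normalized, SHA-256-hashed identifiers rather than raw contact details. A minimal sketch of what that client-side preparation typically looks like — the function names and sample list here are illustrative, not drawn from any real campaign tool:

```python
import hashlib

def normalize_email(email: str) -> str:
    # Matching services generally require trimmed, lower-cased identifiers
    # before hashing, so "Jane@Example.com " and "jane@example.com"
    # produce the same digest.
    return email.strip().lower()

def hash_identifier(value: str) -> str:
    # SHA-256 hex digest of the normalized value; the raw identifier
    # itself is not part of the upload payload.
    return hashlib.sha256(normalize_email(value).encode("utf-8")).hexdigest()

# A hypothetical supporter list, hashed before upload as an audience seed:
supporters = ["Jane@Example.com ", "john.smith@example.org"]
hashed_audience = [hash_identifier(e) for e in supporters]
```

Hashing obscures the identifiers in transit, but it does not change the privacy dynamic described above: once matched, the platform can expand the seed list to strangers who merely resemble it.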

Facebook users currently have no way of blocking being targeted by political advertisers on Facebook, nor indeed any way to generally switch off microtargeted ads which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder, campaign management software developed in the US a decade ago — which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder to link social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information and collation of social media with membership lists without adequate privacy assessments.

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

Facebook data misuse firm snubs UK watchdog’s legal order

The company at the center of a major Facebook data misuse scandal has failed to respond to a legal order issued by the U.K.’s data protection watchdog to provide a U.S. voter with all the personal information it holds on him.

An enforcement notice was served on Cambridge Analytica affiliate SCL Elections last month, and the deadline passed today without the company providing a response.

The enforcement order followed a complaint by the U.S. academic, professor David Carroll, that the original Subject Access Request (SAR) he made under European law seeking to obtain his personal data had not been satisfactorily fulfilled.

The academic has spent more than a year trying to obtain the data Cambridge Analytica/SCL held on him after learning the company had built psychographic profiles of U.S. voters for the 2016 presidential election, when it was working for the Trump campaign.

Speaking in front of the EU parliament’s justice, civil liberties and home affairs (LIBE) committee today, Carroll said: “We have heard nothing [from SCL in response to the ICO’s enforcement order]. So they have not respected the regulator. They have not co-operated with the regulator. They are not respecting the law, in my opinion. So that’s very troubling — because they seem to be trying to use liquidation to evade their responsibility as far as we can tell.”

While he is not a U.K. citizen, Carroll discovered his personal data had been processed in the U.K. so he decided to bring a test case under U.K. law. The ICO supported his complaint — and last month ordered Cambridge Analytica/SCL Elections to hand over everything it holds on him, warning that failure to comply with the order is a criminal offense that can carry an unlimited fine.

At the same time — and pretty much at the height of a storm of publicity around the data misuse scandal — Cambridge Analytica and SCL Elections announced insolvency proceedings, blaming what they described as “unfairly negative media coverage.”

Its Twitter account has been silent ever since. Though company directors, senior management and investors were quickly spotted attaching themselves to yet another data company. So the bankruptcy proceedings look rather more like an exit strategy to try to escape the snowballing scandal and cover any associated data trails.

There are a lot of data trails though. Back in April Facebook admitted that data on as many as 87 million of its users had been passed to Cambridge Analytica without most of the people’s knowledge or consent.

“I expected to help set precedents of data sovereignty in this case. But I did not expect to be trying to also set rules of liquidation as a way to avoid responsibility for potential data crimes,” Carroll also told the LIBE committee. “So now that this is seeming to becoming a criminal matter we are now in uncharted waters.

“I’m seeking full disclosure… so that I can evaluate if my opinions were influenced for the presidential election. I suspect that they were, I suspect that I was exposed to malicious information that was trying to [influence my vote] — whether it did is a different question.”

He added that he intends to continue to pursue a claim for full disclosure via the courts, arguing that the only way to assess whether psychographic models can successfully be matched to online profiles for the purposes of manipulating political opinions — which is what Cambridge Analytica/SCL stands accused of misusing Facebook data for — is to see how the company structured and processed the information it sucked out of Facebook’s platform.

“If the predictions of my personality are in 80-90% then we can understand that their model has the potential to affect a population — even if it’s just a tiny slice of the population. Because in the US only about 70,000 voters in three states decided the election,” he added.

What comes after Cambridge Analytica?

The LIBE committee hearing in the European Union’s parliament is the first of a series of planned sessions focused on digging into the Cambridge Analytica Facebook scandal and “setting out a way forward,” as committee chair Claude Moraes put it.

Today’s hearing took evidence from former Facebook employee turned whistleblower Sandy Parakilas; investigative journalist Carole Cadwalladr; Cambridge Analytica whistleblower Chris Wylie; and the U.K.’s ICO Elizabeth Denham, along with her deputy, James Dipple-Johnstone.

The Information Commissioner’s Office has been running a more-than-year-long investigation into political ad targeting on online platforms — which now of course encompasses the Cambridge Analytica scandal and much more besides.

Denham described it today as “unprecedented in scale” — and likely the largest investigation ever undertaken by a data protection agency in Europe.

The inquiry is looking at “exactly what data went where; from whom; and how that flowed through the system; how that data was combined with other data from other data brokers; what were the algorithms that were processed,” explained Dipple-Johnstone, who is leading the investigation for the ICO.

“We’re presently working through a huge volume — many hundreds of terabytes of data — to follow that audit trail and we’re committed to getting to the bottom of that,” he added. “We are looking at over 30 organizations as part of this investigation and the actions of dozens of key individuals. We’re investigating social media platforms, data brokers, analytics firms, political parties and campaign groups across all spectrums and academic institutions.

“We are looking at both regulatory and criminal breaches, and we are working with other regulators, EU data protection colleagues and law enforcement in the U.K. and abroad.”

He said the ICO’s report is now expected to be published at the end of this month.

Denham previously told a U.K. parliamentary committee she’s leaning toward recommending a code of conduct for the use of social media in political campaigns to avoid the risk of political uses of the technology getting ahead of the law — a point she reiterated today.

“Beyond data protection I expect my report will be relevant to other regulators overseeing electoral processes and also overseeing academic research,” she said, emphasizing that the recommendations will be relevant “well beyond the borders of the U.K.”

“What is clear is that work will need to be done to strengthen information-sharing and closer working across these areas,” she added.

Many MEPs asked the witnesses for their views on whether the EU’s new data protection framework, the GDPR, is sufficient to curb the kinds of data abuse and misuse that have been so publicly foregrounded by the Cambridge Analytica-Facebook scandal — or whether additional regulations are required.

On this Denham made a plea for GDPR to be “given some time to work.” “I think the GDPR is an important step, it’s one step but remember the GDPR is the law that’s written on paper — and what really matters now is the enforcement of the law,” she said.

“So it’s the activities that data protection authorities are willing to do. It’s the sanctions that we look at. It’s the users and the citizens who understand their rights enough to take action — because we don’t have thousands of inspectors that are going to go around and look at every system. But we do have millions of users and millions of citizens that can exercise their rights. So it’s the enforcement and the administration of the law. It’s going to take a village to change the scenario.

“You asked me if I thought this kind of activity which we’re speaking about today — involving Cambridge Analytica and Facebook — is happening on other platforms or if there’s other applications or if there’s misuse and misselling of personal data. I would say yes,” she said in response to another question from an MEP.

“Even in the political arena there are other political consultancies that are pairing up with data brokers and other data analytics companies. I think there is a lack of transparency for users across many platforms.”

Parakilas, a former Facebook platform operations manager — and the closest stand-in for the company in the room — fielded many of the questions from MEPs, including being asked for suggestions for a legislative framework that “wouldn’t put brakes on the development of healthy companies” and also not be unduly burdensome on smaller companies.

He urged EU lawmakers to think about ways to incentivize a commercial ecosystem that works to encourage rather than undermine data protection and privacy, as well as ensuring regulators are properly resourced to enforce the law.

“I think the GDPR is a really important first step,” he added. “What I would say beyond that is there’s going to have to be a lot of thinking that is done about the next generation of technologies — and so while I think GDPR does an admirable job of addressing some of the issues with current technologies the stuff that’s coming is, frankly, when you think about the bad cases, terrifying.

“Things like deepfakes. The ability to create on-demand content that’s completely fabricated but looks real… Things like artificial intelligence which can predict user actions before those actions are actually done. And in fact Facebook is just one company that’s working on this — but the fact that they have a business model where they could potentially sell the ability to influence future actions using these predictions. There’s a lot of thinking that needs to be done about the frameworks for these new technologies. So I would just encourage you to engage as soon as possible on those new technologies.”

Parakilas also discussed fresh revelations related to how Facebook’s platform disseminates user data published by The New York Times at the weekend.

The newspaper’s report details how, until April, Facebook’s API was passing user and friend data to at least 60 device makers without gaining people’s consent — despite a consent decree the company struck with the Federal Trade Commission in 2011, which Parakilas suggested “appears to prohibit that kind of behavior.”

He also pointed out the device maker data-sharing “appears to contradict Facebook’s own testimony to Congress and potentially other testimony and public statements they’ve made” — given the company’s repeat claims, since the Cambridge Analytica scandal broke, that it “locked down” data-sharing on its platform in 2015.

Yet data was still flowing out to multiple device maker partners — apparently without users’ knowledge or consent.

“I think this is a very, very important developing story. And I would encourage everyone in this body to follow it closely,” he said.

Two more LIBE hearings are planned around the Cambridge Analytica scandal — one on June 25 and one on July 2 — with the latter slated to include a Facebook representative.

Mark Zuckerberg himself attended a meeting with the EU parliament’s Council of Presidents on May 22, though the format of the meeting was widely criticized for allowing the Facebook founder to cherry-pick questions he wanted to answer — and dodge those he didn’t.

MEPs pushed for Facebook to follow up with answers to their many outstanding questions — and two sets of Facebook responses have now been published by the EU parliament.

In its follow up responses the company claims, for example, that it does not create shadow profiles on non-users — saying it merely collects information on site visitors in the same way that “any website or app” might.

On the issue of compensation for EU users affected by the Cambridge Analytica scandal — something MEPs also pressed Zuckerberg on — Facebook claims it has not seen evidence that the app developer who harvested people’s data from its platform on behalf of Cambridge Analytica/SCL sold any EU users’ data to the company.

The developer, Dr. Aleksandr Kogan, had been contracted by SCL Elections for U.S.-related election work. Although his apps collected data on Facebook users from all over the world — including some 2.7 million EU citizens.

“We will conduct a forensic audit of Cambridge Analytica, which we hope to complete as soon as we are authorized by the UK’s Information Commissioner,” Facebook also writes on that.