Tag Archives: Mike Schroepfer

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies between the evidence Facebook has given to international parliamentarians and evidence submitted in response to the Washington, DC Attorney General — who is suing Facebook on its home turf over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ to Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General of allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating by saying the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, promising then to “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Fake news inquiry calls for social media levy to defend democracy

A UK parliamentary committee which has been running a multi-month investigation into the impact of online disinformation on political campaigning — and on democracy itself — has published a preliminary report highlighting what it describes as “significant concerns” over the risks to “shared values and the integrity of our democratic institutions”.

It’s calling for “urgent action” from government and regulatory bodies to “build resilience against misinformation and disinformation into our democratic system”.

“We are faced with a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views,” the DCMS committee warns. “In particular, we heard evidence of Russian state-sponsored attempts to influence elections in the US and the UK through social media, of the efforts of private companies to do the same, and of law-breaking by certain Leave campaign groups in the UK’s EU Referendum in their use of social media.”

The inquiry, which was conceived of and begun in the previous UK parliament — before relaunching in fall 2017, after the June General Election — has found itself slap-bang in the middle of one of the major scandals of the modern era, as revelations about the extent of disinformation and social media data misuse, and allegations of election fiddling and law bending, have piled up thick and fast, especially in recent months. (Albeit concerns had been rising steadily ever since the 2016 US presidential election and revelations about the cottage industry of fake news purveyors spun up to feed US voters, in addition to Kremlin troll farm activity.)

Yet the Facebook-Cambridge Analytica data misuse saga (which snowballed into a major global scandal this March) is just one of the strands of the committee’s inquiry. Hence they’ve opted to publish multiple reports — the initial one recommending urgent actions for the government and regulators, which will be followed by another report covering the inquiry’s “wider terms of reference” and including a closer look at the role of advertising. (The latter report is slated to land in the fall.)

For now, the committee is suggesting “principle-based recommendations” designed to be “sufficiently adaptive to deal with fast-moving technological developments”. 

Among a very long list of recommendations are:

  • a levy on social media and tech giants to fund a “major investment” in the UK’s data watchdog so the body is able to “attract and employ more technically-skilled engineers who not only can analyse current technologies, but have the capacity to predict future technologies” — with the tech company levy operating in “a similar vein to the way in which the banking sector pays for the upkeep of the Financial Conduct Authority”. Additionally, the committee wants the government to put forward proposals for an educational levy to be raised on social media companies, “to finance a comprehensive educational framework (developed by charities and non-governmental organisations) and based online”. “Digital literacy should be the fourth pillar of education, alongside reading, writing and maths,” the committee writes. “The DCMS Department should co-ordinate with the Department for Education, in highlighting proposals to include digital literacy, as part of the Physical, Social, Health and Economic curriculum (PSHE). The social media educational levy should be used, in part, by the government, to finance this additional part of the curriculum.” It also wants to see a rolling unified public awareness initiative, part-funded by a tech company levy, to “set the context of social media content, explain to people what their rights over their data are… and set out ways in which people can interact with political campaigning on social media”. “The public should be made more aware of their ability to report digital campaigning that they think is misleading, or unlawful,” it adds
  • amendments to UK Electoral Law to reflect use of new technologies — with the committee backing the Electoral Commission’s suggestion that “all electronic campaigning should have easily accessible digital imprint requirements, including information on the publishing organisation and who is legally responsible for the spending, so that it is obvious at a glance who has sponsored that campaigning material, thereby bringing all online advertisements and messages into line with physically published leaflets, circulars and advertisements”. It also suggests government should “consider the feasibility of clear, persistent banners on all paid-for political adverts and videos, indicating the source and making it easy for users to identify what is in the adverts, and who the advertiser is”. And urges the government to carry out “a comprehensive review of the current rules and regulations surrounding political work during elections and referenda, including: increasing the length of the regulated period; definitions of what constitutes political campaigning; absolute transparency of online political campaigning; a category introduced for digital spending on campaigns; reducing the time for spending returns to be sent to the Electoral Commission (the current time for large political organisations is six months)”.
  • the Electoral Commission to establish a code for advertising through social media during election periods “giving consideration to whether such activity should be restricted during the regulated period, to political organisations or campaigns that have registered with the Commission”. It also urges the Commission to propose “more stringent requirements for major donors to demonstrate the source of their donations”, and backs its suggestion of a change in the rules covering political spending so that limits are put on the amount of money an individual can donate
  • a major increase in the maximum fine that can be levied by the Electoral Commission (currently just £20,000) — saying this should rather be based on a fixed percentage of turnover. It also suggests the body should have the ability to refer matters to the Crown Prosecution Service, before their investigations have been completed; and urges the government to consider giving it the power to compel organisations that it does not specifically regulate, including tech companies and individuals, to provide information relevant to their inquiries, subject to due process.
  • a public register for political advertising — “requiring all political advertising work to be listed for public display so that, even if work is not requiring regulation, it is accountable, clear, and transparent for all to see”. So it also wants the government to conduct a review of UK law to ensure that digital campaigning is defined in a way that includes online adverts that use political terminology but are not sponsored by a specific political party.
  • a ban on micro-targeted political advertising to lookalike audiences online, and a minimum limit for the number of voters sent individual political messages to be agreed at a national level. The committee also suggests the Electoral Commission and the ICO should consider the ethics of Facebook or other relevant social media companies selling lookalike political audiences to advertisers during the regulated period, saying they should consider whether users should have the right to opt out from being included in such lookalike audiences
  • a recommendation to formulate a new regulatory category for tech companies that is not necessarily either a platform or a publisher, and which “tightens tech companies’ liabilities”
  • a suggestion that the government consider (as part of an existing review of digital advertising) whether the Advertising Standards Authority could regulate digital advertising. “It is our recommendation that this process should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms,” the committee writes. “This should include both content that has been referred to them for takedown by their users, and other content that should have been easy for the tech companies to identify for themselves. In these cases, failure to act on behalf of the tech companies could leave them open to legal proceedings launched either by a public regulator, and/or by individuals or organisations who have suffered as a result of this content being freely disseminated on a social media platform.”
  • another suggestion that the government consider establishing a “digital Atlantic Charter as a new mechanism to reassure users that their digital rights are guaranteed” — with the committee also raising concerns that the UK risks a privacy loophole opening up after it leaves the EU, when US-based companies will be able to take UK citizens’ data to the US for processing without the protections afforded by the EU’s GDPR framework (as the UK will then be a third country)
  • a suggestion that a professional “global Code of Ethics” should be developed by tech companies, in collaboration with international governments, academics and other “interested parties” (including the World Summit on Information Society), in order to “set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies”. “New products should be tested to ensure that products are fit-for-purpose and do not constitute dangers to the users, or to society,” it suggests. “The Code of Ethics should be the backbone of tech companies’ work, and should be continually referred to when developing new technologies and algorithms. If companies fail to adhere to their own Code of Ethics, the UK Government should introduce regulation to make such ethical rules compulsory.”
  • the committee also suggests the government avoids using the (charged and confusing) term ‘fake news’ — and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation’. It should also support research into the methods by which misinformation and disinformation are created and spread across the internet, including support for fact-checking. “We recommend that the government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites,” it writes
  • a suggestion that tech companies should be subject to security and algorithmic auditing — with the committee writing: “Just as the finances of companies are audited and scrutinised, the same type of auditing and scrutinising should be carried out on the non-financial aspects of technology companies, including their security mechanisms and algorithms, to ensure they are operating responsibly. The Government should provide the appropriate body with the power to audit these companies, including algorithmic auditing, and we reiterate the point that the ICO’s powers should be substantially strengthened in these respects”. The committee also floats the idea that the Competition and Markets Authority considers conducting an audit of the operation of the advertising market on social media (given the risk of fake accounts leading to ad fraud)
  • a requirement for tech companies to make full disclosure of targeting used as part of advert transparency. The committee says tech companies must also address the issue of shell corporations and “other professional attempts to hide identity in advert purchasing”.

How the government will respond to the committee’s laundry list of recommendations for cleaning up online political advertising remains to be seen. The issue of Kremlin-backed disinformation campaigns was at least raised publicly by the prime minister last year, though Theresa May has been rather quieter on revelations about EU referendum-related data misuse and election law breaches.

While the committee uses the term “tech companies” throughout its report to refer to multiple companies, Facebook specifically comes in for some excoriating criticism, with the committee accusing the company of misleading by omission and actively seeking to impede the progress of the inquiry.

It also reiterates its call — for something like the fifth time at this point — for founder Mark Zuckerberg to give evidence. Facebook has provided several witnesses to the committee, including its CTO, but Zuckerberg has declined its requests he appear, even via video link. (And even though he did find time for a couple of hours in front of the EU parliament back in May.)

The committee writes:

We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry, unless it had been expressly asked for. It provided witnesses who have been unwilling or unable to give full answers to the Committee’s questions. This is the reason why the Committee has continued to press for Mark Zuckerberg to appear as a witness as, by his own admission, he is the person who decides what happens at Facebook.

Tech companies are not passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. This manipulation of the sites by tech companies must be made more transparent. Facebook has all of the information. Those outside of the company have none of it, unless Facebook chooses to release it. Facebook was reluctant to share information with the Committee, which does not bode well for future transparency. We ask, once more, for Mr Zuckerberg to come to the Committee to answer the many outstanding questions to which Facebook has not responded adequately, to date.

The committee suggests that the UK’s Defamation Act 2013 means Facebook and other social media companies have a duty to publish and to follow transparent rules — arguing that the Act has provisions which state that “if a user is defamed on social media, and the offending individual cannot be identified, the liability rests with the platform”.

“We urge the government to examine the effectiveness of these provisions, and to monitor tech companies to ensure they are complying with court orders in the UK and to provide details of the source of disputed content — including advertisements — to ensure that they are operating in accordance with the law, or any future industry Codes of Ethics or Conduct. Tech companies also have a responsibility to ensure full disclosure of the source of any political advertising they carry,” it adds.

The committee is especially damning of Facebook’s actions in Burma (as indeed many others have also been), condemning the company’s failure to prevent its platform from being used to spread hate and fuel violence against the Rohingya ethnic minority — and citing the UN’s similarly damning assessment.

“Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed. Time and again we heard from Facebook about mistakes being made and then (sometimes) rectified, rather than designing the product ethically from the beginning of the process. Facebook has a ‘Code of Conduct’, which highlights the principles by which Facebook staff carry out their work, and states that employees are expected to “act lawfully, honestly, ethically, and in the best interests of the company while performing duties on behalf of Facebook”. Facebook has fallen well below this standard in Burma,” it argues.

The committee also directly blames Facebook’s actions for undermining the UK’s international aid efforts in the country — writing:

The United Nations has named Facebook as being responsible for inciting hatred against the Rohingya Muslim minority in Burma, through its ‘Free Basics’ service. It provides people free mobile phone access without data charges, but is also responsible for the spread of disinformation and propaganda. The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority.

The hate speech against the Rohingya—built up on Facebook, much of which is disseminated through fake accounts—and subsequent ethnic cleansing, has potentially resulted in the success of DFID’s [the UK Department for International Development] aid programmes being greatly reduced, based on the qualifications they set for success. The activity of Facebook undermines international aid to Burma, including the UK Government’s work. Facebook is releasing a product that is dangerous to consumers and deeply unethical. We urge the Government to demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity. This is a further example of Facebook failing to take responsibility for the misuse of its platform.

We reached out to Facebook for a response to the committee’s report, and in an email statement — attributed to Richard Allan, VP policy — the company told us:

The Committee has raised some important issues and we were pleased to be able to contribute to their work.

We share their goal of ensuring that political advertising is fair and transparent and agree that electoral rule changes are needed. We have already made all advertising on Facebook more transparent. We provide more information on the Pages behind any ad and you can now see all the ads any Facebook Page is running, even if they are not targeted at you. We are working on ways to authenticate and label political ads in the UK and create an archive of those ads that anyone can search. We will work closely with the UK Government and Electoral Commission as we develop these new transparency tools.

We’re also investing heavily in both people and technology to keep bad content off our services. We took down 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018 — much of it before anyone needed to report this to Facebook. By using technology like machine learning, artificial intelligence and computer vision, we can detect more bad content and take action more quickly.

The statement makes no mention of Burma. Nor indeed of the committee’s suggestion that social media firms should be taxed to pay for defending democracy and civil society against the damaging excesses of their tools.

On Thursday, rolling out its latest ads transparency features, Facebook announced that users could now see the ads a Page is running across Facebook, Instagram, Messenger and its partner network “even if those ads aren’t shown to you”.

To do so, users have to log into Facebook, visit any Page and select “Info and Ads”. “You’ll see ad creative and copy, and you can flag anything suspicious by clicking on ‘Report Ad’,” it added.

It also flagged a ‘more Page information’ feature that users can use to get more details about a Page such as recent name changes and the date it was created.

“The vast majority of ads on Facebook are run by legitimate organizations — whether it’s a small business looking for new customers, an advocacy group raising money for their cause, or a politician running for office. But we’ve seen that bad actors can misuse our products, too,” Facebook wrote, adding that the features being announced “are just the start” of its efforts “to improve” and “root out abuse”.

Brexit drama

The committee’s interim report was pushed out at the weekend ahead of the original embargo as a result of yet more Brexiteer-induced drama — after the campaign director of the UK’s official Brexit supporting ‘Vote Leave’ campaign, Dominic Cummings, deliberately broke the embargo by publishing the report on his blog in order to spin his own response before the report had been widely covered by the media.

Last week the Electoral Commission published its own report following a multi-month investigation into Brexit campaign spending. The oversight body concluded that Vote Leave broke UK electoral law by massively overspending via a joint working arrangement with another Brexit supporting campaign (BeLeave) — an arrangement via which almost half a million pounds’ worth of additional targeted Facebook ads were co-ordinated by Vote Leave in the final days of the campaign, when it had already reached its spending limit. (Facebook finally released to the committee some of the 2016 Brexit campaign ads that had been microtargeted at UK voters via its platform, and the committee published these ads last week.) Many of Vote Leave’s (up to that point ‘dark’) adverts show the official Brexit campaign generating fake news of its own, with ads that, for example, claim Turkey is on the cusp of joining the EU and flooding the UK with millions of migrants; or spread the widely debunked claim that the UK would be able to spend £350M more per week on the NHS if it left the EU.

In general, dog whistle racism appears to have been Vote Leave’s preferred ‘persuasion’ tactic of microtargeted ad choice — and thanks to Facebook’s ad platform, no one other than each ad’s chosen recipients would have been likely to see the messages.

Cummings also comes in for a major roasting in the committee’s report after his failure to appear before it to answer questions, despite multiple summons (including an unprecedented House of Commons motion ordering him to appear — which he nonetheless also ignored).

“Mr Cummings’ contemptuous behaviour is unprecedented in the history of this Committee’s inquiries and underlines concerns about the difficulties of enforcing co-operation with Parliamentary scrutiny in the modern age,” it writes, adding: “We will return to this issue in our Report in the autumn, and believe it to be an urgent matter for consideration by the Privileges Committee and by Parliament as a whole.”

On his blog, Cummings claims the committee offered him dates they knew he couldn’t do; slams its investigation as ‘fake news’; moans copiously that the committee is made up of Remain supporting MPs; and argues that the investigation should be under oath, since his major defense seems to be that key campaign whistleblowers are lying. That’s despite ex-Cambridge Analytica employee Chris Wylie and ex-BeLeave treasurer Shahmir Sanni having provided copious amounts of documentary evidence to back up their claims; evidence which both the Electoral Commission and the UK’s data watchdog, the ICO, have found convincing enough to announce some of the largest fines they can issue. (In the latter case, the ICO announced its intention to fine Facebook the maximum penalty possible under the prior UK data protection regime for failing to protect users’ information. The data watchdog is continuing to investigate multiple aspects of what is a hugely complex (technically and politically) online ad ops story, and earlier this month commissioner Elizabeth Denham called for an ‘ethical pause’ on the use of online ad platforms for microtargeting voters with political messages, arguing — like the DCMS committee — that there are very real and very stark risks for democratic processes.)

There’s much, much more self-piteous whining on Cummings’ blog for anyone who wants to make themselves queasy reading it. But bear in mind the Electoral Commission’s withering criticism of the Vote Leave campaign specifically — not so much for failing to co-operate with its investigation as for intentionally obstructing it.

Zuckerberg again snubs UK parliament over call to testify

Facebook has once again eschewed a direct request from the UK parliament for its CEO, Mark Zuckerberg, to testify to a committee investigating online disinformation — without rustling up so much as a fig-leaf-sized excuse to explain why the founder of one of the world’s most used technology platforms can’t squeeze a video call into his busy schedule and spare UK politicians’ blushes.

Which tells you pretty much all you need to know about where the balance of power lies in the global game of (essentially unregulated) U.S. tech platform giants vs (essentially powerless) foreign political jurisdictions.

At the end of an 18-page letter sent to the DCMS committee yesterday — in which Facebook’s UK head of public policy, Rebecca Stimson, provides a point-by-point response to the almost 40 questions the committee said had not been adequately addressed by CTO Mike Schroepfer in a prior hearing last month — Facebook professes itself disappointed that the CTO’s grilling was not deemed sufficient by the committee.

“While Mark Zuckerberg has no plans to meet with the Committee or travel to the UK at the present time, we fully recognize the seriousness of these issues and remain committed to providing any additional information required for their enquiry into fake news,” she adds.

So, in other words, Facebook has served up another big fat ‘no’ to the renewed request for Zuckerberg to testify — after also denying a request for him to appear before it in March, when it instead sent Schroepfer to claim to be unable to answer MPs’ questions.

At the start of this month committee chair Damian Collins wrote to Facebook saying he hoped Zuckerberg would voluntarily agree to answer questions. But the MP also took the unprecedented step of warning that if the Facebook founder did not do so the committee would issue a formal summons for him to appear the next time Zuckerberg sets foot in the UK.

Hence, presumably, that addendum line in Stimson’s letter — saying the Facebook CEO has no plans to travel to the UK “at the present time”.

The committee of course has zero powers to compel testimony from a non-UK national who is resident outside the UK — even though the platform he controls does plenty of business within the UK.

Last month Schroepfer faced five hours of close and at times angry questioning from the committee, with members accusing his employer of lacking integrity and displaying a pattern of intentionally deceptive behavior.

The committee has been specifically asking Facebook to provide it with information related to the UK’s 2016 EU referendum for months — and complaining the company has narrowly interpreted its requests to sidestep a thorough investigation.

More recently research carried out by the Tow Center unearthed Russian-bought UK targeted immigration ads relevant to the Brexit referendum among a cache Facebook had provided to Congress — which the company had not disclosed to the UK committee.

At the end of the CTO’s evidence session last month the committee expressed immediate dissatisfaction — claiming there were almost 40 outstanding questions the CTO had failed to answer, and calling again for Zuckerberg to testify.

It possibly overplayed its hand slightly, though, giving Facebook the chance to serve up a detailed (if not entirely comprehensive) point-by-point reply now — and use that to sidestep the latest request for its CEO to testify.

Still, Collins expressed fresh dissatisfaction today, saying Facebook’s answers “do not fully answer each point with sufficient detail or data evidence”, and adding the committee would be writing to the company in the coming days to ask it to address “significant gaps” in its answers. So this game of political question and self-serving answer is set to continue.

In a statement, Collins also criticized Facebook’s response at length, writing:

It is disappointing that a company with the resources of Facebook chooses not to provide a sufficient level of detail and transparency on various points including on Cambridge Analytica, dark ads, Facebook Connect, the amount spent by Russia on UK ads on the platform, data collection across the web, budgets for investigations, and that shows general discrepancies between Schroepfer and Zuckerberg’s respective testimonies. Given that these were follow up questions to questions Mr Schroepfer previously failed to answer, we expected both detail and data, and in a number of cases got excuses.

If Mark Zuckerberg truly recognises the ‘seriousness’ of these issues as they say they do, we would expect that he would want to appear in front of the Committee and answer questions that are of concern not only to Parliament, but Facebook’s tens of millions of users in this country. Although Facebook says Mr Zuckerberg has no plans to travel to the UK, we would also be open to taking his evidence by video link, if that would be the only way to do this during the period of our inquiry.

For too long these companies have gone unchallenged in their business practices, and only under public pressure from this Committee and others have they begun to fully cooperate with our requests. We plan to write to Facebook in the coming days with further follow up questions.

In terms of the answers Facebook provides to the committee in its letter (plus some supporting documents related to the Cambridge Analytica data misuse scandal) there’s certainly plenty of padding on show. And deploying self-serving PR to fuzz the signal is a strategy Facebook has mastered in recent more challenging political times (just look at its ‘Hard Questions’ series to see this tactic at work).

At times Facebook’s response to political attacks certainly looks like an attempt to drown out critical points by deploying self-serving but selective data points — so, for instance, it talks at length in the letter about the work it’s doing in Myanmar, where its platform has been accused by the UN of accelerating ethnic violence as a result of systematic content moderation failures, but declines to state how many fake accounts it’s identified and removed in the market; nor will it disclose how much revenue it generates from the market.

Asked by the committee for the average time taken to respond to content flagged for review in the region, Facebook also responds in the letter with the vaguest of generalized global data points — saying: “The vast majority of the content reported to us is reviewed within 24 hours.” Nor does it specify whether that global average refers to human review — or just an AI parsing the content.

Another of the committee’s questions is: ‘Who was the person at Facebook responsible for the decision not to tell users affected in 2015 by the Cambridge Analytica data misuse scandal?’ On this Facebook provides three full paragraphs of response but no direct answer specifying who decided not to tell users at that point — so either the company is concealing the identity of the person responsible, or there simply was no one in charge of that kind of consideration at the time, because user privacy was so low a priority that Facebook had no responsibility structures in place to enforce it.

Another question — ‘who at Facebook heads up the investigation into Cambridge Analytica?’ — does get a straight and short response, with Facebook saying its legal team, led by general counsel Colin Stretch, is the lead there.

It also claims that Zuckerberg himself only became aware of the allegations that Cambridge Analytica may not have deleted Facebook user data in March 2018, following press reports.

Asked what data it holds on dark ads, Facebook provides some information but it’s also being a bit vague here too — saying: “In general, Facebook maintains for paid advertisers data such as name, address and banking details”, and: “We also maintain information about advertiser’s accounts on the Facebook platform and information about their ad campaigns (most advertising content, run dates, spend, etc).”

It also confirms it can retain the aforementioned data even if a page has been deleted — responding to another of the committee’s questions, about how the company would be able to audit advertisers who set up pages to target political ads during a campaign and immediately deleted their presence once the election was over.

Though, given Facebook says it only “generally” retains data, we must assume there are instances where it might not — leaving the purveyors of dark ads essentially untraceable via its platform, unless it puts in place a more robust and comprehensive advertiser audit framework.

The committee also asked Facebook’s CTO whether it retains money from fraudulent ads running on its platform, such as the ads at the center of a defamation lawsuit by consumer finance personality Martin Lewis. On this Facebook says it does not “generally” return money to an advertiser when it discovers a policy violation — claiming this “would seem perverse” given the attempt to deceive users. Instead it says it makes “investments in areas to improve security on Facebook and beyond”.

Asked by the committee for copies of the Brexit ads that a Cambridge Analytica linked data company, AIQ, ran on its platform, Facebook says it’s in the process of compiling the content and notifying the advertisers that the committee wants to see the content.

Though it does break out AIQ ad spending related to different vote leave campaigns, and says the individual campaigns would have had to grant the Canadian company admin access to their pages in order for AIQ to run ads on their behalf.

The full letter containing all Facebook’s responses can be read here.

UK parliament’s call for Zuckerberg to testify goes next level

The UK parliament has issued an impressive ultimatum to Facebook in a last-ditch attempt to get Mark Zuckerberg to take its questions: Come and give evidence voluntarily or next time you fly to the UK you’ll get a formal summons to appear.

“Following reports that he will be giving evidence to the European Parliament in May, we would like Mr Zuckerberg to come to London during his European trip. We would like the session here to take place by 24 May,” the committee writes in its latest letter to the company, signed by its chair, Conservative MP Damian Collins.

“It is worth noting that, while Mr Zuckerberg does not normally come under the jurisdiction of the UK Parliament, he will do so the next time he enters the country,” he adds. “We hope that he will respond positively to our request, but if not the Committee will resolve to issue a formal summons for him to appear when he is next in the UK.”

Facebook has repeatedly ignored the DCMS committee’s requests that its CEO and founder appear before it — preferring to send various minions to answer questions related to its enquiry into online disinformation and the role of social media in politics and democracy.

The most recent Zuckerberg alternative to appear before it was also the most senior: Facebook’s CTO, Mike Schroepfer, who claimed he had personally volunteered to make the trip to London to give evidence.

However, for all Schroepfer’s sweating toil to try to stand in for the company’s chief exec, his answers failed to impress UK parliamentarians. And immediately following the hearing the committee issued a press release repeating its call for Zuckerberg to testify, noting that Schroepfer had failed to provide adequate answers to as many as 40 of its questions.

Schroepfer did sit through around five hours of grilling on a wide range of topics with the Cambridge Analytica data misuse scandal front and center — the story having morphed into a major global scandal for the company after fresh revelations were published by the Guardian in March (although the newspaper actually published its first story about the misuse of Facebook user data all the way back in December 2015) — though in last week’s hearing Schroepfer frequently fell back on claiming he didn’t know the answer and would have to “follow up”.

Yet the committee has been asking Facebook for straight answers for months. So you can see why it’s really mad now.

We reached out to Facebook to ask whether its CEO will now agree to personally testify in front of the committee by May 24, per its request, but the company declined to provide a public statement on the issue.

A company spokesperson did say it would be following up with the committee to answer any outstanding questions it had after Schroepfer’s session.

It’s fair to say Facebook has handled this issue exceptionally badly — leaving Collins to express public frustration about the lack of co-operation when, for example, he asked it for help and information related to the UK’s Brexit referendum — and turning what could have been a fairly easy-to-manage process into a major media circus-cum-PR nightmare.

Last week Schroepfer was on the sharp end of lots of awkward questions from visibly outraged committee members, with Collins pointing to what he dubbed a “pattern of behavior” by Facebook that he said suggested an “unwillingness to engage, and a desire to hold onto information and not disclose it”.

Committee members also interrogated Schroepfer about why another Facebook employee who appeared before it in February had not disclosed an existing agreement between Facebook and Cambridge Analytica.

“I remain to be convinced that your company has integrity,” he was told bluntly at one point during the hearing.

If Zuckerberg does agree to testify he’ll be in for an even bumpier ride. And, well, if he doesn’t it looks pretty clear the Facebook CEO won’t be making any personal trips to the UK for a while.

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact “thousands” of associated fake ads being run on Facebook as a click-driver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see.

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested  — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook has been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix, because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.

What we learned from Facebook’s latest data misuse grilling

Facebook’s CTO Mike Schroepfer has just undergone almost five hours of often forensic and frequently awkward questions from members of a UK parliament committee that’s investigating online disinformation, and whose members have been further fired up by misinformation they claim Facebook gave it.

The veteran senior exec, who has clocked up a decade at the company, including as its VP of engineering, is the latest stand-in for CEO Mark Zuckerberg, who keeps eschewing repeat requests to appear.

The DCMS committee’s enquiry began last year as a probe into ‘fake news’ but has snowballed in scope as the scale of concern around political disinformation has also mounted — including, most recently, fresh information being exposed by journalists about the scale of the misuse of Facebook data for political targeting purposes.

During today’s session committee chair Damian Collins again made a direct appeal for Zuckerberg to testify, pausing the flow of questions momentarily to cite news reports suggesting the Facebook founder has agreed to fly to Brussels to testify before European Union lawmakers in relation to the Cambridge Analytica Facebook data misuse scandal.

“We’ll certainly be renewing our request for him to give evidence,” said Collins. “We still do need the opportunity to put some of these questions to him.”

Committee members displayed visible outrage during the session, accusing Facebook of concealing the truth or at very least concealing evidence from it at a prior hearing that took place in Washington in February — when the company sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field questions.

During questioning Milner and Bickert failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015 — after the company had learned (via an earlier Guardian article) that Facebook user data had been passed to the company by the developer of an app running on its platform.

Milner also told the committee that Cambridge Analytica could not have any Facebook data — yet last month the company admitted data on up to 87 million of its users had indeed been passed to the firm.

Schroepfer said he wasn’t sure whether Milner had been “specifically informed” about the agreement Facebook already had with Cambridge Analytica — adding: “I’m guessing he didn’t know”. He also claimed he had only himself become aware of it “within the last month”.

“Who knows? Who knows about what the position was with Cambridge Analytica in February of this year? Who was in charge of this?” pressed one committee member.

“I don’t know all of the names of the people who knew that specific information at the time,” responded Schroepfer.

“We are a parliamentary committee. We went to Washington for evidence and we raised the issue of Cambridge Analytica. And Facebook concealed evidence to us as an organization on that day. Isn’t that the truth?” rejoined the committee member, pushing past Schroepfer’s claim to be “doing my best” to provide it with information.

A pattern of evasive behavior

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?”

“It stops with Mark,” replied Schroepfer — leading to a quick fire exchange where he was pressed about (and avoided answering) what Zuckerberg knew and why the Facebook founder wouldn’t come and answer the committee’s questions himself.

“What we want is the truth. We didn’t get the truth in February… Mr Schroepfer I remain to be convinced that your company has integrity,” was the pointed conclusion after a lengthy exchange on this.

“What’s been frustrating for us in this enquiry is a pattern of behavior from the company — an unwillingness to engage, and a desire to hold onto information and not disclose it,” said Collins, returning to the theme at another stage of the hearing — and also accusing Facebook of not providing it with “straight answers” in Washington.

“We wouldn’t be having this discussion now if this information hadn’t been brought into the light by investigative journalists,” he continued. “And Facebook even tried to stop that happening as well [referring to a threat by the company to sue the Guardian ahead of publication of its Cambridge Analytica exposé]… It’s a pattern of behavior, of seeking to pretend this isn’t happening.”

The committee expressed further dissatisfaction with Facebook immediately following the session, emphasizing that Schroepfer had “failed to answer fully on nearly 40 separate points”.

“Mr Schroepfer, Mark Zuckerberg’s right hand man whom we were assured could represent his views, today failed to answer many specific and detailed questions about Facebook’s business practices,” said Collins in a statement after the hearing.

“We will be asking him to respond in writing to the committee on these points; however, we are mindful that it took a global reputational crisis and three months for the company to follow up on questions we put to them in Washington D.C. on February 8.

“We believe that, given the large number of outstanding questions for Facebook to answer, Mark Zuckerberg should still appear in front of the Committee… and will request that he appears in front of the DCMS Committee before May 24.”

We reached out to Facebook for comment — but at the time of writing the company had not responded.

Palantir’s data use under review

Schroepfer was questioned on a wide range of topics during today’s session. And while he was fuzzy on many details, giving lots of partial answers and promises to “follow up”, one thing he did confirm was that Facebook board member Peter Thiel’s secretive big data analytics firm, Palantir, is one of the companies Facebook is investigating as part of a historical audit of app developers’ use of its platform.

Collins asked whether concerns had ever been raised about Palantir’s activity, and about whether it had gained improper access to Facebook user data.

“I think we are looking at lots of different things now. Many people have raised that concern — and since it’s in the public discourse it’s obviously something we’re looking into,” said Schroepfer.

“But it’s part of the review work that Facebook’s doing?” pressed Collins.

“Correct,” he responded.

The historical app audit was announced in the wake of last month’s revelations about how much Facebook data Cambridge Analytica was given by app developer (and Cambridge University academic), Dr Aleksandr Kogan — in what the company couched as a “breach of trust”.

However Kogan, who testified to the committee earlier this week, argues he was just using Facebook’s platform as it was architected and intended to be used — going so far as to claim its developer terms are “not legally valid”. (“For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy,” was Kogan’s construction, earning him a quip from a committee member that he “should be a professor of semantics”.)

Schroepfer said he disagreed with Kogan’s assessment that Facebook didn’t have a policy, saying the goal of the platform has been to foster social experiences — and that “those same tools, because they’re easy and great for the consumer, can go wrong”. So he did at least indirectly confirm Kogan’s general point that Facebook’s developer and user terms are at loggerheads.

“This is why we have gone through several iterations of the platform — where we have effectively locked down parts of the platform,” continued Schroepfer. “Which increases friction and makes it less easy for the consumer to use these things but does safeguard that data more. And been a lot more proactive in the review and enforcement of these things. So this wasn’t a lack of care… but I’ll tell you that our primary product is designed to help people share safely with a limited audience.

“If you want to say it to the world you can publish it on a blog or on Twitter. If you want to share it with your friends only, that’s the primary thing Facebook does. We violate that trust — and that data goes somewhere else — we’re sort of violating the core principles of our product. And that’s a big problem. And this is why I wanted to come to you personally today to talk about this because this is a serious issue.”

“You’re not just a neutral platform — you are players”

The same committee member, Paul Farrelly, who earlier pressed Kogan about why he hadn’t bothered to find out which political candidates stood to be the beneficiary of his data harvesting and processing activities for Cambridge Analytica, put it to Schroepfer that Facebook’s own actions in how it manages its business activities — and specifically because it embeds its own staff with political campaigns to help them use its tools — amounts to the company being “Dr Kogan writ large”.

“You’re not just a neutral platform — you are players,” said Farrelly.

“The clear thing is we don’t have an opinion on the outcome of these elections. That is not what we are trying to do. We are trying to offer services to any customer of ours who would like to know how to use our products better,” Schroepfer responded. “We have never turned away a political party because we didn’t want to help them win an election.

“We believe in strong open political discourse and what we’re trying to do is make sure that people can get their messages across.”

However in another exchange the Facebook exec appeared not to be aware of a basic tenet of UK election law — which prohibits campaign spending by foreign entities.

“How many UK Facebook users and Instagram users were contacted in the UK referendum by foreign, non-UK entities?” asked committee member Julie Elliott.

“We would have to understand and do the analysis of who — of all the ads run in that campaign — where is the location, the source of all of the different advertisers,” said Schroepfer, tailing off with a “so…” and without providing a figure. 

“But do you have that information?” pressed Elliott.

“I don’t have it on the top of my head. I can see if we can get you some more of it,” he responded.

“Our elections are very heavily regulated, and income or monies from other countries can’t be spent in our elections in any way shape or form,” she continued. “So I would have thought that you would have that information. Because your company will be aware of what our electoral law is.”

“Again I don’t have that information on me,” Schroepfer said — repeating the line that Facebook would “follow up with the relevant information”.

The Facebook CTO was also asked if the company could provide it with an archive of adverts that were run on its platform around the time of the Brexit referendum by Aggregate IQ — a Canadian data company that’s been linked to Cambridge Analytica/SCL, and which received £3.5M from leave campaign groups in the run up to the 2016 referendum (and has also been described by leave campaigners as instrumental to securing their win). It’s also under joint investigation by Canadian data watchdogs, along with Facebook.

In written evidence provided to the committee today Facebook says it has been helping ongoing investigations into “the Cambridge Analytica issue” that are being undertaken by the UK’s Electoral Commission and its data protection watchdog, the ICO. Here it writes that its records show AIQ spent “approximately $2M USD on ads from pages that appear to be associated with the 2016 Referendum”.

Schroepfer’s responses on several requests by the committee for historical samples of the referendum ads AIQ had run amounted to ‘we’ll see what we can do’ — with the exec cautioning that he wasn’t entirely sure how much data might have been retained.

“I think specifically in Aggregate IQ and Cambridge Analytica related to the UK referendum I believe we are producing more extensive information for both the Electoral Commission and the Information Commissioner,” he said at one point, adding it would also provide the committee with the same information if it’s legally able to. “I think we are trying to do — give them all the data we have on the ads and what they spent and what they’re like.”

Collins asked what would happen if an organization or an individual had used a Facebook ad account to target dark ads during the referendum and then taken down the page as soon as the campaign was over. “How would you be able to identify that activity had ever taken place?” he asked.

“I do believe, uh, we have — I would have to confirm, but there is a possibility that we have a separate system — a log of the ads that were run,” said Schroepfer, displaying some of the fuzziness that irritated the committee. “I know we would have the page itself if the page was still active. If they’d run prior campaigns and deleted the page we may retain some information about those ads — I don’t know the specifics, for example how detailed that information is, and how long retention is for that particular set of data.”

Dark ads a “major threat to democracy”

Collins pointed out that a big part of UK (and indeed US) election law relates to “declaration of spend”, before making the conjoined point that if someone is “hiding that spend” — i.e. by placing dark ads that only the recipient sees, and which can be taken offline immediately after the campaign — it smells like a major risk to the democratic process.

“If no one’s got the ability to audit that, that is a major threat to democracy,” warned Collins. “And would be a license for a major breach of election law.”

“Okay,” responded Schroepfer as if the risk had never crossed his mind before. “We can come back on the details on that.”

On the wider app audit that Facebook has committed to carrying out in the wake of the scandal, Schroepfer was also asked how it can audit apps or entities that are no longer on the platform — and he admitted this is “a challenge” and said Facebook won’t have “perfect information or detail”.

“This is going to be a challenge again because we’re dealing with historic events so we’re not going to have perfect information or detail on any of these things,” he said. “I think where we start is — it very well may be that this company is defunct but we can look at how they used the platform. Maybe there’s two people who used the app and they asked for relatively innocuous data — so the chance that that is a big issue is a lot lower than an app that was widely in circulation. So I think we can at least look at that sort of information. And try to chase down the trail.

“If we have concerns about it even if the company is defunct it’s possible we can find former employees of the company who might have more information about it. This starts with trying to identify where the issues might be and then run the trail down as much as we can. As you highlight, though, there are going to be limits to what we can find. But our goal is to understand this as best as we can.”

The committee also wanted to know if Facebook had set a deadline for completing the audit — but Schroepfer would only say it’s going “as fast as we can”.

He claimed Facebook is sharing “a tremendous amount of information” with the UK’s data protection watchdog — as it continues its (now) year-long investigation into the use of digital data for political purposes.

“I would guess we’re sharing information on this too,” he said in reference to app audit data. “I know that I personally shared a bunch of details on a variety of things we’re doing. And same with the Electoral Commission [which is investigating whether use of digital data and social media platforms broke campaign spending rules].”

In Schroepfer’s written evidence to the committee Facebook says it has unearthed some suggestive links between Cambridge Analytica/SCL and Aggregate IQ: “In the course of our ongoing review, we also found certain billing and administration connections between SCL/Cambridge Analytica and AIQ”, it notes.

Both entities continue to deny any link exists between them, claiming they are entirely separate entities — though the former Cambridge Analytica employee turned whistleblower, Chris Wylie, has described AIQ as essentially the Canadian arm of SCL.

“The collaboration we saw was some billing and administrative contacts between the two of them, so you’d see similar people show up in each of the accounts,” said Schroepfer, when asked for more detail about what it had found, before declining to say anything else in a public setting on account of ongoing investigations — despite the committee pointing out other witnesses it has heard from have not held back on that front.

Another piece of information Facebook has included in the written evidence is the claim that it does not believe AIQ used Facebook data obtained via Kogan’s apps for targeting referendum ads — saying it used email address uploads for “many” of its ad campaigns during the referendum.

“The data gathered through the TIYDL [Kogan’s thisisyourdigitallife] app did not include the email addresses of app installers or their friends. This means that AIQ could not have obtained these email addresses from the data TIYDL gathered from Facebook,” Facebook asserts.

Schroepfer was questioned on this during the session and said that while there was some overlap in terms of individuals who had downloaded Kogan’s app and who had been in the audiences targeted by AIQ this was only 3-4% — which he claimed was statistically insignificant, based on comparing with other Facebook apps of similar popularity to Kogan’s.

“AIQ must have obtained these email addresses for British voters targeted in these campaigns from a different source,” is the company’s conclusion.

“We are investigating Mr Chancellor’s role right now”

The committee also asked several questions about Joseph Chancellor, the co-director of Kogan’s app company, GSR, who became an employee of Facebook in 2015 after he had left GSR. Its questions included what Chancellor’s exact role at Facebook is and why Kogan has been heavily criticized by the company yet his GSR co-director apparently remains gainfully employed by it.

Schroepfer initially claimed Facebook hadn’t known Chancellor was a director of GSR prior to employing him, in November 2015 — saying it had only become aware of that specific piece of his employment history in 2017.

But after a break in the hearing he ‘clarified’ this answer — adding: “In the recruiting process, people hiring him probably saw a CV and may have known he was part of GSR. Had someone known that — had we connected all the dots to when this thing happened with Mr Kogan, later on had he been mentioned in the documents that we signed with the Kogan party — no. Is it possible that someone knew about this and the right other people in the organization didn’t know about it, that is possible.”

A committee member then pressed him further. “We have evidence that shows that Facebook knew in November 2016 that Joseph Chancellor had formed the company, GSR, with Aleksandr Kogan which obviously then went on to provide the information to Cambridge Analytica. I’m very unclear as to why Facebook have taken such a very direct and critical line… with Kogan but have completely ignored Joseph Chancellor.”

At that point Schroepfer revealed Facebook is currently investigating Chancellor as a result of the data scandal.

“I understand your concern. We are investigating Mr Chancellor’s role right now,” he said. “There’s an employment investigation going on right now.”

In terms of the work Chancellor is doing for Facebook, Schroepfer said he thought Chancellor had worked on VR for the company — but emphasized he has not been involved with “the platform”.

The issue of the NDA Kogan claimed Facebook had made him sign also came up. But Schroepfer countered that this was not an NDA, just a “standard confidentiality clause” in the agreement certifying Kogan had deleted the Facebook data and its derivatives.

“We want him to be able to be open. We’re waiving any confidentiality there if that’s not clear from a legal standpoint,” he said later, clarifying it does not consider Kogan legally gagged.

Schroepfer also confirmed this agreement was signed with Kogan in June 2016, and said the “core commitments” were to confirm the deletion of data by himself and three others Kogan had passed it to: former Cambridge Analytica CEO Alexander Nix; Wylie, for a company he had set up after leaving Cambridge Analytica; and Dr Michael Inzlicht from the Toronto Laboratory for Social Neuroscience (Kogan mentioned to the committee earlier this week he had also passed some of the Facebook data to a fellow academic in Canada).

Asked whether any payments had been made between Facebook and Kogan as part of the contract, Schroepfer said: “I believe there was no payment involved in this at all.”

‘Radical’ transparency is its fix for regulation

Other issues raised by the committee included why Facebook does not provide an overall control or opt-out for political advertising; why it does not offer a separate feed for ads but chooses to embed them into the Newsfeed; how and why it gathers data on non-users; the addictiveness engineered into its product; what it does about fake accounts; why it hasn’t recruited more humans to help with the “challenges” of managing content on a platform that’s scaled so large; and aspects of its approach to GDPR compliance.

On the latter, Schroepfer was queried specifically on why Facebook had decided to shift the data controller of ~1.5BN non-EU international users from Ireland to the US. On this he claimed the GDPR’s stipulation that there be a “lead regulator” conflicts with Facebook’s desire to be more responsive to local concerns in its non-EU international markets.

“US law does not have a notion of a lead regulator so the US does not become the lead regulator — it opens up the opportunity for us to have local markets have them, regions, be the lead and final regulator for the users in that area,” he claimed.

Asked whether he thinks the time has come for “robust regulation and empowerment of consumers over their information”, Schroepfer stopped short of calling for new laws to control data flowing over consumer platforms. “Whether, through regulation or not, making sure consumers have visibility, control and can access and take their information with you, I agree 100%,” he said, endorsing the goals while committing only to further self-regulation rather than new legislation.

“In terms of regulation there are multiple laws and regulatory bodies that we are under the guise of right now. Obviously the GDPR is coming into effect just next month. We have been regulated in Europe by the Irish DPC who’s done extensive audits of our systems over multiple years. In the US we’re regulated by the FTC, Privacy Commissioner in Canada and others. So I think the question isn’t ‘if’, the question is honestly how do we ensure the regulations and the practices achieve the goals you want. Which is consumers have safety, they have transparency, they understand how this stuff works, and they have control.

“And the details of implementing that is where all the really hard work is.”

His stock response to the committee’s concerns about divisive political ads was that Facebook believes “radical transparency” is the fix — also dropping one tidbit of extra news on that front in his written testimony by saying Facebook will roll out an authentication process for political advertisers in the UK in time for the local elections in May 2019.

Ads will also be required to be labeled as “political” and disclose who paid for the ad. And there will be a searchable archive — available for seven years — which will include the ads themselves plus some associated data (such as how many times an ad may have been seen, how much money was spent, and the kinds of people who saw it).

Collins asked Schroepfer whether Facebook’s ad transparency measures will also include “targeting data” — i.e. “will I understand not just who the advertiser was and what other adverts they’d run but why they’d chosen to advertise to me”?

“I believe among the things you’ll see is spend (how much was spent on this ad); you will see who they were trying to advertise to (what is the audience they were trying to reach); and I believe you will also be able to see some basic information on how much it was viewed,” Schroepfer replied — avoiding yet another straight answer.

Facebook restricts APIs, axes old Instagram platform amidst scandals

Facebook is entering a tough transition period where it won’t take chances around data privacy in the wake of the Cambridge Analytica fiasco, CTO Mike Schroepfer tells TechCrunch. That’s why it’s moving up the shutdown of part of the Instagram API. It’s significantly limiting data available from, or requiring approval for access to, Facebook’s Events, Groups, and Pages APIs, plus Facebook Login. Facebook is also shutting down search by email or user name and changing its account recovery system after discovering malicious actors were using these to scrape people’s data. “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way,” Schroepfer writes.

Instagram will immediately shut down part of its old platform API that was scheduled for deprecation on July 31st. TechCrunch first reported that developers’ Instagram apps were breaking over the weekend due to a sudden reduction in the API call limit. Instagram refused to comment, leading to developer frustration as their apps that analyze people’s followers and help them grow their audiences stopped working.

Now an Instagram spokesperson tells TechCrunch that “Effective today, Instagram is speeding up the previously announced deprecation of the Instagram API Platform” as part of Facebook’s work to protect people’s data. The APIs for follower lists, relationships, and commenting on public content will cease to function immediately. The December 11th, 2018 deprecation of public content reading APIs and the 2020 deprecation of basic profile info APIs will happen as scheduled, but it’s implemented rate limit reductions on them now.

The announcements come alongside Facebook’s admission that up to 87 million users had their data improperly attained by Cambridge Analytica, up from early estimates of 50 million. These users will see a warning atop their News Feed about what happened, what they should do, and see surfaced options for removing other apps they gave permissions to. Facebook CEO Mark Zuckerberg plans to take questions about today’s announcements during a 1:30pm Pacific conference call.

Regarding the Facebook APIs, here’s the abbreviated version of the changes and what they mean:

  • Events API will require approval for use in the future, and developers will no longer be able to pull guest lists or post to the event wall. This could break some event discovery and ticketing apps.
  • Groups API will require approval from Facebook and a Group admin, and developers won’t be able to pull member lists or the names and photos associated with posts. This will limit Group management apps to reputable developers only, and keep a single non-admin member of a closed Group from giving developers access.
  • Pages API will only be available to developers providing “useful services”, and all future access will require Facebook approval. This could heavily restrict Page management apps for scheduling posts or moderating comments.
  • Facebook Login use will require a stricter review process and apps won’t be able to pull users’ personal information or activity, and they’ll lose access after three months of non-use. Most login apps should still work, as few actually needed your religious affiliation or video watching activity, though some professional apps might not function without your work history.
  • Search by phone number or email will no longer work, as Facebook says it discovered malicious actors were using them to pair one piece of information with someone’s identity, and cycling through IP addresses to avoid being blocked by Facebook. This could make it tougher for people in countries where many share the same names to find each other. Of all the changes, this may be the most damaging to the user experience.
  • Account Recovery will no longer immediately show the identity of a user when someone submits their email or phone number, to similarly prevent scraping. The feature will still work, but may be more confusing. Facebook believes all its users could have had their data scraped using the search and account recovery tricks.
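To make the Events API change concrete, here is a minimal, hypothetical sketch of the kind of Graph API request an event-discovery app might have been making before the lockdown. The version string, the `/attending` edge and the token handling are illustrative assumptions for this sketch, not confirmed details of the revised API.

```python
# Hypothetical sketch: the sort of Graph API call affected by the Events API
# change -- pulling an event's guest list. Endpoint path, API version and
# token are illustrative assumptions, not Facebook's confirmed API surface.

GRAPH_BASE = "https://graph.facebook.com/v2.12"

def event_guest_list_url(event_id: str, access_token: str) -> str:
    # Build the request URL a pre-change app might have used to read an
    # event's attendee list -- the kind of call that now needs approval.
    return f"{GRAPH_BASE}/{event_id}/attending?access_token={access_token}"

print(event_guest_list_url("1234567890", "EXAMPLE_TOKEN"))
```

Under the new rules, an unapproved app issuing a request like this would presumably receive a permissions error rather than the guest list.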

Schroepfer says that Facebook’s goal is to lock things down, review everything, and then figure out which developers deserve access and whether any of the functionality should be restored. The announcements raise questions about why it took the Cambridge Analytica scandal for Facebook to take data privacy seriously. You can expect the House Energy and Commerce Committee may ask Mark Zuckerberg that when he comes to testify on April 10th.


Facebook’s bold action to reform its APIs shows it’s willing to prioritize users above developers — at least once pushed by public backlash and internal strife. The platform whiplash could make developers apprehensive to build on Facebook in the future. But if Facebook didn’t shore up data privacy, it’d have no defense if future privacy abuses by outside developers came to light.

Schroepfer tells me Facebook is taking its responsibility super seriously and that the company is upset that it allowed this situation to happen. At least he seems earnest. Last week I wrote that Facebook needed to make a significant act of contrition and humility if it wanted to stabilize the sinking morale of its employees. These sweeping changes qualify, and could serve as a rallying call for Facebook’s team. Rather than sit with their heads in their hands, they have a roadmap of things to fix.

Still, given the public’s lack of understanding of APIs and platforms, it may be tough for Facebook to ever regain the trust broken by a month of savage headlines about the social network’s privacy negligence. Long-term, this souring of opinion could make users hesitant to share as much on Facebook. But given its role as a ubiquitous utility for login with your identity across the web, our compulsive desire to scroll its feed and check its notifications, and the lack of viable social networking alternatives, Facebook might see the backlash blow over eventually. Hopefully that won’t lead back to business as usual.

For more on the recent Facebook platform changes, read our other stories:

Zuckerberg refuses UK parliament summons over Fb data misuse

So much for ‘We are accountable’; Facebook founder and CEO Mark Zuckerberg has declined a summons from a UK parliamentary committee that’s investigating how social media data is being used — and, as recent revelations suggest, misused — for political ad targeting.

The DCMS committee wrote to Zuckerberg on March 20 — following newspaper reports based on interviews with a former employee of UK political consultancy, Cambridge Analytica, who revealed the company obtained Facebook data on 50 million users — calling for him to give oral evidence.

Facebook’s UK head of policy, Simon Milner, previously told the committee the consultancy did not have Facebook data. “They may have lots of data, but it will not be Facebook user data,” said Milner on February 8. “It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

In his letter to Zuckerberg, the chair of the committee Damian Collins accuses Facebook officials of having “consistently understated” the risk of user data being taken without users’ consent.

“It is now time that I hear from a senior Facebook executive with the sufficient authority to give an accurate account of this catastrophic failure of process,” Collins continues. “There is a strong public interest test regarding user protection.”

Regardless of rising pressure around what is now a major public scandal, Zuckerberg has declined the committee’s summons.

In a statement a Facebook spokesperson said it will be offering its CTO or chief product officer to answer questions.

“We have responded to Mr Collins and the DCMS and offered for two senior company representatives from our management team to meet with the Committee depending on timings most convenient for them. Mike Schroepfer is Chief Technology Officer and is responsible for Facebook’s technology including the company’s developer platform.  Chris Cox is Facebook’s Chief Product Officer and leads development of Facebook’s core products and features including News Feed.  Both Chris Cox and Mike Schroepfer report directly to Mark Zuckerberg and are among the longest serving senior representatives in Facebook’s 15 year history,” the spokesperson said.

Facebook declined to answer additional questions.

Collins made a statement before today’s evidence session of the DCMS committee, which is hearing from Cambridge Analytica whistleblower Chris Wylie — saying it would still like to hear from Zuckerberg, even if he isn’t able to provide evidence in person.

“We will seek to clarify with Facebook whether he is available to give evidence or not, because that wasn’t clear from our correspondence,” said Collins. “If he is available to give evidence, then we will be happy to do that either in person or by video link if that will be more convenient for him.”