Facebook really doesn’t want you to read these emails

Oh hey, y’all, it’s Friday! It’s August! Which means it’s a great day for Facebook to drop a little news it would prefer you don’t notice. News that you won’t find a link to on the homepage of Facebook’s Newsroom — which is replete with colorfully illustrated items it does want you to read (like the puffed up claim that “Now You Can See and Control the Data That Apps and Websites Share With Facebook”).

The blog post Facebook would really prefer you didn’t notice is tucked away in a News sub-section of this website — where it’s been confusingly entitled: Document Holds the Potential for Confusion. And has an unenticing grey image of a document icon to further put you off — just in case you happened to stumble on it after all. It’s almost as if Facebook is saying “definitely don’t click here“…

So what is Facebook trying to bury in the horse latitudes of summer?

An internal email chain, starting September 2015, which shows a glimpse of what Facebook’s own staff knew about the activity of Cambridge Analytica prior to The Guardian’s December 2015 scoop — when the newspaper broke the story that the controversial (and now defunct) data analytics firm, then working for Ted Cruz’s presidential campaign, had harvested data on millions of Facebook users without their knowledge and/or consent, and was using psychological insights gleaned from the data to target voters.

Facebook founder Mark Zuckerberg’s official timeline of events about what he knew when vis-à-vis the Cambridge Analytica story has always been that his knowledge of the matter dates to December 2015 — when the Guardian published its story.

But the email thread Facebook is now releasing shows internal concerns being raised more than two months earlier.

This chimes with previous (more partial) releases of internal correspondence pertaining to Cambridge Analytica  — which have also come out as a result of legal actions (and which we’ve reported on previously here and here).

If you click to download the latest release, which Facebook suggests it ‘agreed’ with the District of Columbia Attorney General to “jointly make public”, you’ll find a redacted thread of emails in which Facebook staffers raise a number of platform policy violation concerns related to the “political partner space”, writing September 29, 2015, that “many companies seem to be on the edge- possibly over”.

Cambridge Analytica is first identified by name — when it’s described by a Facebook employee as “a sketchy (to say the least) data modelling company that has penetrated our market deeply” — on September 22, 2015, per this email thread. It is one of many companies the staffer writes are suspected of scraping user data — but is also described as “the largest and most aggressive on the conservative side”.

On September 30, 2015, a Facebook staffer responds to this, asking for App IDs and app names for the apps engaging in scraping user data — before writing: “My hunch is that these apps’ data-scraping is likely non-compliant”.

“It would be very difficult to engage in data-scraping activity as you described while still being compliant with FPPs [Facebook Platform Policies],” this person adds.

Cambridge Analytica gets another direct mention (“the Cambridge app”) on the same day. A different Facebook staffer then chips in with a view that “it’s very likely these companies are not in violation of any of our terms” — before asking for “concrete examples” and warning against calling them to ask questions unless “red flags” have been confirmed.

On October 13, a Facebook employee chips back into the thread with the view that “there are likely a few data policy violations here”.

The email thread goes on to discuss concerns related to additional political partners and agencies using Facebook’s platform at that point, including ForAmerica, Creative Response Concepts, NationBuilder and Strategic Media 21. Which perhaps explains Facebook’s lack of focus on CA — if potentially “sketchy” political activity was apparently widespread.

On December 11 another Facebook staffer writes to ask for an expedited review of Cambridge Analytica — saying it’s “unfortunately… now a PR issue”, i.e. as a result of the Guardian publishing its article.

The same day a Facebook employee emails to say Cambridge Analytica “is hi pri at this point”, adding: “We need to sort this out ASAP” — more than two months after the initial concern was raised.

Also on December 11 a staffer writes that they had not heard of GSR, the Cambridge-based developer CA hired to extract Facebook user data, before the Guardian article named it. But other Facebook staffers chip in to reveal personal knowledge of the psychographic profiling techniques deployed by Cambridge Analytica and GSR’s Dr Aleksandr Kogan, with one writing that Kogan was their postdoc supervisor at Cambridge University.

Another says they are friends with Michal Kosinski, the lead author of a personality modelling paper that underpins the technique used by CA to try to manipulate voters — which they describe as “solid science”.

A different staffer also flags the possibility that Facebook has worked with Kogan — ironically enough “on research on the Protect & Care team” — citing the “Wait, What thread” and another email, neither of which appear to have been released by Facebook in this ‘Exhibit 1’ bundle.

So we can only speculate on whether Facebook’s decision — around September 2015 — to hire Kogan’s GSR co-founder, Joseph Chancellor, appears as a discussion item in the ‘Wait, What’ thread…

Putting its own spin on the release of these internal emails in a blog post, Facebook sticks to its prior line that “unconfirmed reports of scraping” and “policy violations by Aleksandr Kogan” are two separate issues, writing:

We believe this document has the potential to confuse two different events surrounding our knowledge of Cambridge Analytica. There is no substantively new information in this document and the issues have been previously reported. As we have said many times, including last week to a British parliamentary committee, these are two distinct issues. One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica. This document proves the issues are separate; conflating them has the potential to mislead people.

It has previously also referred to the internal concerns raised about CA as “rumors”.

“Facebook was not aware that Kogan sold data to Cambridge Analytica until December 2015. That is a fact that we have testified to under oath, that we have described to our core regulators, and that we stand by today,” it adds now.

It also claims that an engineer who responded to the concerns and looked into CA’s alleged scraping was unable to find any evidence of it. “Even if such a report had been confirmed, such incidents would not naturally indicate the scale of the misconduct that Kogan had engaged in,” Facebook adds.

The company has sought to dismiss the privacy litigation brought against it by the District of Columbia, which is related to the Cambridge Analytica scandal — but has been unsuccessful in derailing the case thus far.

The DC complaint alleges that Facebook allowed third-party developers to access consumers’ personal data, including information on their online behavior, in order to offer apps on its platform, and that it failed to effectively oversee and enforce its platform policies by not taking reasonable steps to protect consumer data and privacy. It also alleges Facebook failed to inform users of the CA breach.

That lawsuit, filed by DC Attorney General Karl Racine, alleges lax oversight and misleading privacy standards.

US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be is tbc. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before it the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on the other ‘sketchy’ apps it’s investigating by saying that the investigation is, er, “ongoing”. That’s the investigation CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018 — promising then that Facebook would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Facebook to admit ownership of Instagram, WhatsApp in hard-to-read small-print

For the first time in more than half a decade, Facebook wants to inform you that it owns Instagram, the hyper-popular rival social networking app it acquired for a $1BN steal back in 2012.

Ditto messaging platform WhatsApp — which Mark Zuckerberg splurged $19BN on a couple of years later to keep feeding eyeballs into his growth engine.

Facebook is adding its own brand name alongside the other two — in the following format: ‘Instagram from Facebook’; ‘WhatsApp from Facebook.’

The cheap-perfume-style rebranding was first reported by The Information, citing three people familiar with the matter who said employees of the two apps were recently notified internally of the plan to rebrand.

“The move to add Facebook’s name to the apps has been met with surprise and confusion internally, reflecting the autonomy that the units have operated under,” it said. Although it also reported that CEO Mark Zuckerberg has been frustrated that Facebook doesn’t get more credit for the growth of Instagram and WhatsApp.

So it sounds like Facebook may be hoping for a little reverse osmosis brand-washing — aka leveraging the popularity of its cleaner social apps to detoxify the scandal-hit mothership.

Not that Facebook is saying anything like that publicly, of course.

In a statement to The Information confirming the rebranding it explained it thus: “We want to be clearer about the products and services that are part of Facebook.”

The rebranding also comes at a time when Facebook is facing at least two antitrust investigations on its home turf — where calls for Facebook and other big tech giants to be broken up are now a regular feature of the campaign trail…

We can only surmise the legal advice Facebook must be receiving vis-à-vis what it should do to try to close down break-up arguments that could deprive it of its pair of golden growth geese.

Arguments such as the fact most Instagram (and WhatsApp) users don’t even know they’re using a Facebook-owned app. Hence, as things stand, it would be pretty difficult for Facebook’s lawyers to successfully argue Instagram and WhatsApp users would be harmed if the apps were cut free by a break-up order.

But now — with the clumsy ‘from Facebook’ construction — Facebook can at least try to make a case that users are in a knowing relationship with Facebook in which they willingly, even if not lovingly, place their eyeballs in Zuckerberg’s bucket.

In which case Facebook is not telling you, the Instagram user, that it owns Instagram for your benefit. Not even slightly.

Note, for example, the use of the comparative adjective “clearer” in Facebook’s statement to explain its intent for the rebranding — rather than a simple statement: ‘we want to be clear’.

It’s definitely not saying it’s going to individually broadcast its ownership of Instagram and WhatsApp to each and every user on those networks. More like it’s going to try to creep the Facebook brand in. Which is far more in corporate character.

At the time of writing, a five-day-old update of Instagram’s iOS app already features the new construction — although it looks far more dark pattern than splashy rebrand, with just the faintest whisker of grey text at the base of the screen to disclose that you’re about to be sucked into the Facebook empire (vs a giant big blue ‘Create new account’ button winking to be tapped up top… )

Here’s the landing screen — with the new branding. Blink and you’ll miss it…


So not full disclosure then. More like just an easily overlooked dab of the legal stuff — to try to manage antitrust risk vs the risk of Facebook brand toxicity poisoning the (cleaner) wells of Instagram and WhatsApp.

There are signs the company is experimenting in some extremely dilute cross-brand-washing too.

The iOS app description for Instagram includes the new branding — tagged to an ad style slogan that gushes: “Bringing you closer to the people and things you love.”  But, frankly, who reads app descriptions?


Up until pretty recently, both Instagram and WhatsApp had a degree of independence from their rapacious corporate parent — granted brand and operational independence under the original acquisition terms and leadership of their original founders.

Not any more, though. Instagram’s founders cleared out last year. While WhatsApp’s jumped ship between 2017 and 2018.

Zuckerberg lieutenants and/or long time Facebookers are now running both app businesses. The takeover is complete.

Facebook is also busy working on entangling the backends of its three networks — under a claimed ‘pivot to privacy’ which it announced earlier this year.

This also appears intended to try to put regulators off by making breaking up Facebook much harder than it would be if you could just split it along existing app lines. Theories of user harm potentially get more complicated if you can demonstrate cross-platform chatter.

The accompanying 3,000+ word screed from Zuckerberg introduced the singular notion of “the Facebook network”; aka one pool for users to splash in, three differently colored slides to funnel you in there.

“In a few years, I expect future versions of Messenger and WhatsApp to become the main ways people communicate on the Facebook network,” he wrote. “If this evolution is successful, interacting with your friends and family across the Facebook network will become a fundamentally more private experience.”

The ‘from Facebook’ rebranding thus looks like just a little light covering fire for the really grand dodge Facebook is hoping to pull off as the break-up bullet speeds down the pipe: aka entangling its core businesses at the infrastructure level.

From three networks to one massive Facebook-owned user data pool. 

One network to rule them all, one network to find them,
One network to bring them all, and in the regulatory darkness bind them

Europe’s top court sharpens guidance for sites using leaky social plug-ins

Europe’s top court has made a ruling that could affect scores of websites that embed the Facebook ‘Like’ button and receive visitors from the region.

The ruling by the Court of Justice of the EU states such sites are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

The ruling is significant because, as currently seems to be the case, Facebook’s Like buttons transfer personal data automatically when a webpage loads — without the user even needing to interact with the plug-in. That means websites relying on visitors ‘consenting’ to their data being shared with Facebook will likely need to change how the plug-in functions, ensuring no data is sent to Facebook before visitors have been asked whether they want their browsing to be tracked by the adtech giant.
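For sites that do want to rely on consent, the fix is conceptually straightforward: don’t load Facebook’s plug-in code (which is what triggers the transfer) until the visitor has opted in. Here’s a minimal TypeScript sketch of that approach; the hasTrackingConsent and onConsentGranted hooks are hypothetical stand-ins for whatever consent-management tool a given site actually uses:

```typescript
// Minimal sketch: defer loading Facebook's social plug-in SDK until the
// visitor consents, so nothing is sent to Facebook on page load.
// hasTrackingConsent / onConsentGranted are hypothetical hooks into the
// site's own consent-management platform (CMP).
declare function hasTrackingConsent(): boolean;
declare function onConsentGranted(callback: () => void): void;

function loadFacebookSdk(): void {
  // Injecting the SDK script is what renders any <div class="fb-like">
  // placeholders -- and what starts requests (carrying the visitor's IP
  // address, cookies and the page URL) flowing to Facebook's servers.
  const script = document.createElement("script");
  script.src = "https://connect.facebook.net/en_US/sdk.js";
  script.async = true;
  script.defer = true;
  document.body.appendChild(script);
}

if (hasTrackingConsent()) {
  loadFacebookSdk();
} else {
  // Until the visitor opts in, the Like button simply doesn't render --
  // and, crucially, no data is transferred to Facebook.
  onConsentGranted(loadFacebookSdk);
}
```

The point of the pattern is that the transfer to Facebook only ever happens after a consent signal exists, which is what relying on consent as a legal basis requires.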

The background to the case is a complaint against online clothes retailer, Fashion ID, by a German consumer protection association, Verbraucherzentrale NRW — which took legal action in 2015 seeking an injunction against Fashion ID’s use of the plug-in which it claimed breached European data protection law.

Like ’em or loathe ’em, Facebook’s ‘Like’ buttons are an impossible-to-miss component of the mainstream web. Though most Internet users are likely unaware that the social plug-ins are used by Facebook to track what other websites they’re visiting for ad targeting purposes.

Last year the company told the UK parliament that between April 9 and April 16 the button had appeared on 8.4M websites, while its Share button social plug-in appeared on 931K sites. (Facebook also admitted to 2.2M instances of another tracking tool it uses to harvest non-Facebook browsing activity — called a Facebook Pixel — being invisibly embedded on third party websites.)

The Fashion ID case predates the introduction of the EU’s updated privacy framework, GDPR, which further toughens the rules around obtaining consent — meaning it must be purpose specific, informed and freely given.

Today’s CJEU decision also follows another ruling a year ago, in a case related to Facebook fan pages, when the court took a broad view of privacy responsibilities around platforms — saying both fan page administrators and host platforms could be data controllers. Though it also said joint controllership does not necessarily imply equal responsibility for each party.

In the latest decision the CJEU has sought to draw some limits on the scope of joint responsibility, finding that a website where the Facebook Like button is embedded cannot be considered a data controller for any subsequent processing, i.e. after the data has been transmitted to Facebook Ireland (the data controller for Facebook’s European users).

The joint responsibility specifically covers the collection and transmission of Facebook Like data to Facebook Ireland.

“It seems, at the outset, impossible that Fashion ID determines the purposes and means of those operations,” the court writes in a press release announcing the decision.

“By contrast, Fashion ID can be considered to be a controller jointly with Facebook Ireland in respect of the operations involving the collection and disclosure by transmission to Facebook Ireland of the data at issue, since it can be concluded (subject to the investigations that it is for the Oberlandesgericht Düsseldorf [German regional court] to carry out) that Fashion ID and Facebook Ireland determine jointly the means and purposes of those operations.”

Responding to the judgement in a statement attributed to its associate general counsel, Jack Gilbert, Facebook told us:

Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools. We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.

The company said it may make changes to the Like button to ensure websites that use it are able to comply with Europe’s GDPR.

Though it’s not clear what those specific changes could be — whether, for example, Facebook will change the code of its social plug-ins to ensure no data is transferred at the point a page loads. (We’ve asked Facebook and will update this report with any response.)

Facebook also points out that other tech giants, such as Twitter and LinkedIn, deploy similar social plug-ins — suggesting the CJEU ruling will apply to other social platforms, as well as to thousands of websites across the EU where these sorts of plug-ins crop up.

“Sites with the button should make sure that they are sufficiently transparent to site visitors, and must make sure that they have a lawful basis for the transfer of the user’s personal data (e.g. if just the user’s IP address and other data stored on the user’s device by Facebook cookies) to Facebook,” Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, told TechCrunch.

“If their lawful basis is consent, then they’ll need to get consent before deploying the button for it to be valid — otherwise, they’ll have done the transfer before the visitor has consented.

“If relying on legitimate interests — which might scrape by — then they’ll need to have done a legitimate interests assessment, and kept it on file (against the (admittedly unlikely) day that a regulator asks to see it), and they’ll need to have a mechanism by which a site visitor can object to the transfer.”

“Basically, if organisations are taking on board the recent guidance from the ICO and CNIL on cookie compliance, wrapping in Facebook ‘Like’ and other similar things in with that work would be sensible,” Brown added.

Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law who has been following the case, said the court has not clarified what interests may be considered ‘legitimate’ in this context — only that both the website operator and the plug-in provider must pursue a legitimate interest.

“After today’s judgment, all website operators that insert third-party plug-ins (such as Facebook ‘Like’ buttons) in their websites should carefully reassess their compliance with EU data protection law,” he agreed. “In particular, they should verify whether their privacy policies cover data processing operations involving the collection and transmission of visitors’ personal data by means of third-party plug-ins. Many of today’s policies are unlikely to cover such operations.

“Website operators should also assess what is the appropriate legal basis for the collection and transmission of personal data by means of the plug-ins embedded in their websites, and if consent applies, they should ensure that they obtain the user’s consent before the data collection takes place, which may often prove challenging in practice. In this regard, the use of pre-ticked checkboxes is not advisable, as it tends to be considered insufficient to fulfil the criteria for valid consent under European data protection law.”

Also commenting on the judgement, Michael Veale, a UK-based researcher in tech and privacy law/policy, said it raises questions about how Facebook will comply with Europe’s data protection framework for any further processing it carries out of the social plug-in data.

“The whole judgement to me leaves open the question ‘on what grounds can Facebook justify further processing of data from their web tracking code?’” he told us. “If they have to provide transparency for this further processing, which would take them out of joint controllership into sole controllership, to whom and when is it provided?

“If they have to demonstrate they would win a legitimate interests test, how will that be affected by the difficulty in delivering that transparency to data subjects?”

“Can Facebook do a backflip and say that for users of their service, their terms of service on their platform justifies the further use of data for which individuals must have separately been made aware of by the website where it was collected?

“The question then quite clearly boils down to non-users, or to users who are effectively non-users to Facebook through effective use of technologies such as Mozilla’s browser tab isolation.”

How far a tracking pixel could be considered a ‘similar device’ to a cookie is another question to consider, he said.

The tracking of non-Facebook users via social plug-ins certainly continues to be a hot-button legal issue for Facebook in Europe — where the company has twice lost in court to Belgium’s privacy watchdog on this issue. (Facebook has continued to appeal.)

Facebook founder Mark Zuckerberg also faced questions about tracking non-users last year from MEPs in the European Parliament — who pressed him on whether Facebook uses data on non-users for any purpose other than the security purpose of “keeping bad content out”, which he claimed requires Facebook to track everyone on the mainstream Internet.

MEPs also wanted to know how non-users can stop their data being transferred to Facebook. Zuckerberg gave no answer — likely because there’s currently no way for non-users to stop their data being sucked up by Facebook’s servers, short of staying off the mainstream Internet.

This report was updated with additional comment.

Facebook ignored staff warnings about “sketchy” Cambridge Analytica in September 2015

Facebook employees tried to alert the company about the activity of Cambridge Analytica as early as September 2015, per the SEC’s complaint against the company which was published yesterday.

This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a UK parliamentary committee last year.

Facebook only finally kicked the controversial data firm off its ad platform in March 2018, after investigative journalists blew the lid off the story.

In a section on “red flags” raised about scandal-hit Cambridge Analytica’s potential misuse of Facebook user data, the SEC complaint reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply”.

Amid a flurry of major headlines for the company yesterday, including a $5BN FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100M to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.

This tidbit was slipped out towards the end of a lengthy blog post by Facebook general counsel Colin Stretch which focused on responding to the FTC order with promises to turn over a new leaf on privacy.

CEO Mark Zuckerberg also made no mention of the SEC settlement in his own Facebook note about what he dubbed a “historic fine”.

As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘get out of jail’ card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.

‘Historic fine’ is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.

And by nesting its disclosure of the SEC settlement inside effusive privacy-washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to deflect attention from some really awkward details in its narrative about the Cambridge Analytica scandal — details which highlight ongoing inconsistencies and contradictions, to put it politely.

The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.

Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service”, and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015”, adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”

Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — which, as Facebook has sought to paint it, is the ‘rogue’ developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.

This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.

Fast forward to March 2018 and press reports revealing the scale and intent of the Cambridge Analytica data heist blew up into a global data scandal for Facebook, wiping billions off its share price.

The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is why it employed GSR co-founder Chancellor in the first place?

Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume because neither Facebook nor Chancellor have explained how he came to be hired.)

At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.

Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).

What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know because again Facebook isn’t saying anything.

More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it an “historic fine”.

We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”

We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns linked to the company his company was set up to sell Facebook data to; and how, of all the possible professionals it could hire, Facebook identified Chancellor in the first place — and will update this post with any response. (A search for ‘quantitative researcher’ on LinkedIn’s platform returns more than 177,000 results for professionals using the descriptor in their profiles.)

Earlier this month a UK parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.

The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging enquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.

A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.

Under questioning in Congress last year, Facebook founder Zuckerberg also personally told congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.

Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.

There are more awkward details in the SEC complaint that Facebook seems keen to bury too — including that as part of a signed settlement agreement, GSR’s other co-founder Aleksandr Kogan told it in June 2016 that he had, in addition to transferring modelled personality profile data on 30M Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.

This US Facebook user data included personal information such as names, location, birthdays, gender and a sub-set of page likes.

Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…

The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.

The UK’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook user data, though it has not made public the names of the staff in question.

The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company narrative has implied up to now. Although the exact timeline of when all the staffers knew is not clear from the document — with the discussed period being September 2015 to April 2017.

Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data — by April 2017 at the latest — company leaders had put no reporting structures in place for them to pass the information to those responsible for the company’s regulatory filings.

“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.

The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that the company was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.

“Despite Facebook’s suspicions about Cambridge and the red flags raised after the Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.

Adopting a ratings system for social media like the ones used for film and TV won’t work

Internet platforms like Google, Facebook and Twitter are under incredible pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, whose official name is the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since – and as longtime filmmakers, we’ve interacted with the MPAA’s ratings system hundreds of times – working closely with them to maintain our filmmakers’ creative vision, while, at the same time, keeping parents informed so that they can decide if those movies are appropriate for their children.  

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent as to why each film receives the rating it does. The system allows filmmakers to determine if they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions where parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries, including television, video games, and music, have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally-produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).

 Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families – and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube primarily rely on user-generated content (UGC), which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content – instead they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
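For those keeping score, here’s the back-of-the-envelope arithmetic behind those two comparisons, using the figures cited above:

```latex
% YouTube ingests roughly 500 hours of video per minute:
\[ \frac{720{,}000\ \text{hours/day}}{1{,}440\ \text{minutes/day}} = 500\ \text{hours/minute} \]
% ...so CARA's ~1,500 hours of annual output arrives every three minutes:
\[ \frac{1{,}500\ \text{hours}}{500\ \text{hours/minute}} = 3\ \text{minutes} \]
% ...and one day of uploads equals 480 years of CARA-rate review:
\[ \frac{720{,}000\ \text{hours/day}}{1{,}500\ \text{hours/year}} = 480\ \text{years/day} \]
```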

 Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like MPAA’s ratings system.” It can never play that role.

This doesn’t mean there are not areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Interestingly, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms. In both cases, bad things happen – the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered. 

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.

Facebook’s content oversight board plan is raising more questions than it answers

Facebook has produced a report summarizing feedback it’s taken in on its idea of establishing a content oversight board to help arbitrate on moderation decisions.

Aka the ‘supreme court of Facebook’ concept first discussed by founder Mark Zuckerberg last year, when he told Vox:

[O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Facebook has since suggested the oversight board will be up and running later this year. And has just wheeled out its global head of policy and spin for a European PR push to convince regional governments to give it room for self-regulation 2.0, rather than slapping it with broadcast-style regulations.

The latest report, which follows a draft charter unveiled in January, rounds up input fed to Facebook via six “in-depth” workshops and 22 roundtables convened by Facebook and held in locations of its choosing around the world.

In all, Facebook says the events were attended by 650+ people from 88 different countries — though it further qualifies that by saying it had “personal discussions” with more than 250 people and received more than 1,200 public consultation submissions.

“In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed,” Facebook writes.

It goes without saying that this input represents a minuscule fraction of the actual ‘population’ of Facebook’s eponymous platform, which now exceeds 2.2BN accounts (an unknown portion of which will be fake/duplicates), while its operations stretch to more than double the number of markets represented by individuals at the events.

The feedback exercise — as indeed the concept of the board itself — is inevitably an exercise in opinion abstraction. Which gives Facebook leeway to shape the output as it prefers. (And, indeed, the full report notes that “some found this public consultation ‘not nearly iterative enough, nor transparent enough, to provide any legitimacy’ to the process of creating the Board”.)

In a blog post providing its spin on the “global feedback and input”, Facebook culls three “general themes” it claims emerged from the various discussions and submissions — namely that: 

  • People want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties, writing: “The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality”. Though the full report flags up the challenge of ensuring the sought-for independence, and it’s not clear Facebook will be able to create a structure that can stand apart from its own company or indeed other lobbyists.
  • How the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public are key considerations — though those vital details remain tbc. “In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation,” Facebook suggests, implying the boundaries of the board are unlikely to be firmly fixed.
  • People also want a board that’s “as diverse as the many people on Facebook and Instagram” — the problem being that’s clearly impossible, given the planet-spanning size of Facebook’s platforms. Another desire Facebook highlights is for the board to be able to encourage it to make “better, more transparent decisions”. The need for board decisions (and indeed decisions Facebook takes when setting up the board) to be transparent emerges as a major theme in the report. In terms of the board’s make-up, Facebook says it should comprise experts with different backgrounds, different disciplines and different viewpoints — “who can all represent the interests of a global community”. Though there are clearly going to be differing views on how, or even whether, that’s possible to achieve; and therefore questions over how a 40-odd member body, which will likely rarely sit in plenary, can plausibly act as a prism for Facebook’s user-base.

The report is worth reading in full to get a sense of the broad spectrum of governance questions and conundrums Facebook is here wading into.

If, as it very much looks to be, this is a Facebook-configured exercise in spreading the blame for the problems its platform hosts, the surface area for disagreement and dispute will clearly be massive — and from the company’s point of view that already looks like a win, given how, since 2016, Facebook (and Zuckerberg) have been the conduit for so much public and political anger linked to the spreading and accelerating of harmful online content.

Differing opinions will also provide cover for Facebook to justify starting “narrow”. Which it has said it will do with the board, aiming to have something up and running by the end of this year. But that just means it’ll be managing expectations, right from the very start, of how little actual oversight will flow.

The report also shows that Facebook’s claimed ‘listening ear’ for a “global perspective” has some very hard limits.

So while those involved in the consultation are reported to have repeatedly suggested the oversight board should not just be limited to content judgement — but should also be able to make binding decisions related to things like Facebook’s newsfeed algorithm or wider use of AI by the company — Facebook works to shut those suggestions down, underscoring that the scope of the oversight will be limited to content.

“The subtitle of the Draft Charter — “An Oversight Board for Content Decisions” — made clear that this body would focus specifically on content. In this regard, Facebook has been relatively clear about the Board’s scope and remit,” it writes. “However, throughout the consultation period, interlocutors often proposed that the Board hear a wide range of controversial and emerging issues: newsfeed ranking, data privacy, issues of local law, artificial intelligence, advertising policies, and so on.”

It goes on to admit that “the question persisted: should the Board be restricted to content decisions only, without much real influence over policy?” — before picking a selection of responses that appear intended to fuzz the issue, allowing it to position itself as seeking a reasoned middle ground.

“In the end, balance will be needed; Facebook will need to resolve tensions between minimalist and maximalist visions of the Board,” it concludes. “Above all, it will have to demonstrate that the Oversight Board — as an enterprise worth doing — adds value, is relevant, and represents a step forward from content governance as it stands today.”

Sample cases the report suggests the board could review — as suggested by participants in Facebook’s consultation — include:

  • A user shared a list of men working in academia, who were accused of engaging in inappropriate behavior and/or abuse, including unwanted sexual advances;
  • A Page that commonly uses memes and other forms of satire shared posts that used discriminatory remarks to describe a particular demographic group in India;
  • A candidate for office made strong, disparaging remarks to an unknown passerby regarding their gender identity and livestreamed the interaction. Other users reported this due to safety concerns for the latter person;
  • A government official suggested that a local minority group needed to be cautious, comparing that group’s behavior to that of other groups that have faced genocide.

So, again, it’s easy to see the kinds of controversies and indeed criticisms that individuals sitting on Facebook’s board will be opening themselves up to — whichever way their decisions fall.

A content review board that will inevitably remain linked to (if not also reimbursed via) the company that establishes it, and will not be granted powers to set wider Facebook policy — but will instead be tasked with the impossible job of trying to please all of the Facebook users (and critics) all of the time — does certainly risk looking like Facebook’s stooge; a conduit for channeling dirty and political content problems that have the potential to go viral and threaten its continued ability to monetize the stuff that’s uploaded to its platforms.

Facebook’s preferred choice of phrase to describe its users — “global community” — is a tellingly flat one in this regard.

The company conspicuously avoids talk of communities, plural; instead the closest we get here is a claim that its selective consultation exercise is “ensuring a global perspective”, as if a singular essence can somehow be distilled from a non-representative sample of human opinion — when in fact the stuff that flows across its platforms is quite the opposite; multitudes of perspectives from individuals and communities whose shared use of Facebook does not an emergent ‘global community’ make.

This is why Facebook has struggled to impose a single set of ‘community standards’ across a platform that spans so many contexts; a one-size-fits all approach very clearly doesn’t fit.

Yet it’s not at all clear how Facebook creating yet another layer of content review changes anything much for that challenge — unless the oversight body is mostly intended to act as a human shield for the company itself, putting a firewall between it and certain highly controversial content; aka a Facebook ‘supreme court’ that takes the blame on its behalf.

Hate speech — just one of the difficult content moderation issues embedded in the businesses of sociotechnical, planet-spanning social media platform giants like Facebook — defies a top-down ‘global’ fix.

As Evelyn Douek wrote last year vis-à-vis hate speech on the Lawfare blog, after Zuckerberg had floated the idea of a governance structure for online speech: “Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent… This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second.”

A cynic would say Facebook knows it can’t fix planet-scale content moderation and still turn a profit. So it needs a way to distract attention and shift blame.

If it can get enough outsiders to buy into its oversight board — allowing it to pass off the oxymoron of “global governance”, via whatever self-styled structure it allows to emerge from these self-regulatory seeds — the company’s hope must be that the device also works as a bolster against political pressure.

Both over particular problematic or controversial content, and also as a vehicle to shrink the space for governments to regulate Facebook.

In a video discussion also embedded in Facebook’s blog post — in which Zuckerberg couches the oversight board project as “a big experiment that we hope can pioneer a new model for the governance of speech on the Internet” — the Facebook founder also makes reference to calls he’s made for more regulation of the Internet. As he does so he immediately qualifies the statement by blending state regulation with industry self-regulation — saying the kind of regulation he’s asking for is “in some cases by democratic process, in other cases through independent industry process”.

So Zuckerberg is making a clear pitch to position Facebook as above the rule of nation state law — and setting up a “global governance” layer is the self-serving vehicle of choice for the company to try and overtake democracy.

Even if Facebook’s oversight board’s structure is so cunningly fashioned as to present to a rationally minded individual as, in some senses, ‘independent’ from Facebook, its entire being and function will remain dependent on Facebook’s continued existence.

Whereas if individual markets impose their own statutory regulations on Internet platforms, based on democratic and societal principles, Facebook will have no control over the rules they impose, direct or otherwise — with uncontrolled compliance costs falling on its business.

It’s easy to see which model sits most easily with Zuckerberg the businessman — a man who has also demonstrated he will not be held personally accountable for what happens on his platform.

Not when he’s asked by one (non-US) parliament, nor even by representatives from nine parliaments — all keen to discuss the societal fallouts of political disinformation and hate speech spread and accelerated on Facebook.

Turns out that’s not the kind of ‘global perspective’ Facebook wants to sell you.

‘This is Your Life in Silicon Valley’: Former Pinterest President, Moment CEO Tim Kendall on Smartphone Addiction

Welcome to this week’s transcribed edition of This is Your Life in Silicon Valley. We’re running an experiment for Extra Crunch members that puts This is Your Life in Silicon Valley in words – so you can read from wherever you are.

This is Your Life in Silicon Valley was originally started by Sunil Rajaraman and Jascha Kaykas-Wolff in 2018. Rajaraman is a serial entrepreneur and writer (Co-Founded Scripted.com, and is currently an EIR at Foundation Capital), Kaykas-Wolff is the current CMO at Mozilla and ran marketing at BitTorrent. Rajaraman and Kaykas-Wolff started the podcast after a series of blog posts that Sunil wrote for The Bold Italic went viral.

The goal of the podcast is to cover issues at the intersection of technology and culture – sharing a different perspective of life in the Bay Area. Their guests include entrepreneurs like Sam Lessin, journalists like Kara Swisher, politicians like Mayor Libby Schaaf, and local business owners like David White of Flour + Water.

This week’s edition of This is Your Life in Silicon Valley features Tim Kendall, the former President of Pinterest and current CEO of Moment. Tim ran monetization at Facebook, and has very strong opinions on smartphone addiction and what it is doing to all of us. Tim is an architect of much of the modern social media monetization machinery, so you definitely do not want to miss his perspective on this important subject.

For access to the full transcription, become a member of Extra Crunch. Learn more and try it for free. 

Sunil Rajaraman: Welcome to season three of This is Your Life in Silicon Valley, a podcast about the Bay Area, technology, and culture. I’m your host, Sunil Rajaraman and I’m joined by my cohost, Jascha Kaykas-Wolff.

Jascha Kaykas-Wolff: Are you recording?

Rajaraman: I’m recording.

Kaykas-Wolff: I’m almost done. My phone’s been buzzing all afternoon and I just have to finish this text message.

Rajaraman: So you’re one of those people who can’t go five seconds without checking their phone.

Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Albeit, it would be pretty awkward if he came on stage claiming Facebook is flawless and humanity needs to take a really long hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder, Mark Zuckerberg. Though it raises way more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; or how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board can reflect the full spectrum diversity of humanity that’s now using Facebook’s 2BN+ user global platform?

None of these questions were raised let alone addressed in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate; which, under this board that Facebook is going to set up, will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations. Whereas we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes too stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” if there’s not enough of a data free-for-all, and that the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability — making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which it’s here, via Clegg (a former European politician), leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s senate pitch from last year, when the Facebook founder urged a trade-off of privacy rights to allow Western companies to process people’s facial biometrics so as not to fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why (and he avoided this point entirely) Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies.

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t even use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its enquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the EU referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used/misused; ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because the key factors — like the use of tools like groups on Facebook, the use of inauthentic fake accounts boosting Russian content, there have been studies showing that’s still going on and was going on during the [US] parliamentary elections, there’s been no proper audit done during the referendum, and in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016, did this happen during the referendum as well, they said to us well we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.