Tag Archives: Federal Trade Commission

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies between evidence Facebook has given to international parliamentarians and evidence it submitted in response to the Washington, DC Attorney General — who is suing Facebook on its home turf over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so…), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes — suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ to Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegations that the company knew of other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps had told the committee on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on the other ‘sketchy’ apps it’s investigating by saying the investigation is, er, “ongoing”. That investigation was announced by CEO Mark Zuckerberg in a Facebook blog post on March 21, 2018 — in which he pledged to “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.

The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.

Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.

The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.

Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.

In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.

The Trump administration isn’t the only part of Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.

Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

Facebook is facing exposure to billions of dollars in potential damages after a federal appeals court on Thursday rejected the company’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.

The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.

Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.

The most significant language from the decision from the circuit court seems to be this:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”

As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.

“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”

That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.

As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.

Now, the lawsuit will go back to the court of U.S. District Judge James Donato in San Francisco — who approved the class action last April — for a possible trial.

Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000 and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
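To put rough numbers on “real money”: a back-of-the-envelope sketch of that exposure, assuming the statutory ranges and the estimated 7 million class members reported above (actual damages would depend entirely on what a court finds):

```python
# Back-of-the-envelope estimate of Facebook's potential BIPA exposure,
# using the statutory damages and estimated class size reported above.
NEGLIGENT_PER_VIOLATION = 1_000    # dollars, per negligent violation
INTENTIONAL_PER_VIOLATION = 5_000  # dollars, per intentional violation
CLASS_SIZE = 7_000_000             # estimated Illinois Facebook users in the class

low = NEGLIGENT_PER_VIOLATION * CLASS_SIZE     # if every violation is negligent
high = INTENTIONAL_PER_VIOLATION * CLASS_SIZE  # if every violation is intentional

print(f"${low:,} to ${high:,}")  # $7,000,000,000 to $35,000,000,000
```

Even the low end of that range — $7 billion — would exceed the FTC’s record $5 billion penalty.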

“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”

These civil damages could come on top of fines that Facebook has already paid to the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That resulted in one of the single largest penalties levied against a U.S. technology company. Facebook is potentially on the hook for a $5 billion payout to the U.S. government. That penalty is still subject to approval by the Justice Department.

Libra, Facebook’s global digital currency plan, is fuzzy on privacy, watchdogs warn

Privacy commissioners from the Americas, Europe, Africa and Australasia have put their names to a joint statement raising concerns about a lack of clarity from Facebook over how data protection safeguards will be baked into its planned cryptocurrency project, Libra.

Facebook officially unveiled its big bet to build a global digital currency using blockchain technology in June, steered by a Libra Association with Facebook as a founding member. Other founding members include payment and tech giants such as Mastercard, PayPal, Uber, Lyft, eBay, VC firms including Andreessen Horowitz, Thrive Capital and Union Square Ventures, and not-for-profits such as Kiva and Mercy Corps.

At the same time Facebook announced a new subsidiary of its own business, Calibra, which it said will create financial services for the Libra network, including offering a standalone wallet app that it expects to bake into its messaging apps, Messenger and WhatsApp, next year. That plan has raised concerns Facebook could quickly gain a monopolistic hold over what’s being couched as an ‘open’ digital currency network, given the dominance of the associated social platforms where it intends to seed its own wallet.

In its official blog post hyping Calibra, Facebook avoided any talk of how much market power it might wield via its ability to promote the wallet to its existing 2.2BN+ global users, but it did touch on privacy — writing that “we’ll also take steps to protect your privacy” and claiming it would not share “account information or financial data with Facebook or any third party without customer consent”.

Except for when it admitted it would: the same paragraph states there will be “limited cases” when it may share user data. These cases will “reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra”, the blog adds. (A Calibra Customer Commitment provides little more detail than a few sample instances, such as “preventing fraud and criminal activity”.)

All of that might sound reassuring enough on the surface but Facebook has used the fuzzy notion of needing to keep its users ‘safe’ as an umbrella justification for tracking non-Facebook users across the entire mainstream Internet, for example.

So the devil really is in the granular detail of anything the company claims it will and won’t do.

Hence the lack of comprehensive details about Libra’s approach to privacy and data protection is causing professional watchdogs around the world to worry.

“As representatives of the global community of data protection and privacy enforcement authorities, collectively responsible for promoting the privacy of many millions of people around the world, we are joining together to express our shared concerns about the privacy risks posed by the Libra digital currency and infrastructure,” they write. “Other authorities and democratic lawmakers have expressed concerns about this initiative. These risks are not limited to financial privacy, since the involvement of Facebook Inc., and its expansive categories of data collection on hundreds of millions of users, raises additional concerns. Data protection authorities will also work closely with other regulators.”

Among the commissioners signing the statement is the FTC’s Rohit Chopra: one of two commissioners at the US Federal Trade Commission who dissented from the $5BN settlement order that was passed by a 3:2 vote last month.

Also raising concerns about Facebook’s transparency about how Libra will comply with privacy laws and expectations in multiple jurisdictions around the world are: Canada’s privacy commissioner Daniel Therrien; the European Union’s data protection supervisor, Giovanni Buttarelli; UK Information commissioner, Elizabeth Denham; Albania’s information and data protection commissioner, Besnik Dervishi; the president of the Commission for Information Technology and Civil Liberties for Burkina Faso, Marguerite Ouedraogo Bonane; and Australia’s information and privacy commissioner, Angelene Falk.

In the joint statement — on what they describe as “global privacy expectations of the Libra network” — they write:

In today’s digital age, it is critical that organisations are transparent and accountable for their personal information handling practices. Good privacy governance and privacy by design are key enablers for innovation and protecting data – they are not mutually exclusive. To date, while Facebook and Calibra have made broad public statements about privacy, they have failed to specifically address the information handling practices that will be in place to secure and protect personal information. Additionally, given the current plans for a rapid implementation of Libra and Calibra, we are surprised and concerned that this further detail is not yet available. The involvement of Facebook Inc. as a founding member of the Libra Association has the potential to drive rapid uptake by consumers around the globe, including in countries which may not yet have data protection laws in place. Once the Libra Network goes live, it may instantly become the custodian of millions of people’s personal information. This combination of vast reserves of personal information with financial information and cryptocurrency amplifies our privacy concerns about the Libra Network’s design and data sharing arrangements.

We’ve pasted the list of questions they’re putting to the Libra Network below — which they specify is “non-exhaustive”, saying individual agencies may follow up with more “as the proposals and service offering develops”.

Among the details they’re seeking answers to is clarity on what users’ personal data will be used for — and how users will be able to control those uses.

The risk of dark patterns being used to weaken and undermine users’ privacy is another stated concern.

Where user data is shared, the commissioners are also seeking clarity on the types of data involved and the de-identification techniques that will be used. On the latter, researchers have demonstrated for years that just a handful of data points can be used to re-identify credit card users from an ‘anonymous’ data-set of transactions, for example.
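To illustrate why weak de-identification worries the watchdogs, here is a minimal, hypothetical linkage-attack sketch (all names and transactions are invented for the example): even with direct identifiers stripped, a few externally known data points can single out one record.

```python
# Hypothetical illustration of a linkage attack: an "anonymised" transaction
# log keeps no names, yet a handful of data points an attacker already knows
# about a target (e.g. two purchases they observed) can uniquely re-identify
# the target's record.
anonymised_transactions = {
    "user_001": [("2019-07-01", "cafe", 4.50), ("2019-07-02", "bookshop", 12.00)],
    "user_002": [("2019-07-01", "cafe", 4.50), ("2019-07-03", "cinema", 9.00)],
    "user_003": [("2019-07-02", "petrol", 30.00), ("2019-07-03", "cinema", 9.00)],
}

# The attacker knows just two transactions made by their target.
known_points = {("2019-07-01", "cafe", 4.50), ("2019-07-03", "cinema", 9.00)}

# Any record containing all the known points is a candidate match.
matches = [
    uid for uid, txns in anonymised_transactions.items()
    if known_points <= set(txns)
]
print(matches)  # ['user_002'] — two data points were enough to re-identify
```

The research finding referenced above is that real transaction data is sparse enough that this kind of intersection very often narrows to a single person.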

Here’s the full list of questions being put to the Libra Network:

  • 1. How can global data protection and privacy enforcement authorities be confident that the Libra Network has robust measures to protect the personal information of network users? In particular, how will the Libra Network ensure that its participants will:

    • a. provide clear information about how personal information will be used (including the use of profiling and algorithms, and the sharing of personal information between members of the Libra Network and any third parties) to allow users to provide specific and informed consent where appropriate;
    • b. create privacy-protective default settings that do not use nudge techniques or “dark patterns” to encourage people to share personal data with third parties or weaken their privacy protections;
    • c. ensure that privacy control settings are prominent and easy to use;
    • d. collect and process only the minimum amount of personal information necessary to achieve the identified purpose of the product or service, and ensure the lawfulness of the processing;
    • e. ensure that all personal data is adequately protected; and
    • f. give people simple procedures for exercising their privacy rights, including deleting their accounts, and honouring their requests in a timely way.
  • 2. How will the Libra Network incorporate privacy by design principles in the development of its infrastructure?

  • 3. How will the Libra Association ensure that all processors of data within the Libra Network are identified, and are compliant with their respective data protection obligations?

  • 4. How does the Libra Network plan to undertake data protection impact assessments, and how will the Libra Network ensure these assessments are considered on an ongoing basis?

  • 5. How will the Libra Network ensure that its data protection and privacy policies, standards and controls apply consistently across the Libra Network’s operations in all jurisdictions?

  • 6. Where data is shared amongst Libra Network members:

    • a. what data elements will be involved?

    • b. to what extent will it be de-identified, and what method will be used to achieve de-identification?

    • c. how will Libra Network ensure that data is not re-identified, including by use of enforceable contractual commitments with those with whom data is shared?

We’ve reached out to Facebook for comment.

Facebook ignored staff warnings about “sketchy” Cambridge Analytica in September 2015

Facebook employees tried to alert the company about the activity of Cambridge Analytica as early as September 2015, per the SEC’s complaint against the company which was published yesterday.

This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a UK parliamentary committee last year.

Facebook only finally kicked the controversial data firm off its ad platform in March 2018 when investigative journalists had blown the lid off the story.

In a section of the SEC complaint on “red flags” raised about Cambridge Analytica’s potential misuse of Facebook user data, the regulator reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply”.


Amid a flurry of major headlines for the company yesterday, including a $5BN FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100M to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.

This tidbit was slipped out towards the end of a lengthy blog post by Facebook general counsel Colin Stretch which focused on responding to the FTC order with promises to turn over a new leaf on privacy.

CEO Mark Zuckerberg also made no mention of the SEC settlement in his own Facebook note about what he dubbed a “historic fine”.

As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘get out of jail’ card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.

‘Historic fine’ is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.

And by nesting its disclosure of the SEC settlement inside effusive privacy-washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to distract attention from some really awkward details in its narrative about the Cambridge Analytica scandal — details which highlight ongoing inconsistencies and contradictions, to put it politely.

The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.

Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service”, and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015”, adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”

Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — which, as Facebook has sought to paint it, is the ‘rogue’ developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.

This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.

Fast forward to March 2018 and press reports revealing the scale and intent of the Cambridge Analytica data heist blew up into a global data scandal for Facebook, wiping billions off its share price.

The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is: why did it employ GSR co-founder Chancellor in the first place?

Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume because neither Facebook nor Chancellor have explained how he came to be hired.)

At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.

Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).


What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know because again Facebook isn’t saying anything.

More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it an “historic fine”.

We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”

We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns about Cambridge Analytica — the very company GSR was set up to sell Facebook data to — and how, of all the possible professionals it could hire, Facebook identified Chancellor in the first place, and will update this post with any response. (A search for ‘quantitative researcher’ on LinkedIn’s platform returns more than 177,000 results of professionals who are using the descriptor in their profiles.)

Earlier this month a UK parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.

The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging enquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.

A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.

Under questioning in Congress last year, Facebook founder Zuckerberg also personally told congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.

Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.

There are more awkward details in the SEC complaint that Facebook seems keen to bury too — including that as part of a signed settlement agreement, GSR’s other co-founder Aleksandr Kogan told it in June 2016 that he had, in addition to transferring modelled personality profile data on 30M Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.

This US Facebook user data included personal information such as names, location, birthdays, gender and a sub-set of page likes.

Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…


The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.

The UK’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook users’ data, though it has not made public the names of the staff in question.

The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company’s narrative has implied up to now. The exact timeline of when each staffer learned of those concerns is not clear from the document, which covers the period from September 2015 to April 2017.

Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data by April 2017 at the latest, the company’s leaders had put no reporting structures in place that would have let them pass the information to regulators.

“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.

The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that the company was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.

“Despite Facebook’s suspicions about Cambridge and the red flags raised after the Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.

Facebook fails to keep Messenger Kids’ safety promise

Facebook’s messaging app for under 13s, Messenger Kids — which launched two years ago pledging a “private” chat space for kids to talk with contacts specifically approved by their parents — has run into an embarrassing safety issue.

The Verge obtained messages sent by Facebook to an unknown number of parents of the app’s users, informing them the company had found what it couches as “a technical error”: the flaw allowed a child’s friend to create a group chat in the app that pulled in one or more of that friend’s own parent-approved contacts, i.e. people who had never been approved by the first child’s parent.

Facebook did not make a public disclosure of the safety issue. We’ve reached out to the company with questions.

It earlier confirmed the bug to the Verge, telling it: “We recently notified some parents of Messenger Kids account users about a technical error that we detected affecting a small number of group chats. We turned off the affected chats and provided parents with additional resources on Messenger Kids and online safety.”

The issue appears to have arisen as a result of how Messenger Kids’ permissions are applied in group chat scenarios — where the multi-user chats apparently override the system of required parental approval for contacts with whom kids are chatting one on one.

But given the app’s support for group messaging, it’s pretty incredible that Facebook engineers failed to robustly enforce an additional layer of checks for friends of friends, to prevent unapproved users (who could include adults) from being able to connect and chat with children.

The Verge reports that “thousands” of children were left in chats with unauthorized users as a result of the flaw.

Despite its long history of playing fast and loose with user privacy, at the launch of Messenger Kids in 2017 the then head of Facebook Messenger, David Marcus, was quick to throw shade at other apps kids might use to chat — saying: “In other apps, they can contact anyone they want or be contacted by anyone.”

Turns out Facebook’s Messenger Kids has also allowed unapproved users into chat rooms it claimed were safe spaces for kids, in an app the company said it had developed in “lockstep” with the FTC.

We’ve reached out to the FTC to ask if it will be investigating the safety breach.

Friends’ data has been something of a recurring privacy black hole for Facebook. The expansive permissions the company wrapped around it enabled, for example, the misuse of millions of users’ personal information without their knowledge or consent when the now defunct political data company, Cambridge Analytica, paid a developer to harvest Facebook data to build psychographic profiles of U.S. voters.

The company is reportedly on the verge of being issued with a $5 billion penalty by the FTC related to an investigation of whether it breached earlier privacy commitments made to the regulator.

Various data protection laws govern apps that process children’s data, including the Children’s Online Privacy Protection Act (COPPA) in the U.S. and the General Data Protection Regulation (GDPR) in Europe. But while there are potential privacy issues here with the Messenger Kids flaw, given children’s data may have been shared with unauthorized third parties as a result of the “error,” the main issue of concern for parents is likely the safety risk of their children being exposed in an unsupervised video chat environment to people they have not authorized.

On that issue, current laws have less of a support framework to offer.

In Europe, though, rising concern about the range of risks and harms kids can face when going online has led the U.K. government to seek to regulate the area.

A recently published white paper sets out its plan to regulate a broad range of online harms, including proposing a mandatory duty of care on platforms to take reasonable steps to protect users from a range of harms, such as child sexual exploitation.

Facebook reportedly gets a $5 billion slap on the wrist from the FTC

The U.S. Federal Trade Commission has reportedly agreed to end its latest probe into Facebook‘s privacy problems with a $5 billion payout.

According to The Wall Street Journal, the 3-2, party-line vote by FTC commissioners was carried by the Republican majority and will be moved to the Justice Department’s civil division to be finalized.

A $5 billion payout seems like a significant sum, but Facebook had already set aside $3 billion to cover the cost of the settlement, and the company could likely make up the figure in less than a quarter’s revenue (its revenue for the last fiscal quarter was roughly $15 billion). Indeed, Facebook said in April that it expected to pay up to $5 billion to end the government’s probe.

The settlement will also include government restrictions on how Facebook treats user privacy, according to the Journal.

We have reached out to the FTC and Facebook for comment and will update this story when we hear back.

Ultimately, the partisan divide that held up the settlement broke down, with the commission’s Republican members overriding Democratic calls for greater oversight of the social media giant.

Lawmakers have been calling consistently for greater regulatory oversight of Facebook, and even a legislative push to break up the company, since the revelation of its mishandling of the private data of millions of Facebook users in the run-up to the 2016 presidential election, data that wound up being improperly collected by Cambridge Analytica.

Specifically, the FTC was examining whether the breach violated a 2012 consent decree under which Facebook committed to better privacy protection of user data.

Facebook’s woes didn’t end with Cambridge Analytica. The company has since been on the receiving end of a number of exposés around the use and abuse of its customers’ information, and calls to break up the big tech companies have only grown louder.

The settlement could also be a way for the company to buy its way out of more strict oversight as it faces investigations into its potentially anti-competitive business practices and inquiries into its launch of a new cryptocurrency — Libra — which is being touted as an electronic currency for Facebook users largely divorced from governmental monetary policy.

Potential sanctions proposed by lawmakers for the FTC were reported to include elevating privacy oversight to the company’s board of directors; deleting tracking data; restricting certain information collection; limiting ad targeting; and restricting the flow of user data among different Facebook business units.

The FBI, FTC and SEC are joining the Justice Department’s inquiries into Facebook’s Cambridge Analytica disclosures

An alphabet soup of federal agencies are now poring over Facebook’s disclosures and the company’s statements about its response to the improper use of its user information by the political consultancy Cambridge Analytica.

The Federal Bureau of Investigation, the Federal Trade Commission and the Securities and Exchange Commission have joined the Justice Department in examining how the personal information of 71 million Americans was distributed by Facebook and used by Cambridge Analytica, according to a Washington Post report released Monday.

According to the Post, the emphasis of the investigation has been on what Facebook disclosed about its information sharing with Cambridge Analytica and whether those disclosures correspond to the timeline being established by government investigators. The fear, for Facebook, is that the government may decide the company didn’t reveal enough to either investors or the public about the extent of the misappropriation of user data. Another concern is whether the Cambridge Analytica episode violated the terms of an earlier settlement Facebook made with the Federal Trade Commission.

The redoubled efforts of so many divisions could potentially ensnare Facebook chief executive Mark Zuckerberg, who was brought before Congress with other Facebook officials to testify about the breaches. People familiar with the investigation told the Post that the officials’ testimony was being scrutinized.

In a statement, Facebook noted it had received questions from different agencies and that it was cooperating.

The Federal Trade Commission first confirmed that it was investigating Facebook in March.

Acting director Tom Pahl said at the time:

The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices.

The multiple investigations by U.S. and U.K. agencies into the ways in which Cambridge Analytica accessed and exploited data on social media users in political campaigns have already pushed the political consulting firm into bankruptcy.

It’s unlikely (read: impossible) that Facebook would suffer anything like the same fate, and the company’s stock price has already recovered from whatever negative impact the scandal wrought on the social network’s market capitalization. Rather, the lingering investigations show the potential for government regulators (and lawmakers) to involve themselves in the company’s operations.

As with everything else in Washington, it’s always the cover-up, never the crime.

Facebook data misuse firm snubs UK watchdog’s legal order

The company at the center of a major Facebook data misuse scandal has failed to respond to a legal order issued by the U.K.’s data protection watchdog to provide a U.S. voter with all the personal information it holds on him.

An enforcement notice was served on Cambridge Analytica affiliate SCL Elections last month, and the deadline passed today without the company providing a response.

The enforcement order followed a complaint by the U.S. academic, professor David Carroll, that the original Subject Access Request (SAR) he made under European law seeking to obtain his personal data had not been satisfactorily fulfilled.

The academic has spent more than a year trying to obtain the data Cambridge Analytica/SCL held on him after learning the company had built psychographic profiles of U.S. voters for the 2016 presidential election, when it was working for the Trump campaign.

Speaking in front of the EU parliament’s justice, civil liberties and home affairs (LIBE) committee today, Carroll said: “We have heard nothing [from SCL in response to the ICO’s enforcement order]. So they have not respected the regulator. They have not co-operated with the regulator. They are not respecting the law, in my opinion. So that’s very troubling — because they seem to be trying to use liquidation to evade their responsibility as far as we can tell.”

While he is not a U.K. citizen, Carroll discovered his personal data had been processed in the U.K. so he decided to bring a test case under U.K. law. The ICO supported his complaint — and last month ordered Cambridge Analytica/SCL Elections to hand over everything it holds on him, warning that failure to comply with the order is a criminal offense that can carry an unlimited fine.

At the same time — and pretty much at the height of a storm of publicity around the data misuse scandal — Cambridge Analytica and SCL Elections announced insolvency proceedings, blaming what they described as “unfairly negative media coverage.”

Its Twitter account has been silent ever since. Yet company directors, senior management and investors were quickly spotted attaching themselves to yet another data company, so the bankruptcy proceedings look rather more like an exit strategy to escape the snowballing scandal and cover up any associated data trails.

There are a lot of data trails though. Back in April Facebook admitted that data on as many as 87 million of its users had been passed to Cambridge Analytica without most of the people’s knowledge or consent.

“I expected to help set precedents of data sovereignty in this case. But I did not expect to be trying to also set rules of liquidation as a way to avoid responsibility for potential data crimes,” Carroll also told the LIBE committee. “So now that this is seeming to become a criminal matter we are now in uncharted waters.

“I’m seeking full disclosure… so that I can evaluate if my opinions were influenced for the presidential election. I suspect that they were, I suspect that I was exposed to malicious information that was trying to [influence my vote] — whether it did is a different question.”

He added that he intends to continue to pursue a claim for full disclosure via the courts, arguing that the only way to assess whether psychographic models can successfully be matched to online profiles for the purposes of manipulating political opinions — which is what Cambridge Analytica/SCL stands accused of misusing Facebook data for — is to see how the company structured and processed the information it sucked out of Facebook’s platform.

“If the predictions of my personality are in 80-90% then we can understand that their model has the potential to affect a population — even if it’s just a tiny slice of the population. Because in the US only about 70,000 voters in three states decided the election,” he added.

What comes after Cambridge Analytica?

The LIBE committee hearing in the European Union’s parliament is the first of a series of planned sessions focused on digging into the Cambridge Analytica Facebook scandal and “setting out a way forward,” as committee chair Claude Moraes put it.

Today’s hearing took evidence from former Facebook employee turned whistleblower Sandy Parakilas; investigative journalist Carole Cadwalladr; Cambridge Analytica whistleblower Chris Wylie; and the U.K.’s ICO Elizabeth Denham, along with her deputy, James Dipple-Johnstone.

The Information Commissioner’s Office has been running a more-than-year-long investigation into political ad targeting on online platforms — which now of course encompasses the Cambridge Analytica scandal and much more besides.

Denham described it today as “unprecedented in scale” — and likely the largest investigation ever undertaken by a data protection agency in Europe.

The inquiry is looking at “exactly what data went where; from whom; and how that flowed through the system; how that data was combined with other data from other data brokers; what were the algorithms that were processed,” explained Dipple-Johnstone, who is leading the investigation for the ICO.

“We’re presently working through a huge volume — many hundreds of terabytes of data — to follow that audit trail and we’re committed to getting to the bottom of that,” he added. “We are looking at over 30 organizations as part of this investigation and the actions of dozens of key individuals. We’re investigating social media platforms, data brokers, analytics firms, political parties and campaign groups across all spectrums and academic institutions.

“We are looking at both regulatory and criminal breaches, and we are working with other regulators, EU data protection colleagues and law enforcement in the U.K. and abroad.”

He said the ICO’s report is now expected to be published at the end of this month.

Denham previously told a U.K. parliamentary committee she’s leaning toward recommending a code of conduct for the use of social media in political campaigns to avoid the risk of political uses of the technology getting ahead of the law — a point she reiterated today.

“Beyond data protection I expect my report will be relevant to other regulators overseeing electoral processes and also overseeing academic research,” she said, emphasizing that the recommendations will be relevant “well beyond the borders of the U.K.”

“What is clear is that work will need to be done to strengthen information-sharing and closer working across these areas,” she added.

Many MEPs asked the witnesses for their views on whether the EU’s new data protection framework, the GDPR, is sufficient to curb the kinds of data abuse and misuse so publicly foregrounded by the Cambridge Analytica-Facebook scandal, or whether additional regulations are required.

On this Denham made a plea for GDPR to be “given some time to work.” “I think the GDPR is an important step, it’s one step but remember the GDPR is the law that’s written on paper — and what really matters now is the enforcement of the law,” she said.

“So it’s the activities that data protection authorities are willing to do. It’s the sanctions that we look at. It’s the users and the citizens who understand their rights enough to take action — because we don’t have thousands of inspectors that are going to go around and look at every system. But we do have millions of users and millions of citizens that can exercise their rights. So it’s the enforcement and the administration of the law. It’s going to take a village to change the scenario.

“You asked me if I thought this kind of activity which we’re speaking about today — involving Cambridge Analytica and Facebook — is happening on other platforms or if there’s other applications or if there’s misuse and mis-selling of personal data. I would say yes,” she said in response to another question from an MEP.

“Even in the political arena there are other political consultancies that are pairing up with data brokers and other data analytics companies. I think there is a lack of transparency for users across many platforms.”

Parakilas, a former Facebook platform operations manager (and the closest stand-in for the company in the room), fielded many of the questions from MEPs, including a request for suggestions for a legislative framework that “wouldn’t put brakes on the development of healthy companies” and would not be unduly burdensome on smaller ones.

He urged EU lawmakers to think about ways to incentivize a commercial ecosystem that works to encourage rather than undermine data protection and privacy, as well as ensuring regulators are properly resourced to enforce the law.

“I think the GDPR is a really important first step,” he added. “What I would say beyond that is there’s going to have to be a lot of thinking that is done about the next generation of technologies — and so while I think GDPR does an admirable job of addressing some of the issues with current technologies, the stuff that’s coming is, frankly, terrifying when you think about the bad cases.

“Things like deepfakes. The ability to create on-demand content that’s completely fabricated but looks real… Things like artificial intelligence which can predict user actions before those actions are actually done. And in fact Facebook is just one company that’s working on this — but the fact that they have a business model where they could potentially sell the ability to influence future actions using these predictions. There’s a lot of thinking that needs to be done about the frameworks for these new technologies. So I would just encourage you to engage as soon as possible on those new technologies.”

Parakilas also discussed fresh revelations, published by The New York Times at the weekend, about how Facebook’s platform disseminates user data.

The newspaper’s report details how, until April, Facebook’s API was passing user and friend data to at least 60 device makers without gaining people’s consent — despite a consent decree the company struck with the Federal Trade Commission in 2011, which Parakilas suggested “appears to prohibit that kind of behavior.”

He also pointed out the device maker data-sharing “appears to contradict Facebook’s own testimony to Congress and potentially other testimony and public statements they’ve made” — given the company’s repeat claims, since the Cambridge Analytica scandal broke, that it “locked down” data-sharing on its platform in 2015.

Yet data was still flowing out to multiple device maker partners — apparently without users’ knowledge or consent.

“I think this is a very, very important developing story. And I would encourage everyone in this body to follow it closely,” he said.

Two more LIBE hearings are planned around the Cambridge Analytica scandal — one on June 25 and one on July 2 — with the latter slated to include a Facebook representative.

Mark Zuckerberg himself attended a meeting with the EU parliament’s Council of Presidents on May 22, though the format of the meeting was widely criticized for allowing the Facebook founder to cherry-pick questions he wanted to answer — and dodge those he didn’t.

MEPs pushed for Facebook to follow up with answers to their many outstanding questions — and two sets of Facebook responses have now been published by the EU parliament.

In its follow up responses the company claims, for example, that it does not create shadow profiles on non-users — saying it merely collects information on site visitors in the same way that “any website or app” might.

On the issue of compensation for EU users affected by the Cambridge Analytica scandal — something MEPs also pressed Zuckerberg on — Facebook claims it has not seen evidence that the app developer who harvested people’s data from its platform on behalf of Cambridge Analytica/SCL sold any EU users’ data to the company.

The developer, Dr. Aleksandr Kogan, had been contracted by SCL Elections for U.S.-related election work, although his apps collected data on Facebook users from all over the world, including some 2.7 million EU citizens.

“We will conduct a forensic audit of Cambridge Analytica, which we hope to complete as soon as we are authorized by the UK’s Information Commissioner,” Facebook also writes on that.

Progressive advocacy groups call on the FTC to ‘make Facebook safe for democracy’

A team of progressive advocacy groups, including MoveOn and Demand Progress, are asking the Federal Trade Commission to “make Facebook safe for democracy.” According to Axios, the campaign, called Freedom From Facebook, is set to launch a six-figure ad campaign on Monday that will run on Facebook, Instagram and Twitter, among other platforms.

The other advocacy groups behind the campaign are Citizens Against Monopoly, Content Creators Coalition, Jewish Voice for Peace, Mpower Change, Open Markets Institute and SumOfUs. Together they are calling on the FTC to “break up Facebook’s monopoly” by forcing it to spin off Instagram, WhatsApp and Messenger into separate, competing companies. They also want the FTC to require interoperability so users can communicate across competing social networks and strengthen privacy regulations.

Freedom From Facebook’s site also includes an online petition and privacy guide that links to FB Purity and the Electronic Frontier Foundation’s Privacy Badger, browser extensions that help users streamline their Facebook ad preferences and block online trackers, respectively.

The FTC recently gained a new chairman after President Donald Trump’s pick for the position, Joseph Simons, was sworn in earlier this month, along with four new commissioners also nominated by Trump. Simons is an antitrust lawyer who has represented large tech firms like Microsoft and Sony. The FTC is currently investigating whether Facebook’s involvement with Cambridge Analytica violated a previous legal agreement with the commission. But many people are wondering whether it and other federal agencies are capable of regulating tech companies, especially after many lawmakers seemed confused about how social media works during Facebook CEO Mark Zuckerberg’s congressional hearing last month.

Despite its data privacy and regulatory issues, Facebook is still doing well from a financial perspective. Its first-quarter earnings report showed strong user growth and revenue above Wall Street’s expectations.

TechCrunch has contacted Freedom From Facebook and Facebook for comment.