
Facebook really doesn’t want you to read these emails

Oh hey, y’all, it’s Friday! It’s August! Which means it’s a great day for Facebook to drop a little news it would prefer you didn’t notice. News that you won’t find a link to on the homepage of Facebook’s Newsroom — which is replete with colorfully illustrated items it does want you to read (like the puffed-up claim that “Now You Can See and Control the Data That Apps and Websites Share With Facebook”).

The blog post Facebook would really prefer you didn’t notice is tucked away in a News sub-section of its website — where it’s been confusingly titled “Document Holds the Potential for Confusion”. And it has an unenticing grey document icon to further put you off — just in case you happened to stumble on it after all. It’s almost as if Facebook is saying “definitely don’t click here”…


So what is Facebook trying to bury in the horse latitudes of summer?

An internal email chain, starting September 2015, which shows a glimpse of what Facebook’s own staff knew about the activity of Cambridge Analytica prior to The Guardian’s December 2015 scoop — when the newspaper broke the story that the controversial (and now defunct) data analytics firm, then working for Ted Cruz’s presidential campaign, had harvested data on millions of Facebook users without their knowledge and/or consent, and was using psychological insights gleaned from the data to target voters.

Facebook founder Mark Zuckerberg’s official timeline of events about what he knew when vis-à-vis the Cambridge Analytica story has always been that his knowledge of the matter dates to December 2015 — when the Guardian published its story.

But the email thread Facebook is now releasing shows internal concerns being raised almost two months earlier.

This chimes with previous (more partial) releases of internal correspondence pertaining to Cambridge Analytica — which have also come out as a result of legal actions (and which we’ve reported on previously here and here).

If you click to download the latest release, which Facebook suggests it ‘agreed’ with the District of Columbia Attorney General to “jointly make public”, you’ll find a redacted thread of emails in which Facebook staffers raise a number of platform policy violation concerns related to the “political partner space”, writing September 29, 2015, that “many companies seem to be on the edge- possibly over”.

Cambridge Analytica is first identified by name — when it’s described by a Facebook employee as “a sketchy (to say the least) data modelling company that has penetrated our market deeply” — on September 22, 2015, per this email thread. It is one of many companies the staffer writes are suspected of scraping user data — but is also described as “the largest and most aggressive on the conservative side”.


On September 30, 2015, a Facebook staffer responds to this, asking for App IDs and app names for the apps engaging in scraping user data — before writing: “My hunch is that these apps’ data-scraping is likely non-compliant”.

“It would be very difficult to engage in data-scraping activity as you described while still being compliant with FPPs [Facebook Platform Policies],” this person adds.

Cambridge Analytica gets another direct mention (“the Cambridge app”) on the same day. A different Facebook staffer then chips in with a view that “it’s very likely these companies are not in violation of any of our terms” — before asking for “concrete examples” and warning against calling them to ask questions unless “red flags” have been confirmed.

On October 13, a Facebook employee chips back into the thread with the view that “there are likely a few data policy violations here”.

The email thread goes on to discuss concerns related to additional political partners and agencies using Facebook’s platform at that point, including ForAmerica, Creative Response Concepts, NationBuilder and Strategic Media 21. Which perhaps explains Facebook’s lack of focus on CA — if potentially “sketchy” political activity was apparently widespread.

On December 11 another Facebook staffer writes to ask for an expedited review of Cambridge Analytica — saying it’s “unfortunately… now a PR issue”, i.e. as a result of the Guardian publishing its article.

The same day a Facebook employee emails to say Cambridge Analytica “is hi pri at this point”, adding: “We need to sort this out ASAP” — a month and a half after the initial concern was raised.

Also on December 11 a staffer writes that they had not heard of GSR, the Cambridge-based developer CA hired to extract Facebook user data, before the Guardian article named it. But other Facebook staffers chip in to reveal personal knowledge of the psychographic profiling techniques deployed by Cambridge Analytica and GSR’s Dr Aleksandr Kogan, with one writing that Kogan was their postdoc supervisor at Cambridge University.

Another says they are friends with Michal Kosinski, the lead author of a personality modelling paper that underpins the technique used by CA to try to manipulate voters — which they described as “solid science”.

A different staffer also flags the possibility that Facebook has worked with Kogan — ironically enough “on research on the Protect & Care team” — citing the “Wait, What thread” and another email, neither of which appear to have been released by Facebook in this ‘Exhibit 1’ bundle.

So we can only speculate on whether Facebook’s decision — around September 2015 — to hire Kogan’s GSR co-founder, Joseph Chancellor, appears as a discussion item in the ‘Wait, What’ thread…

Putting its own spin on the release of these internal emails in a blog post, Facebook sticks to its prior line that “unconfirmed reports of scraping” and “policy violations by Aleksandr Kogan” are two separate issues, writing:

We believe this document has the potential to confuse two different events surrounding our knowledge of Cambridge Analytica. There is no substantively new information in this document and the issues have been previously reported. As we have said many times, including last week to a British parliamentary committee, these are two distinct issues. One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica. This document proves the issues are separate; conflating them has the potential to mislead people.

It has previously also referred to the internal concerns raised about CA as “rumors”.

“Facebook was not aware that Kogan sold data to Cambridge Analytica until December 2015. That is a fact that we have testified to under oath, that we have described to our core regulators, and that we stand by today,” it adds now.

It also claims that after an engineer responded to concerns that CA was scraping data and looked into it they were not able to find any such evidence. “Even if such a report had been confirmed, such incidents would not naturally indicate the scale of the misconduct that Kogan had engaged in,” Facebook adds.

The company has sought to dismiss the privacy litigation brought against it by the District of Columbia which is related to the Cambridge Analytica scandal — but has been unsuccessful in derailing the case thus far.

The DC complaint alleges that Facebook allowed third-party developers to access consumers’ personal data, including information on their online behavior, in order to offer apps on its platform, and that it failed to effectively oversee and enforce its platform policies by not taking reasonable steps to protect consumer data and privacy. It also alleges Facebook failed to inform users of the CA breach.

Facebook has also failed to block another similar lawsuit that’s been filed in Washington, DC by Attorney General Karl Racine — which has alleged lax oversight and misleading privacy standards.

Facebook unveils new tools to control how websites share your data for ad-targeting

Last year, Facebook CEO Mark Zuckerberg announced that the company would be creating a “Clear History” feature that deletes the data that third-party websites and apps share with Facebook. Today, the company is actually launching that feature in select geographies.

It’s gotten a new name in the meantime: Off-Facebook Activity. David Baser, the director of product management leading Facebook’s privacy and data use team, told me the name should make it clear to everyone “exactly what kind of data” is being revealed here.

In a demo video, Baser showed me how a user could bring up a list of everyone sending data to Facebook, and then tap on a specific app or website to learn what data is being shared. If you decide that you don’t like this data-sharing, you can block it, either for an individual website or app, or across the board.

Facebook has of course been facing greater scrutiny over data-sharing over the past couple of years, thanks to the Cambridge Analytica scandal. This, along with concerns about misinformation spreading on the platform, has led the company to launch a number of new transparency tools around advertising and content.

In this case, Facebook isn’t deleting the data that a third party might have collected about your behavior. Instead, it’s removing the connection between that data and your personal information on Facebook (any old data associated with an account is deleted, as well).

Baser said that disconnecting your off-Facebook activity will have the immediate effect of logging you out of any website or app where you used your Facebook login. More broadly, he argued that maintaining this connection benefits both consumers and businesses, because it leads to more relevant advertising — if you were looking at a specific type of shoe on a retailer’s website, Facebook could then show you ads for those shoes.

Still, Baser said, “We at Facebook want people to know this is happening.” So it’s not hiding these options away deep within a hidden menu, but making them accessible from the main settings page.

He also suggested that no other company has tried to create this kind of “comprehensive surface” for letting users control their data, so Facebook wanted to figure out the right approach that wouldn’t overwhelm or confuse users. For example, he said, “Every single aspect of this product follows the principle of progressive disclosure” — so you get a high-level overview at first, but can see more information as you move deeper into the tools.

Facebook says it worked with privacy experts to develop this feature — and behind the scenes, it had to change the way it stores this data to make it viewable and controllable by users.

I asked about whether Facebook might eventually add tools to control certain types of data, like purchase history or location data, but Baser said the company found that “very few people understood the data enough” to want something like that.

“I agree with your instinct, but that’s not the feedback we got,” he said, adding that if there’s significant user demand, “Of course, we’d consider it.”

The Off-Facebook Activity tool is rolling out initially in Ireland, South Korea and Spain before expanding to additional countries.

US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who will appear as witnesses before the grand committee has yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before it the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Instagram says growth hackers are behind spate of fake Stories views

If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.

Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.

TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.

A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)

Instagram told us it is aware of the issue and is working on a fix.

It also said this inauthentic activity is not related to misinformation campaigns but is rather a new growth hacking tactic, in which accounts pay third parties to try to boost their profiles via fake likes, followers and comments (in this case by watching the Instagram Stories of people they have no real interest in, in the hope that doing so will help them pass as genuine and net them more followers).

Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )

A UK social media agency called Hydrogen also noticed the issue back in June — blogging then that: “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. as a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.

So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.

“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”

Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)

It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.

We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.

What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?

Switching your profile to private is the only way to thwart the growth hackers, for now.

Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.

When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”

Daily Crunch: Final Oculus co-founder departs Facebook

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook is losing its last Oculus co-founder

Nate Mitchell, the final Oculus co-founder remaining at Facebook, announced in an internal memo that he’s leaving the company and “taking time to travel, be with family, and recharge.” His role within the company has shifted several times since Oculus was acquired, but his current title is head of product management for virtual reality.

This follows the departures of former Oculus CEO Brendan Iribe and co-founder Palmer Luckey.

2. Twitter tests ways for users to follow and snooze specific topics

The company isn’t getting rid of the ability to follow other users, but it announced yesterday that it will start pushing users to follow topics as well, which will feature highly engaged tweets from a variety of accounts.

3. WeWork’s S-1 misses these three key points

WeWork just released its S-1 ahead of going public, but Danny Crichton argues we still don’t know the health of the core of the company’s business model or fully understand the risks it is undertaking. (Extra Crunch membership required.)

4. CBS and Viacom are merging into a combined company called ViacomCBS

The move is, in some ways, a concession to a turbulent media environment driving large-scale M&A, with AT&T buying Time Warner and Disney acquiring most of Fox — both deals are seen as consolidation in preparation for a streaming-centric future.

5. Nvidia breaks records in training and inference for real-time conversational AI

Nvidia’s GPU-powered platform for developing and running conversational AI that understands and responds to natural language requests has achieved some key milestones and broken some records, with big implications for anyone building on their tech.

6. Corporate carpooling startup Scoop raises $60 million

Scoop, which launched back in 2015, is a corporate carpooling service that works with the likes of LinkedIn, Workday, T-Mobile and more than 50 other companies to help their employees get to and from work.

7. Domino’s launches e-bike delivery to compete with UberEats, DoorDash

Domino’s will start using custom electric bikes for pizza delivery through a partnership with Rad Power Bikes.

‘Private’ and ‘hidden’ mean different things to Facebook

Facebook’s leadership made a pretty heavy-handed indication this year that it believes Facebook Groups are the future of the app, announcing all of this alongside its odd declaration that “The future is private.” Now, Facebook is changing the language it uses to describe the visibility and privacy of groups.

As the Groups feature has come front-and-center in recent redesigns, Facebook has decided that the language they have been using to describe the visibility of “Public,” “Closed” and “Secret” Groups isn’t as clear as it should be, so the company is switching it up. Groups will now be labeled either “Public” or “Private.”

That means that groups that were previously “Closed” or “Secret” will now share the designation of “Private,” meaning that only members of the group can see who’s in the group or what has been posted. The distinction is that there’s now a second setting, whether or not the group is “Visible,” which denotes if the group can be found via search. Groups that were previously “Closed” will migrate to “Private” and remain “Visible,” while “Secret” groups will become “Private” and remain “Hidden.”
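The migration described above amounts to a simple lookup from the old one-word labels to the new two-axis scheme. A minimal sketch (the field names here are purely illustrative, not real Facebook API values):

```python
# Hypothetical illustration of Facebook's Group label migration:
# old single labels map onto a privacy setting plus search visibility.
# Names are descriptive only, not actual Facebook API values.
OLD_TO_NEW = {
    "Public": {"privacy": "Public",  "visible_in_search": True},
    "Closed": {"privacy": "Private", "visible_in_search": True},
    "Secret": {"privacy": "Private", "visible_in_search": False},
}

def migrate_group_label(old_label: str) -> dict:
    """Return the new privacy/visibility settings for an old Group label."""
    return OLD_TO_NEW[old_label]
```

As the mapping makes plain, “Closed” and “Secret” collapse into the same privacy setting and differ only on discoverability.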


In a way, this is just Facebook applying more privacy-related labels in its app to change perceptions while the feature sets stay the same, but the discoverability of a “Closed” group in search was probably the biggest point of confusion Facebook was aiming to rectify. There’s a clear editorial message in Facebook conveying that there are shades and nuances to what “Private” means on Facebook, compared to “Public,” which is unwavering and the default.

The point of the previous labels was to make privacy settings easier to grasp with a single word. Facebook didn’t hit a home run with those labels, but it kind of feels like you really need to see Facebook’s explanatory graphic to fully grasp the distinctions between Groups now, which probably isn’t the best sign.

New Facebook ad units can remind you when a movie comes out

Facebook is launching two new ad units designed to help movie studios promote their latest releases.

The first unit is called a movie reminder ad, and it does exactly that — since studios usually start marketing their titles months or even years before release, they can now include an Interested button in their Facebook ads, allowing users to opt in to a notification when the film is released. Then, on the Friday before opening weekend, interested moviegoers will get a reminder pointing them to a page with showtimes and ticket purchase options from Fandango and Atom Tickets.

Meanwhile, a showtime ad is designed for a later stage of a marketing campaign, when the movie is already in theaters. These ads feature a Get Showtimes button that will direct users to that same detail page with nearby showtimes and ticket purchase links.

In Facebook-commissioned research from Accenture published earlier this year, 58% of moviegoers said they discover new films online, with 39% doing so on smartphones and tablets.

Jen Howard, Facebook’s group director for entertainment and technology, told me that this should provide the Hollywood studios (who, aside from Disney, are having a rough summer) with a seamless way to connect their ads with movie ticket purchases. She also argued that it allows them to address “the full funnel” of viewer interest, and is “really starting to get them closer to a direct-to-consumer experience with moviegoers.”

Facebook says it’s already been testing the ad formats with select studios. For example, Universal Pictures used showtime ads to promote “The Grinch,” resulting in what Facebook said was “a significant increase in showtime lookups and ticket purchases.”

Movie reminder ads and showtime ads are now available to all studios in the United States and the United Kingdom.

Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe

Facebook’s lead privacy regulator in Europe is now asking the company for detailed information about the operation of a voice-to-text feature in Facebook’s Messenger app and how it complies with EU law.

Yesterday Bloomberg reported that Facebook uses human contractors to transcribe app users’ audio messages — yet its privacy policy makes no clear mention of the fact that actual people might listen to your recordings.

A page on Facebook’s help center also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.

A spokesperson for the Irish Data Protection Commission told us: “Further to our ongoing engagement with Google, Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”

Bloomberg’s report follows similar revelations about AI assistant technologies offered by other tech giants, including Apple, Amazon, Google and Microsoft — which have also attracted attention from European privacy regulators in recent weeks.

What this tells us is that the hype around AI voice assistants is still glossing over a far less high-tech backend, even as lashings of machine learning marketing guff are used to cloak the ‘mechanical turk’ components (i.e. humans) required for the tech to live up to the claims.

This is a very old story indeed. To wit: A full decade ago, a UK startup called Spinvox, which had claimed to have advanced voice recognition technology for converting voicemails to text messages, was reported to be leaning very heavily on call centers in South Africa and the Philippines… staffed by, yep, actual humans.

Returning to present day ‘cutting-edge’ tech, following Bloomberg’s report Facebook said it suspended human transcriptions earlier this month — joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. (Amazon has since added an opt out to the Alexa app’s settings.)

We asked Facebook where in the Messenger app it had been informing users that human contractors might be used to transcribe their voice chats/audio messages; and how it collected Messenger users’ consent to this form of data processing — prior to suspending human reviews.

The company did not respond to our questions. Instead a spokesperson provided us with the following statement: “Much like Apple and Google, we paused human review of audio more than a week ago.”

Facebook also described the audio snippets that it sent to contractors as masked and de-identified; said they were only collected when users had opted in to transcription on Messenger; and were only used for improving the transcription performance of the AI.

It also reiterated a long-standing rebuttal by the company to user concerns about general eavesdropping by Facebook, saying it never listens to people’s microphones without device permission nor without explicit activation by users.

How Facebook gathers permission to process data is a key question, though.

The company has recently, for example, used a manipulative consent flow in order to nudge users in Europe to switch on facial recognition technology — rolling back its previous stance, adopted in response to earlier regulatory intervention, of switching the tech off across the bloc.

So a lot rests on how exactly Facebook has described the data processing at any point it is asking users to consent to their voice messages being reviewed by humans (assuming it’s relying on consent as its legal basis for processing this data).

Bundling consent into general T&Cs for using the product is also unlikely to be compliant under EU privacy law, given that the bloc’s General Data Protection Regulation requires consent to be purpose limited, as well as fully informed and freely given.

If Facebook is relying on legitimate interests to process Messenger users’ audio snippets in order to enhance its AI’s performance it would need to balance its own interests against any risk to people’s privacy.

Voice AIs are especially problematic in this respect because audio recordings may capture the personal data of non-users too — given that people in the vicinity of a device (or indeed a person on the other end of the phone line who’s leaving you a message) could have their personal data captured without ever having had the chance to consent to Facebook contractors getting to hear it.

Leaks of Google Assistant snippets to the Belgian press recently highlighted both the sensitive nature of recordings and the risk of reidentification posed by such recordings — with journalists able to identify some of the people in the recordings.

Multiple press reports have also suggested contractors employed by tech giants are routinely overhearing intimate details captured via a range of products that include the ability to record audio and stream this personal data to the cloud for processing.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies between evidence Facebook had given to international parliamentarians and evidence it submitted in response to the Washington, DC Attorney General, who is suing Facebook on its home turf over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegations that the company knew of other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused. The committee pointed out that Facebook reps had told it on multiple occasions that the company knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating by writing that the investigation (which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, pledging to “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developer found to have misused user data; and “tell everyone affected by those apps”) is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

India’s Meesho raises $125M to expand its social commerce business

Meesho, a Bangalore-based social commerce startup, has raised $125 million in a new financing round to expand its business in the country and change the way millions shop online.

The Series D round was led by Naspers, with existing investors SAIF, Sequoia, Shunwei Capital, RPS and Venture Highway also participating. Facebook took part in the round too, as did Arun Sarin, former CEO of Vodafone Group. The four-year-old startup has raised $190 million to date.

Meesho is an online marketplace that connects sellers with customers on social media platforms such as WhatsApp, Facebook, and Instagram. The startup claims to have a network of more than 2 million resellers from 700 towns who largely deal in apparel, home appliances and electronics.

These resellers are mostly homemakers, most of whom have purchased a smartphone for the first time in recent years. Eighty percent of Meesho’s user base is female.


Meesho said it will use the fresh capital to expand its reach in the nation and add as many as 18 million new sellers by the end of next year. “The latest investment will also strengthen Meesho’s aim to grow its community of women entrepreneurs who have dreamt of running their own businesses but lacked the funds and expertise to do so,” the company said.

More than 90% of businesses in India are still offline and unorganized. Meesho is trying to get these businesses, most of which lack the working capital to build their own online presence, to sell online, Vidit Aatrey, Meesho co-founder and CEO, told TechCrunch in an interview.

“I am particularly proud that Meesho has cut across gender, education levels, risk appetites and vocations to create livelihoods for people with no investment of their own. Our social sellers are small retailers, women, students and retired citizens, with 70% being homemakers who have found financial freedom and a business identity without having to step outside their homes,” said Aatrey.

Meesho also plans to use the new funds to further bulk up its technology platform to accommodate new product lines.

“The phenomenal growth they are already experiencing shows that Meesho has hit a sweet spot in the market and is well-poised to serve the next 500 million online shoppers in the country,” said Ashutosh Sharma, Head of India Investments, Naspers Ventures, in a statement.