US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who will appear as witnesses before the grand committee is still to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg, who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of its ongoing failure to tackle democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for its next meeting a strategic one. Should any tech CEOs choose to snub an invite to testify, they might find themselves served with an open summons by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

What will Tumblr become under the ownership of tech’s only Goldilocks founder?

This week, Automattic revealed it has signed all the paperwork to acquire Tumblr from Verizon, including its full staff of 200. Tumblr has undergone quite a journey since its headline-grabbing acquisition by Marissa Mayer’s Yahoo in 2013 for $1.1 billion, but after six years of neglect, its latest move is its first real start since it stopped being an independent company. Now, it’s in the hands of Matt Mullenweg, the only founder of a major tech company who has repeatedly demonstrated a talent for measured responses, moderation and a willingness to forego reckless explosive growth in favor of getting things ‘just right.’

There’s never been a better acquisition for all parties involved, or at least one in which every party should walk away feeling they got exactly what they needed out of the deal. Yes, that’s in spite of the reported $3 million-ish asking price.

Verizon Media acquired Tumblr through a deal made to buy Yahoo, under a previous media unit strategy and leadership team. Verizon Media has no stake in the company, and so headlines talking about the bath it apparently took relative to the original $1.1 billion acquisition price are either willfully ignorant or just plain dumb.

Six years after Yahoo made that bad deal for a company it clearly didn’t have the right business focus to operate correctly, Verizon made a good one to recoup some money.

Aligned leadership and complementary offerings drive a win-win

Instagram says growth hackers are behind spate of fake Stories views

If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.

Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.

TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.

A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)

Instagram told us it is aware of the issue and is working on a fix.

It also said this inauthentic activity is not related to misinformation campaigns but is rather a new growth hacking tactic — accounts pay third parties to try to boost their profile with fake likes, followers and comments (in this case by watching the Instagram Stories of people they have no real interest in, in the hope that the activity helps them pass as genuine and nets them more followers).

Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )

A UK social media agency called Hydrogen also noticed the issue back in June — blogging at the time that “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.

So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.

“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”

Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)

It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.

We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.

What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?

Switching your profile to private is the only way to thwart the growth hackers, for now.

Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.

When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”

Daily Crunch: Final Oculus co-founder departs Facebook

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook is losing its last Oculus co-founder

Nate Mitchell, the final Oculus co-founder remaining at Facebook, announced in an internal memo that he’s leaving the company and “taking time to travel, be with family, and recharge.” His role within the company has shifted several times since Oculus was acquired, but his current title is head of product management for virtual reality.

This follows the departures of former Oculus CEO Brendan Iribe and co-founder Palmer Luckey.

2. Twitter tests ways for users to follow and snooze specific topics

The company isn’t getting rid of the ability to follow other users, but it announced yesterday that it will start pushing users to follow topics as well, which will feature highly engaged tweets from a variety of accounts.

3. WeWork’s S-1 misses these three key points

WeWork just released its S-1 ahead of going public, but Danny Crichton argues we still don’t know the health of the core of the company’s business model or fully understand the risks it is undertaking. (Extra Crunch membership required.)

4. CBS and Viacom are merging into a combined company called ViacomCBS

The move is, in some ways, a concession to a turbulent media environment driving large-scale M&A, with AT&T buying Time Warner and Disney acquiring most of Fox — both deals are seen as consolidation in preparation for a streaming-centric future.

5. Nvidia breaks records in training and inference for real-time conversational AI

Nvidia’s GPU-powered platform for developing and running conversational AI that understands and responds to natural language requests has achieved some key milestones and broken some records, with big implications for anyone building on its tech.

6. Corporate carpooling startup Scoop raises $60 million

Scoop, which launched back in 2015, is a corporate carpooling service that works with the likes of LinkedIn, Workday, T-Mobile and more than 50 other companies to help their employees get to and from work.

7. Domino’s launches e-bike delivery to compete with UberEats, DoorDash

Domino’s will start using custom electric bikes for pizza delivery through a partnership with Rad Power Bikes.

‘Private’ and ‘hidden’ mean different things to Facebook

Facebook’s leadership made a pretty heavy-handed indication this year that it believes Facebook Groups are the future of the app. They announced all of this alongside their odd declaration that “The future is private.” Now, Facebook is changing the language describing the visibility and privacy of groups.

As the Groups feature has come front-and-center in recent redesigns, Facebook has decided that the language they have been using to describe the visibility of “Public,” “Closed” and “Secret” Groups isn’t as clear as it should be, so the company is switching it up. Groups will now be labeled either “Public” or “Private.”

That means that groups that were previously “Closed” or “Secret” will now share the designation of “Private,” meaning that only members of the group can see who’s in the group or what has been posted. The distinction is that there’s now a second metric — whether or not the group is “Visible,” which denotes whether the group can be found via search. For groups that were previously “Closed,” the migration to the new classification will leave them “Visible,” while “Secret” groups will remain “Hidden.”
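To make the relabeling concrete, here is a minimal sketch of the old-to-new mapping in Python. It is purely illustrative (these are not Facebook's actual API values), and it assumes previously "Public" groups stay searchable:

```python
# Illustrative only: how the legacy group labels map onto the new
# privacy + visibility pair described above (not a Facebook API).
OLD_TO_NEW = {
    "Public": {"privacy": "Public", "visibility": "Visible"},   # assumption: public groups remain searchable
    "Closed": {"privacy": "Private", "visibility": "Visible"},  # members-only content, still findable via search
    "Secret": {"privacy": "Private", "visibility": "Hidden"},   # members-only content, not findable via search
}

def migrate(old_label: str) -> dict:
    """Return the new privacy/visibility pair for a legacy group label."""
    return OLD_TO_NEW[old_label]

for old in ("Public", "Closed", "Secret"):
    print(f"{old} -> {migrate(old)}")
```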

[Screenshot: Facebook’s chart of the new group privacy and visibility settings]

In a way, this is just Facebook throwing more privacy-related labels into its app to change perceptions while the feature set stays the same. But whether a “Closed” group showed up in search was probably the biggest point of confusion Facebook was aiming to rectify. There’s a clear editorial message here: there are shades and nuances to what “Private” means on Facebook, compared to “Public,” which is unwavering and the default.

The point of the previous labels was to make privacy settings easier to grasp with a single word. Facebook didn’t hit a home run with those labels, but it kind of feels like you really need to see this graphic to fully grasp how Groups are now differentiated, which probably isn’t the best sign.

New Facebook ad units can remind you when a movie comes out

Facebook is launching two new ad units designed to help movie studios promote their latest releases.

The first unit is called a movie reminder ad, and it does exactly that — since studios usually start marketing their titles months or even years before release, they can now include an Interested button in their Facebook ads, allowing users to opt in to a notification when the film is released. Then, on the Friday before opening weekend, interested moviegoers will get a reminder pointing them to a page with showtimes and ticket purchase options from Fandango and Atom Tickets.

Meanwhile, a showtime ad is designed for a later stage of a marketing campaign, when the movie is already in theaters. These ads feature a Get Showtimes button that will direct users to that same detail page with nearby showtimes and ticket purchase links.

In Facebook-commissioned research from Accenture published earlier this year, 58% of moviegoers said they discover new films online, with 39% doing so on smartphones and tablets.

Jen Howard, Facebook’s group director for entertainment and technology, told me that this should provide the Hollywood studios (who, aside from Disney, are having a rough summer) with a seamless way to connect their ads with movie ticket purchases. She also argued that it allows them to address “the full funnel” of viewer interest, and is “really starting to get them closer to a direct-to-consumer experience with moviegoers.”

Facebook says it’s already been testing the ad formats with select studios. For example, Universal Pictures used showtime ads to promote “The Grinch,” resulting in what Facebook said was “a significant increase in showtime lookups and ticket purchases.”

Movie reminder ads and showtime ads are now available to all studios in the United States and the United Kingdom.

Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe

Facebook’s lead privacy regulator in Europe is now asking the company for detailed information about the operation of a voice-to-text feature in Facebook’s Messenger app and how it complies with EU law.

Yesterday Bloomberg reported that Facebook uses human contractors to transcribe app users’ audio messages — yet its privacy policy makes no clear mention of the fact that actual people might listen to your recordings.

A page on Facebook’s help center also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.

A spokesperson for the Irish Data Protection Commission told us: “Further to our ongoing engagement with Google, Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”

Bloomberg’s report follows similar revelations about AI assistant technologies offered by other tech giants, including Apple, Amazon, Google and Microsoft — which have also attracted attention from European privacy regulators in recent weeks.

What this tells us is that the hype around AI voice assistants is still glossing over a far less high-tech backend, even as lashings of machine learning marketing guff have been used to cloak the ‘mechanical turk’ components (i.e. humans) required for the tech to live up to the claims.

This is a very old story indeed. To wit: A full decade ago, a UK startup called Spinvox, which had claimed to have advanced voice recognition technology for converting voicemails to text messages, was reported to be leaning very heavily on call centers in South Africa and the Philippines… staffed by, yep, actual humans.

Returning to present day ‘cutting-edge’ tech, following Bloomberg’s report Facebook said it suspended human transcriptions earlier this month — joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. (Amazon has since added an opt out to the Alexa app’s settings.)

We asked Facebook where in the Messenger app it had been informing users that human contractors might be used to transcribe their voice chats/audio messages; and how it collected Messenger users’ consent to this form of data processing — prior to suspending human reviews.

The company did not respond to our questions. Instead a spokesperson provided us with the following statement: “Much like Apple and Google, we paused human review of audio more than a week ago.”

Facebook also described the audio snippets that it sent to contractors as masked and de-identified; said they were only collected when users had opted in to transcription on Messenger; and were only used for improving the transcription performance of the AI.

It also reiterated a long-standing rebuttal by the company to user concerns about general eavesdropping by Facebook, saying it never listens to people’s microphones without device permission nor without explicit activation by users.

How Facebook gathers permission to process data is a key question, though.

The company has recently, for example, used a manipulative consent flow in order to nudge users in Europe to switch on facial recognition technology — rolling back its previous stance, adopted in response to earlier regulatory intervention, of switching the tech off across the bloc.

So a lot rests on how exactly Facebook has described the data processing at any point it is asking users to consent to their voice messages being reviewed by humans (assuming it’s relying on consent as its legal basis for processing this data).

Bundling consent into general T&Cs for using the product is also unlikely to be compliant under EU privacy law, given that the bloc’s General Data Protection Regulation requires consent to be purpose limited, as well as fully informed and freely given.

If Facebook is relying on legitimate interests to process Messenger users’ audio snippets in order to enhance its AI’s performance, it would need to balance its own interests against any risk to people’s privacy.

Voice AIs are especially problematic in this respect because audio recordings may capture the personal data of non-users too — given that people in the vicinity of a device (or indeed a person on the other end of the phone line who’s leaving you a message) could have their personal data captured without ever having had the chance to consent to Facebook contractors getting to hear it.

Leaks of Google Assistant snippets to the Belgian press recently highlighted both the sensitive nature of recordings and the risk of reidentification posed by such recordings — with journalists able to identify some of the people in the recordings.

Multiple press reports have also suggested contractors employed by tech giants are routinely overhearing intimate details captured via a range of products that include the ability to record audio and stream this personal data to the cloud for processing.

Twitter exec says edit button isn’t ‘anywhere near the top of our priorities’

At a press event in San Francisco, Twitter Product Lead Kayvon Beykpour talked about a number of product changes coming to the company’s service; he also addressed the oft-memed user request for an edit button. Long story short, you shouldn’t expect to see the button anytime soon.

“Honestly, it’s a feature that I think we should build at some point, but it’s not anywhere near the top of our priorities,” Beykpour said. “That’s the honest answer.”

The executive said that there were some obvious risk factors but that he felt the company would eventually be able to build a feature to address user needs like correcting a typo or clarifying what they meant to say.

Twitter announced earlier in the event that the company is testing the ability to let users follow topics the same way they would ordinarily follow accounts.

Twitter tests ways for users to follow and snooze specific topics

You may soon be able to organize Twitter’s web of hashtags and handles in a smarter way — that is, if the company can pull off its ambitious new rethinking of the app’s timelines.

The company isn’t getting rid of the process of following users, but at a press event in SF, company execs announced they are planning to push users to start following “topics” that bring in well-engaged tweets from a variety of accounts that the user might not necessarily follow. Twitter is currently testing the feature on Android with topics focused around sports, “from MMA to Formula 1” to specific professional franchises.

The company plans to greatly expand the scope of these topics so that fans will be able to have timelines devoted to BTS and skincare routines. The feature is focused on helping users find new accounts and communities into which they can dive deeper.

The company is curating the overall topics manually, but Twitter will be relying on machine learning to intelligently populate the topics themselves so that the tweets can stay up to date. The company is also testing the ability to not only follow topics in your central timeline, but create your own secondary timelines into which you can bring multiple topics, accounts and hashtags.

A feature that Twitter says it is also starting to experiment with is the ability to temporarily unfollow a topic so you can keep certain tweets out of your timeline, like tweets chronicling an ongoing finale of a TV show or a football game. You can currently mute specific words and accounts indefinitely or for a finite amount of time.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.