Tag Archives: Apps

Facebook really doesn’t want you to read these emails

Oh hey, y’all, it’s Friday! It’s August! Which means it’s a great day for Facebook to drop a little news it would prefer you don’t notice. News that you won’t find a link to on the homepage of Facebook’s Newsroom — which is replete with colorfully illustrated items it does want you to read (like the puffed up claim that “Now You Can See and Control the Data That Apps and Websites Share With Facebook”).

The blog post Facebook would really prefer you didn’t notice is tucked away in a News sub-section of this website — where it’s been confusingly entitled: Document Holds the Potential for Confusion. And has an unenticing grey image of a document icon to further put you off — just in case you happened to stumble on it after all. It’s almost as if Facebook is saying “definitely don’t click here“…


So what is Facebook trying to bury in the horse latitudes of summer?

An internal email chain, starting September 2015, which shows a glimpse of what Facebook’s own staff knew about the activity of Cambridge Analytica prior to The Guardian’s December 2015 scoop — when the newspaper broke the story that the controversial (and now defunct) data analytics firm, then working for Ted Cruz’s presidential campaign, had harvested data on millions of Facebook users without their knowledge and/or consent, and was using psychological insights gleaned from the data to target voters.

Facebook founder Mark Zuckerberg’s official timeline of events about what he knew when vis-à-vis the Cambridge Analytica story has always been that his knowledge of the matter dates to December 2015 — when the Guardian published its story.

But the email thread Facebook is now releasing shows internal concerns being raised almost two months earlier.

This chimes with previous (more partial) releases of internal correspondence pertaining to Cambridge Analytica  — which have also come out as a result of legal actions (and which we’ve reported on previously here and here).

If you click to download the latest release, which Facebook suggests it ‘agreed’ with the District of Columbia Attorney General to “jointly make public”, you’ll find a redacted thread of emails in which Facebook staffers raise a number of platform policy violation concerns related to the “political partner space”, writing on September 29, 2015, that “many companies seem to be on the edge- possibly over”.

Cambridge Analytica is first identified by name — when it’s described by a Facebook employee as “a sketchy (to say the least) data modelling company that has penetrated our market deeply” — on September 22, 2015, per this email thread. It is one of many companies the staffer writes are suspected of scraping user data — but is also described as “the largest and most aggressive on the conservative side”.


On September 30, 2015, a Facebook staffer responds to this, asking for App IDs and app names for the apps engaging in scraping user data — before writing: “My hunch is that these apps’ data-scraping is likely non-compliant”.

“It would be very difficult to engage in data-scraping activity as you described while still being compliant with FPPs [Facebook Platform Policies],” this person adds.

Cambridge Analytica gets another direct mention (“the Cambridge app”) on the same day. A different Facebook staffer then chips in with a view that “it’s very likely these companies are not in violation of any of our terms” — before asking for “concrete examples” and warning against calling them to ask questions unless “red flags” have been confirmed.

On October 13, a Facebook employee chips back into the thread with the view that “there are likely a few data policy violations here”.

The email thread goes on to discuss concerns related to additional political partners and agencies using Facebook’s platform at that point, including ForAmerica, Creative Response Concepts, NationBuilder and Strategic Media 21. Which perhaps explains Facebook’s lack of focus on CA — if potentially “sketchy” political activity was apparently widespread.

On December 11 another Facebook staffer writes to ask for an expedited review of Cambridge Analytica — saying it’s “unfortunately… now a PR issue”, i.e. as a result of the Guardian publishing its article.

The same day a Facebook employee emails to say Cambridge Analytica “is hi pri at this point”, adding: “We need to sort this out ASAP” — a month and a half after the initial concern was raised.

Also on December 11 a staffer writes that they had not heard of GSR, the Cambridge-based developer CA hired to extract Facebook user data, before the Guardian article named it. But other Facebook staffers chip in to reveal personal knowledge of the psychographic profiling techniques deployed by Cambridge Analytica and GSR’s Dr Aleksandr Kogan, with one writing that Kogan was their postdoc supervisor at Cambridge University.

Another says they are friends with Michal Kosinski, the lead author of a personality modelling paper that underpins the technique used by CA to try to manipulate voters — which they described as “solid science”.

A different staffer also flags the possibility that Facebook has worked with Kogan — ironically enough “on research on the Protect & Care team” — citing the “Wait, What thread” and another email, neither of which appear to have been released by Facebook in this ‘Exhibit 1’ bundle.

So we can only speculate on whether Facebook’s decision — around September 2015 — to hire Kogan’s GSR co-founder, Joseph Chancellor, appears as a discussion item in the ‘Wait, What’ thread…

Putting its own spin on the release of these internal emails in a blog post, Facebook sticks to its prior line that “unconfirmed reports of scraping” and “policy violations by Aleksandr Kogan” are two separate issues, writing:

We believe this document has the potential to confuse two different events surrounding our knowledge of Cambridge Analytica. There is no substantively new information in this document and the issues have been previously reported. As we have said many times, including last week to a British parliamentary committee, these are two distinct issues. One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica. This document proves the issues are separate; conflating them has the potential to mislead people.

It has previously also referred to the internal concerns raised about CA as “rumors”.

“Facebook was not aware that Kogan sold data to Cambridge Analytica until December 2015. That is a fact that we have testified to under oath, that we have described to our core regulators, and that we stand by today,” it adds now.

It also claims that after an engineer looked into concerns that CA was scraping data, they were unable to find any evidence of it. “Even if such a report had been confirmed, such incidents would not naturally indicate the scale of the misconduct that Kogan had engaged in,” Facebook adds.

The company has sought to dismiss the privacy litigation brought against it by the District of Columbia which is related to the Cambridge Analytica scandal — but has been unsuccessful in derailing the case thus far.

The DC complaint alleges that Facebook allowed third-party developers to access consumers’ personal data, including information on their online behavior, in order to offer apps on its platform, and that it failed to effectively oversee and enforce its platform policies by not taking reasonable steps to protect consumer data and privacy. It also alleges Facebook failed to inform users of the CA breach.

Facebook has also failed to block another similar lawsuit that’s been filed in Washington, DC by Attorney General Karl Racine — which has alleged lax oversight and misleading privacy standards.

Tumblr’s next step forward with Automattic CEO Matt Mullenweg

After months of rumors, Verizon finally sold off Tumblr for a reported $3 million — a fraction of what Yahoo paid for the once-mighty blogging service back in 2013.

The media conglomerate (which also owns TechCrunch) was clearly never quite sure what to do with the property after gobbling it up as part of its 2016 Yahoo acquisition. All parties have since come to the conclusion that Tumblr simply wasn’t a good fit under either the Verizon or Yahoo umbrella, amounting to a $1.1 billion mistake.

For Tumblr, however, the story may still have a happy ending. By all accounts, its new home at Automattic is a far better fit. The service joins a portfolio that includes popular blogging service WordPress.com, spam-filtering service Akismet and long-form storytelling platform Longreads.

In an interview this week, Automattic founder and CEO Matt Mullenweg discussed Tumblr’s history and the impact of the poorly received adult content restrictions. He also shed some light on where Tumblr goes from here, including a potential increased focus on multimedia such as podcasting.

Brian Heater: I’m curious how [your meetings with Tumblr staff] went. What’s the feeling on the team right now? What are the concerns? How are people feeling about the transition?

Twitter picks up team from narrative app Lightwell in its latest effort to improve conversations

Twitter’s ongoing, long-term efforts to make conversations easier to follow and engage with on its platform are getting a boost with the company’s latest acquihire. The company has picked up the team behind Lightwell, a startup that built a set of developer tools for creating interactive, narrative apps, for an undisclosed sum. Lightwell’s founder and CEO, Suzanne Xie, is becoming a director of product leading Twitter’s Conversations initiative, with the rest of her small four-person team joining her on the conversations project.

(Sidenote: Sara Haider, who had been leading the charge on rethinking the design of Conversations on Twitter, most recently through the release of twttr, Twitter’s newish prototyping app, announced that she would be moving on to a new project at the company after a short break. I understand twttr will continue to be used to openly test conversation tweaks and other potential changes to how the app works.)

The Lightwell/Twitter news was announced late yesterday both by Lightwell itself and by Twitter’s VP of product Keith Coleman. A Twitter spokesperson also confirmed the deal to TechCrunch in a short statement today: “We are excited to welcome Suzanne and her team to Twitter to help drive forward the important work we are doing to serve the public conversation,” he said. Interestingly, Twitter appears to be on a product hiring push. Other recent product hires Coleman noted include Angela Wise and Tom Hauburger; coincidentally, both joined from autonomous vehicle companies, Waymo and Voyage respectively.

To be clear, this is more acqui-hire than acquisition: only the Lightwell team (of what looks like three people) is joining Twitter. The Lightwell product will no longer be developed, but it is not going away, either. Xie noted in a separate Medium post that apps that have already been built (or plan to be built) on the platform will continue to work. It will also now be free to use.

Lightwell originally started life in 2012 as Hullabalu, as one of the many companies producing original-content interactive children’s stories for smartphones and tablets. In a sea of children-focused storybook apps, Hullabalu’s stories stood out not just because of the distinctive cast of characters that the startup had created, but for how the narratives were presented: part book, part interactive game, the stories engaged children and moved narratives along by getting the users to touch and drag elements across the screen.

hullabalu lightwell

After some years, Hullabalu saw an opportunity to package its technology and make it available as a platform for all developers, to be used not just by other creators of children’s content, but also by advertisers and more. It seems the company shifted at that time to make Lightwell its main focus.

The Hullabalu apps remained live on the App Store, even after the company moved on to focus on Lightwell. However, they haven’t been updated in two years. Xie says they will remain as is.

In its startup life, the company went through Y Combinator and Techstars, and picked up some $6.5 million in funding (per Crunchbase) from investors that included Joanne Wilson, SV Angel, Vayner, Spark Labs, Great Oak, Scout Ventures and more.

If turning Hullabalu into Lightwell was a pivot, then the exit to Twitter can be considered yet another interesting shift in how talent and expertise optimized for one end can be repurposed to meet another.

One of Twitter’s biggest challenges over the years has been trying to create a way to make conversations (also narratives of a kind) easy to follow — both for those who are power users, and for those who are not and might otherwise easily be put off from using the product.

The crux of the problem has been that Twitter’s DNA is about real-time rivers of chatter that flow in one single feed, while conversations by their nature linger around a specific topic and become hard to follow when there are too many people talking. Trying to build a way to fit the two concepts together has foxed the company for a long time now.

At its best, bringing in a new team from the outside will potentially give Twitter a fresh perspective on how to approach conversations on the platform, and the fact that Lightwell has been thinking about creative ways to present narratives gives them some cred as a group that might come up with completely new concepts for presenting conversations.

At a time when it seems that the conversation around Conversations had somewhat stagnated, it’s good to see a new chapter opening up.

YouTube is closing its private messages feature…and many kids are outraged

People love to share YouTube videos among their friends, which is why in mid-2017 YouTube launched a new in-app messaging feature that would allow YouTube users to privately send videos to their friends and chat within a dedicated tab in the YouTube mobile app. That feature is now being shut down, the company says. After September 18, the ability to direct message friends on YouTube itself will be removed.

The change was first spotted by 9to5Google, which noted that YouTube Messages came to the web in May of last year.

YouTube, in its announcement about the closure, doesn’t offer much insight into its decision.

While the company says that its more recent work has been focused on public conversations with updates to comments, posts, and Stories, it doesn’t explain why Messages is no longer a priority.

A likely reason, of course, is that the feature was under-utilized. Most people today are heavily invested in their own preferred messaging apps — whether that’s Messenger, WhatsApp, WeChat, iMessage or others.

Google, meanwhile, can’t seem to stop itself from building messaging apps and experiences. When YouTube Messages launched, Google was also invested in Allo (RIP), Duo, Hangouts, Meet, Google Voice, Android Messages/RCS, and was poised to transition users from Gchat (aka Google Talk) in Gmail to Hangouts Chat.

However, based on the nearly 500 angry comments replying to Google’s post about the closure, it seems that YouTube Messages may have been preferred by many young users.

Young…as in children.

 


A sizable number of commenters are complaining that YouTube was the “only place” they could message their friends because they didn’t have a phone or weren’t allowed to give out their phone number.

Some said they used the feature to “talk to their mom” or because they weren’t allowed to use social media.


It appears that many children had been using YouTube Messages as a sort of workaround to their parents’ block on messaging apps on their own phones, or as a way to communicate from their tablets or via web, likely without parents’ knowledge.

That’s not a good look for YouTube at this time, given its issues around inappropriate videos aimed at children, child exploitation, child predators, and regulatory issues.

The video platform in February came under fire for putting kids at risk of child predators. The company had to shut off comments on videos featuring minors, after the discovery of a pedophile ring that had been communicating via YouTube’s comments section.

Notably, the FTC is also now following up on complaints about YouTube’s possible violations of COPPA, a U.S. children’s privacy law. Child advocacy and consumer groups complain that YouTube has lured children under 13 into its digital playground, where it collects their data and targets them with ads, without parental consent.

Though some people may have used YouTube Messages to promote their channel or to share videos with family members and friends, it’s clear this usage hadn’t gone mainstream. Otherwise, YouTube wouldn’t be walking away from a popular product.

The feature also had issues with spam — much like Google+ did — with unwelcome requests from strangers at times.

YouTube says users will still be able to share videos through the “Share” feature which connects to other social networks.

The company declined to comment beyond what it shared on the forum post.

‘This is Your Life in Silicon Valley’: The League founder and CEO Amanda Bradford on modern dating, and whether Bumble is a ‘real’ startup

Welcome to this week’s transcribed edition of This is Your Life in Silicon Valley. We’re running an experiment for Extra Crunch members that puts This is Your Life in Silicon Valley in words – so you can read from wherever you are.

This is Your Life in Silicon Valley was originally started by Sunil Rajaraman and Jascha Kaykas-Wolff in 2018. Rajaraman is a serial entrepreneur and writer (he co-founded Scripted.com and is currently an EIR at Foundation Capital); Kaykas-Wolff is the current CMO at Mozilla and previously ran marketing at BitTorrent.

Rajaraman and Kaykas-Wolff started the podcast after a series of blog posts that Sunil wrote for The Bold Italic went viral. The goal of the podcast is to cover issues at the intersection of technology and culture – sharing a different perspective of life in the Bay Area. Their guests include entrepreneurs like Sam Lessin, journalists like Kara Swisher and Mike Isaac, politicians like Mayor Libby Schaaf and local business owners like David White of Flour + Water.

This week’s edition of This is Your Life in Silicon Valley features Amanda Bradford – Founder/CEO of The League. Amanda talks about modern dating, its limitations, its flaws, and why ‘The League’ will win. Amanda provides her candid perspective on other dating startups in a can’t-miss portion of the podcast.

Amanda talks about her days at Salesforce and how they influenced her decision to build a dating tech product focused on data and funnels. Amanda walks through her own process of finding her current boyfriend on ‘The League’ and how it came down to meeting more people. The flaw with most online dating, she argues, is that people do not meet enough people, due to filter bubbles and a lack of open criteria.

Amanda goes in on all of the popular dating sites, including Bumble and others, providing her take on what’s wrong with them. She even dishes on Raya and Tinder – sharing how she believes they should be perceived by prospective daters. The fast-response portion of the podcast, where we ask Amanda about the various dating sites, really raised some eyebrows and got some attention.

We ask Amanda about the incentives of online dating sites, and how in a way they are created to keep members online as long as possible. Amanda provides her perspective on how she addresses this inherent conflict at The League, and how many marriages have resulted among League members to date.

We ask Amanda about AR/VR dating and what the future will look like. Will people actually meet in person in the future? Will it be more like online worlds where we wear headsets and don’t actually interact face to face anymore? The answers may surprise you. We learn how this influences The League’s product roadmap.

The podcast eventually goes into dating stories from audience members — including some pretty wild online dating stories about people who are not as they seem. We picked two audience members at random to talk about their entertaining online dating stories and where they led. The second story really raised eyebrows and got into the notion that people go to great lengths to hide their real identities.

Ultimately, we get at the heart of what online dating is and what the future holds for it. If you care about the future of relationships, online dating, data, and what it all means, this episode is for you.

For access to the full transcription, become a member of Extra Crunch. Learn more and try it for free. 

Sunil Rajaraman: I just want to check, are we recording? Because that’s the most important question. We’re recording, so this is actually a podcast and not just three people talking randomly into microphones.

I’m Sunil Rajaraman, I’m co-host of this podcast, This is Your Life in Silicon Valley, and Jascha Kaykas-Wolff is my co-host, we’ve been doing this for about a year now, we’ve done 30 shows, and we’re pleased today to welcome a very special guest, Jascha.

Jascha Kaykas-Wolff: Amanda.

Amanda Bradford: Hello everyone.


Amanda Bradford. (Photo by Astrid Stawiarz/Getty Images)

Kaykas-Wolff: We’re just going to stare at you and make it uncomfortable.

Bradford: Like Madonna.

Kaykas-Wolff: Yeah, so the kind of backstory and what’s important for everybody that’s in the audience to know is that this podcast is not a pitch for a product, it’s not about a company, it’s about the Bay Area. And the Bay Area is kind of special, but it’s also a little bit fucked up. I think we all kind of understand that, being here.

So what we want to do in the podcast is talk to people who have a very special, unique relationship with the Bay Area, whether they’re creators that are company builders, that are awesome entrepreneurs, that are just really cool and interesting people, and today we are really, really lucky to have an absolutely amazing entrepreneur, and also a pretty heavy hitter in the technology scene. In a very specific and very special category of technology that Sunil really, really likes. The world of dating.

Rajaraman: Yeah, so it’s funny, the backstory to this is, Jascha and I have both been married, what, a long time-

Kaykas-Wolff: Long time.

Rajaraman: And we have this weird fascination with online dating because we see a lot of people going through it, and it’s a baffling world, and so I want to demystify it a bit with Amanda Bradford today, the founder and CEO of The League.

Bradford: You guys are like all of the married people looking at the single people in the petri dishes.

Rajaraman: So, I’ve done the thing where we went through it with the single friends who have the app, swiping through on their behalf, so it’s sort of like a weird thing.

Bradford: I know, we’re like a different species, aren’t we?

Twitter to test a new filter for spam and abuse in the Direct Message inbox

Twitter is testing a new way to filter unwanted messages from your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox as being open to receiving messages from anyone, but this can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.

This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.

Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.

And even upon viewing this list of filtered messages, all the content itself isn’t immediately visible.

In the case that Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.
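
For the curious, here is a minimal sketch of how that kind of inbox bucketing could work under the hood. To be clear, this is a hypothetical illustration rather than Twitter’s actual code: the bucket names, message fields and classifier callbacks (looks_offensive, looks_like_spam) are all assumptions made for the example.

```python
# Hypothetical sketch of the DM routing described above -- not Twitter's code.
# Bucket names, fields and the classifier callbacks are assumptions.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Bucket(Enum):
    INBOX = auto()             # messages from accounts you follow
    MESSAGE_REQUESTS = auto()  # strangers, nothing flagged
    FILTERED = auto()          # potential spam/offensive, hidden behind "Show"


@dataclass
class DirectMessage:
    sender_id: str
    text: str


def bucket_message(
    msg: DirectMessage,
    recipient_follows: set[str],
    looks_offensive: Callable[[str], bool],
    looks_like_spam: Callable[[str], bool],
) -> Bucket:
    """Route an incoming DM to the tab it would appear in."""
    if msg.sender_id in recipient_follows:
        return Bucket.INBOX
    if looks_offensive(msg.text) or looks_like_spam(msg.text):
        return Bucket.FILTERED
    return Bucket.MESSAGE_REQUESTS


# Example: a stranger's message with a flagged phrase lands in FILTERED.
print(bucket_message(
    DirectMessage("stranger_42", "you won a prize, click here"),
    recipient_follows={"friend_1"},
    looks_offensive=lambda t: False,
    looks_like_spam=lambda t: "click here" in t,
))
```

The point is simply that messages from people you follow never get filtered, while flagged messages from strangers stay out of sight until you ask to see them.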

The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.

It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.

It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And those who for some reason did not, could just toggle a setting to turn the filter off.

Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.

The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.

Twitter is updating its Direct Message system in other ways, too.

At a press conference this week, Twitter announced several changes coming to its platform including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.

Twitter leads $100M round in top Indian regional social media platform ShareChat

Is there room for another social media platform? ShareChat, a four-year-old social network in India that serves tens of millions of people in regional languages, just answered that question with a $100 million financing round led by global giant Twitter.

Besides Twitter, TrustBridge Partners and existing investors Shunwei Capital, Lightspeed Venture Partners, SAIF Capital, India Quotient and Morningside Venture Capital also participated in ShareChat’s Series D round.

The new round, which pushes ShareChat’s all-time raise to $224 million, valued the firm at about $650 million, a person familiar with the matter told TechCrunch. ShareChat declined to comment on the valuation.


Screenshot of ShareChat’s home page on the web

“Twitter and ShareChat are aligned on the broader purpose of serving the public conversation, helping the world learn faster and solve common challenges. This investment will help ShareChat grow and provide the company’s management team access to Twitter’s executives as thought partners,” said Manish Maheshwari, managing director of Twitter India, in a prepared statement.

Twitter, like many other Silicon Valley firms, counts India as one of its key markets. And like Twitter, other Silicon Valley firms are also increasingly investing in Indian startups.

ShareChat serves 60 million users each month in 15 regional languages, Ankush Sachdeva, co-founder and CEO of the firm, told TechCrunch in an interview. The platform currently does not support English, and has no plans to change that, Sachdeva said.

That choice is what has driven users to ShareChat, he explained. An early incarnation of the social media platform supported English. It saw most of its users choose English as their preferred language, but this also led to another interesting development: their engagement with the app dropped significantly.

The origin story

“For some reason, everyone wanted to converse in English. There was an inherent bias to pick English even when they did not know it.” (Only about 10% of India’s 1.3 billion people speak English. Hindi, a regional language, on the other hand, is spoken by about half a billion people, according to official government figures.)

So ShareChat pulled support for English. Today, an average user spends 22 minutes on the app each day, Sachdeva said. The early decision to drop English is just one of the many things that have shaped ShareChat into what it is today and fueled its growth.

In 2014, Sachdeva and two of his friends — Bhanu Singh and Farid Ahsan, all of whom met at the prestigious institute IIT Kanpur — got the idea of building a debate platform by looking at the kind of discussions people were having on Facebook groups.

They identified that cricket and movie stars were popular conversation topics, so they created WhatsApp groups and aggressively posted links to those groups on Facebook to attract users.

It was then that they built chatbots to allow users to discover different genres of jokes, recommendations for phones and food recipes, among other things. But they soon realized that users weren’t interested in most of these offerings.

“Nobody cared about our smartphone recommendations. All they wanted was to download wallpapers, ringtones, copy jokes and move on. They just wanted content.”


So in 2015, Sachdeva and company moved on from chatbots and created an app where users can easily produce, discover and share content in the languages they understand. (Today, user generated content is one of the key attractions of the platform, with about 15% of its user base actively producing content.)

A year later, ShareChat, like tens of thousands of other businesses, was in for a pleasant surprise. India’s richest man, Mukesh Ambani, launched his new telecom network Reliance Jio, which offered users large amounts of data at little to no charge for an extended period of time.

This immediately changed the way millions of people in the country, who once cared about each megabyte they consumed online, interacted with the internet. On ShareChat people quickly started to move from sharing jokes and other messages in text format to images and then videos.

Path ahead and monetization

That momentum continues today. ShareChat now plans to give users more incentives — including money — and tools to produce content on the platform to drive engagement. “There remains a huge hunger for content in vernacular languages,” Sachdeva said.

Speaking of money, ShareChat has experimented with ads on the app and its site, but revenue generation isn’t currently its primary focus, Sachdeva said. “We’re in the Series D now so there is obviously an obligation we have to our investors to make money. But we all believe that we need to focus on growth at this stage,” he said.

ShareChat also has many users in Bangladesh, Nepal and the Middle East, where many users speak Indian regional languages. But the startup currently plans to focus largely on expanding its user base in India.

It will use the new capital to strengthen its technology infrastructure and hire more tech talent. Sachdeva said ShareChat is looking to open an office in San Francisco to hire local engineers there.

A handful of local and global giants have emerged in India in recent years to cater to people in small cities and villages who are just getting online. Pratilipi, a storytelling platform, has amassed more than 5 million users, for instance. It recently raised $15 million to expand its user base and help users strike deals with content studios.

Perhaps no other app poses a bigger challenge to ShareChat than TikTok, an app where users share short-form videos. TikTok, owned by one of the world’s most valued startups, has over 120 million users in India and sees content in many Indian languages.

But the app — with its ever-growing ambitions — also tends to land itself in hot water in India every few weeks, in all sensitive corners of the country. On that front, ShareChat has an advantage. Over the years, it has emerged as an outlier in the country that has strongly supported proposed laws by the Indian government that seek to make social apps more accountable for content that circulates on their platforms.

Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe

Facebook’s lead privacy regulator in Europe is now asking the company for detailed information about the operation of a voice-to-text feature in Facebook’s Messenger app and how it complies with EU law.

Yesterday Bloomberg reported that Facebook uses human contractors to transcribe app users’ audio messages — yet its privacy policy makes no clear mention of the fact that actual people might listen to your recordings.

A page on Facebook’s help center also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.

A spokesperson for the Irish Data Protection Commission told us: “Further to our ongoing engagement with Google, Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”

Bloomberg’s report follows similar revelations about AI assistant technologies offered by other tech giants, including Apple, Amazon, Google and Microsoft — which have also attracted attention from European privacy regulators in recent weeks.

What this tells us is that the hype around AI voice assistants still glosses over a far less high-tech backend, even as lashings of machine learning marketing guff have been used to cloak the ‘mechanical turk’ components (i.e. humans) required for the tech to live up to the claims.

This is a very old story indeed. To wit: A full decade ago, a UK startup called Spinvox, which had claimed to have advanced voice recognition technology for converting voicemails to text messages, was reported to be leaning very heavily on call centers in South Africa and the Philippines… staffed by, yep, actual humans.

Returning to present day ‘cutting-edge’ tech, following Bloomberg’s report Facebook said it suspended human transcriptions earlier this month — joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. (Amazon has since added an opt out to the Alexa app’s settings.)

We asked Facebook where in the Messenger app it had been informing users that human contractors might be used to transcribe their voice chats/audio messages; and how it collected Messenger users’ consent to this form of data processing — prior to suspending human reviews.

The company did not respond to our questions. Instead a spokesperson provided us with the following statement: “Much like Apple and Google, we paused human review of audio more than a week ago.”

Facebook also described the audio snippets that it sent to contractors as masked and de-identified; said they were only collected when users had opted in to transcription on Messenger; and were only used for improving the transcription performance of the AI.

It also reiterated a long-standing rebuttal by the company to user concerns about general eavesdropping by Facebook, saying it never listens to people’s microphones without device permission nor without explicit activation by users.

How Facebook gathers permission to process data is a key question, though.

The company has recently, for example, used a manipulative consent flow in order to nudge users in Europe to switch on facial recognition technology — rolling back its previous stance, adopted in response to earlier regulatory intervention, of switching the tech off across the bloc.

So a lot rests on how exactly Facebook has described the data processing at any point it is asking users to consent to their voice messages being reviewed by humans (assuming it’s relying on consent as its legal basis for processing this data).

Bundling consent into general T&Cs for using the product is also unlikely to be compliant under EU privacy law, given that the bloc’s General Data Protection Regulation requires consent to be purpose limited, as well as fully informed and freely given.

If Facebook is relying on legitimate interests to process Messenger users’ audio snippets in order to enhance its AI’s performance it would need to balance its own interests against any risk to people’s privacy.

Voice AIs are especially problematic in this respect because audio recordings may capture the personal data of non-users too — given that people in the vicinity of a device (or indeed a person on the other end of the phone line who’s leaving you a message) could have their personal data captured without ever having had the chance to consent to Facebook contractors getting to hear it.

Leaks of Google Assistant snippets to the Belgian press recently highlighted both the sensitive nature of recordings and the risk of reidentification posed by such recordings — with journalists able to identify some of the people in the recordings.

Multiple press reports have also suggested contractors employed by tech giants are routinely overhearing intimate details captured via a range of products that include the ability to record audio and stream this personal data to the cloud for processing.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegation that the company knew of other apps misusing user data, failed to take proper measures to secure user data by failing to enforce its own platform policy, and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing that this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018, saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Twitter’s latest test lets users subscribe to a tweet’s replies

Twitter in more recent months has been focused on making conversations on its platform easier to follow, participate in, and in some cases, block. The company’s latest test, announced via a tweet ahead of the weekend, will allow users to subscribe to replies to a particularly interesting tweet they want to follow, too, in order to see how the conversation progresses. The feature is designed to complement the existing notifications feature you may have turned on for your “must-follow” accounts.

Many people already have Twitter alert them via a push notification when an account they want to track sends out a new tweet. Now you’ll be able to visit that tweet directly and turn on the option to receive reply notifications, if you’re opted in to this new test.

If you have the new feature, you’ll see a notification bell icon in the top-right corner of the screen when you’re viewing the tweet in Twitter’s mobile app.

When you click the bell icon, you’ll be presented with three options: one to subscribe to the “top” replies, another to subscribe to all replies, and a third to turn reply notifications off.

Twitter says top replies will include those from the author, anyone they mentioned, and people you follow.
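
As a rough illustration (and only that: the Tweet and Reply types and the viewer_follows parameter below are invented for the example, not Twitter’s API), the “top replies” rule boils down to a simple predicate over who wrote the reply:

```python
# Hypothetical sketch of the "top replies" rule described above: replies from
# the tweet's author, anyone the author mentioned, and accounts the viewer
# follows. Types and names here are invented, not Twitter's API.
from dataclasses import dataclass, field


@dataclass
class Tweet:
    author_id: str
    mentioned_ids: set[str] = field(default_factory=set)


@dataclass
class Reply:
    author_id: str
    text: str


def is_top_reply(reply: Reply, original: Tweet, viewer_follows: set[str]) -> bool:
    """True if a reply would qualify for 'top replies only' notifications."""
    return (
        reply.author_id == original.author_id
        or reply.author_id in original.mentioned_ids
        or reply.author_id in viewer_follows
    )


# Example: the viewer follows "bob", so bob's reply qualifies; a stranger's doesn't.
tweet = Tweet(author_id="alice", mentioned_ids={"carol"})
print(is_top_reply(Reply("bob", "great point"), tweet, viewer_follows={"bob"}))  # True
print(is_top_reply(Reply("dave", "random"), tweet, viewer_follows={"bob"}))      # False
```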

This is the same set of “interesting” replies that Twitter has previously experimented with highlighting in other ways — including through the use of labels like “Original Tweeter” or “Author,” and as of last month, with icons instead of text-based labels. For example, one test displayed a microphone icon next to a tweet from the original poster in order to make their replies easier to spot.

The larger goal of those tests and this new one is to personalize the experience of participating in Twitter conversations by showcasing what the people you follow are saying, while also making a conversation easier to follow by seeing when the original poster and those they mentioned have chimed in.

This latest test takes things a step further by actually subscribing you to those sorts of replies — or even all the replies to a tweet, if you choose.

The new experiment comes at a time when Twitter is attempting to solve the overwhelming problem of conversation health in other ways, too. Beyond attempting to write and enforce tougher rules regarding online abuse and harassment, it also last month officially launched a “Hide Replies” feature in Canada that would allow the original poster to put replies they didn’t feel were valuable behind an icon so they weren’t prominently displayed within the conversation. It’s unclear how “Hide Replies” would work with this new reply notifications option, however — presumably, you’d still get alerts when someone you follow responded, even if the original poster hid their reply from view.

Twitter says the new test is available on iOS or Android.