Twitter ‘fesses up to more adtech leaks

Twitter has disclosed more bugs related to how it uses personal data for ad targeting, which mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.

Back in May the social network disclosed a bug that in certain conditions resulted in an account’s location data being shared with a Twitter ad partner, during real-time bidding (RTB) auctions.

In a blog post on its Help Center about the latest “issues” Twitter says it “recently” found, it admits to finding two problems with users’ ad settings choices that mean they “may not have worked as intended”.

It claims both problems were fixed on August 5, though it does not specify when it realized it was processing user data without consent.

The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app, Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners — regardless of whether the user had agreed their personal data could be shared in this way.

It suggests this leak of data has been happening since May 2018 — the same month Europe’s updated privacy framework, GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) — and means that quite a lot is riding on how “recently” Twitter found these latest bugs. Because GDPR also includes a supersized regime of fines for confirmed data protection violations.

Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…

Twitter specifies that it does not share users’ names, Twitter handles, email or phone number with ad partners. However it does share a user’s mobile device identifier, which GDPR treats as personal data as it acts as a unique identifier. Using this identifier, Twitter and Twitter’s ad partners can work together to link a device identifier to other pieces of identity-linked personal data they collectively hold on the same user to track their use of the wider Internet, thereby allowing user profiling and creepy ad targeting to take place in the background.

The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.

Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.

This sounds like another breach of GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it’s saying it processed it anyway — albeit, it claims accidentally.

This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile devices and browsers) when you’re logged in to its service with your Twitter account. It then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprints) back from its ad partners — likely gathered via tracking cookies (including Twitter’s own social plug-ins), which are larded all over the mainstream Internet for the purpose of tracking what you look at online.

These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
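The joining mechanism described above can be illustrated with a small sketch. This is a toy simulation, not any real adtech system or API: the cookie IDs, sites and topics are invented, and it only shows how one shared identifier lets an ad network stitch visits to unrelated sites into a single interest profile.

```python
# Toy illustration of third-party-cookie tracking: each publisher page
# embeds the same ad network's tracker, which reads an identical cookie ID
# regardless of which site the user is visiting. Grouping events by that
# ID yields one cross-site profile per person.
from collections import defaultdict

# (cookie ID, site visited, page topic) — all values hypothetical.
TRACKING_EVENTS = [
    ("cookie-abc123", "runningshoes.example", "marathon training plans"),
    ("cookie-abc123", "news.example", "sports section"),
    ("cookie-abc123", "travel.example", "flights to Berlin"),
    ("cookie-xyz789", "news.example", "politics section"),
]

def build_profiles(events):
    """Group browsing events by cookie ID to build cross-site profiles."""
    profiles = defaultdict(list)
    for cookie_id, site, page_topic in events:
        profiles[cookie_id].append((site, page_topic))
    return dict(profiles)

profiles = build_profiles(TRACKING_EVENTS)
# "cookie-abc123" is now linked to activity on three unrelated sites,
# from which interests (running, travel) can be inferred for ad targeting.
```

Linking any one of those cookie IDs to a device identifier or account, as the article describes, is what turns the pseudonymous profile into an identity-linked one.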

“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.

“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.

(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)

In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:

We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.

The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.

“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.

“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”

While the company may “believe” there is nothing Twitter users can do — aside from accept its apology for screwing up — European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.

Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.

The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws — following multiple complaints filed in the region that argue RTB is in breach of the GDPR.

Meanwhile, back in May, Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it had opened a formal investigation into the use of personal data in the context of its online Ad Exchange.

So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.

In a 130-page court filing, Kik claims the SEC’s lawsuit ‘twists’ the facts about its online token

CEO Ted Livingston of Kik

Kik Interactive has hit back at the Securities and Exchange Commission lawsuit that claims a $100 million token sale was illegal. The company, which owns Kik Messenger, filed a 130-page response today in U.S. District Court for the Southern District of New York, alleging that the SEC is “twisting” the facts about its token, called Kin, and asking for an early trial date and dismissal of the complaint.

One of the key issues in the case is whether Kin was just an in-app token used to buy games, digital products and other services in Kik Messenger, or whether it was meant to be an investment opportunity, as the SEC alleges.

Kik’s general counsel Eileen Lyon said in a press statement that “since Kin is not itself a security, the SEC must show that it was sold in a way that violates the securities laws. The SEC had access to over 50,000 documents and took testimony from nearly 20 witnesses prior to filing its Complaint, yet it is unable to make the case that Kik’s token sale violated the securities laws without bending the facts to distort the record.”

The SEC alleges that the token sale, announced in 2017, came at a time when the company had predicted that it would run out of money after Kik Messenger had been losing money for years, and that it then used proceeds from that sale to build an online marketplace for the app.

In the filing, Kik’s legal team denied that charge, claiming that the SEC’s allegations about its financial condition “is solely designed for misdirection, thereby prejudicing Kik and portraying it in a negative light” and that Kik began working on a cryptocurrency-based model after exploring monetization options that would help it compete against larger tech companies.

They added that “Kik’s Board and Executive Team alike believed that Kin was a bold idea that could solve the monetization challenges faced by all developers (not just Kik) in the existing advertising-based economy, by changing the way people buy and sell digital products and services.”

The SEC also alleges that the sale of digital tokens to U.S. investors was illegal because Kik did not register the offer as required by United States law, and claims that Kik marketed Kin as an investment opportunity whose value would increase. In its response, Kik denied that it offered or sold securities, or violated federal securities laws.

In the company’s press statement, Kik CEO Ted Livingston said, “The SEC tries to paint a picture that the Kin project was an act of desperation rather than the bold move that it was to win the game, and one that Kakao, Line, Telegram and Facebook have all now followed.”

Snap looks to raise $1 billion in private debt offering

Snap, the parent company of Snapchat, is looking to add some cash to its coffers via a new proposed private offering of $1 billion in convertible senior notes, maturing on August 1, 2026. The debt offering will be used to cover the cost of general operating expenditures involved in running the business, Snap says, but also potentially to “acquire complementary businesses, products, services or technologies,” as well as possibly for future stock repurchase plans, though no such plans exist currently.

Raising debt to fund operations and acquisitions is not unusual for a publicly traded company – Netflix does this regularly to pick up more money to fund its increasingly expensive production budget for content, for instance. So far, the market seems to be reacting negatively to the news of Snap’s decision to seek this chunk of debt funding, however, as it’s down in pre-market trading.

Snap has generally been on a positive path in terms of its relationship with stockholders, however – its stock price rose on the back of a strong quarterly earnings report at the end of July, closing above its IPO price for the first time. It’s now dipped south of that mark again, but it’s still much-improved on a year-to-date timeline measure.

Squad, the ‘anti-bro startup,’ is creating a safe space for teenage girls online

When we go online to communicate, hang out or play, we’re typically logging on to platforms conceived of and built by men.

Mark Zuckerberg famously created Facebook in his Harvard dorm room. Evan Spiegel and his frat brother Bobby Murphy devised a plan for the ephemeral messaging app Snapchat while the pair were still students at Stanford. Working out of a co-working space, Kevin Systrom and Mike Krieger built Instagram and yes, they also went to Stanford.

Seldom have social tools created by women climbed the ladder to mainstream success. Instead, women and girls have borne the lion’s share of digital harassment on popular social platforms — most of which failed early on to incorporate security features tailored to minority users’ needs — and struggled to find a protected corner of the internet.

Squad, an app that allows you to video chat and share your phone screen with a friend in real time, has tapped into a demographic clamoring for a safe space to gather online. Without any marketing, the startup has collected 450,000 registered users in eight months, 70% of whom are teenage girls. So far this year, users have clocked 1 million hours inside Squad calls.

“Completely accidentally we’ve developed this global audience of users and it’s girls all over the world,” Squad co-founder and chief executive officer Esther Crawford tells TechCrunch. “In India, it’s girls. In Saudi Arabia, it’s girls. In the U.S., it’s girls. Even without us localizing it, girls all over the world are finding it.”


Squad, the social screen sharing and group video chat app, has pulled together a $5 million investment led by First Round Capital.

Learn from the best but get rid of the shit

Squad’s remote team of six, led by Crawford, a graduate of Oregon State University, parlayed the startup’s compelling founding story and organic growth into a $5 million seed round led by First Round Capital general partner Hayley Barna, the only female partner at the historically all-male early-stage investment fund known for writing the first institutional check into Uber.

Betaworks, Alpha Bridge Ventures, Day One Ventures, Jane VC, Mighty Networks CEO Gina Bianchini, early Snapchat employee Sebastian Gil and Y Combinator — the startup accelerator program Squad completed in the winter of 2018 — also participated in the funding round.

“We want to be a place where girls can come and hang out.” — Squad co-founder and CEO Esther Crawford

Crawford describes Squad, which she’s built alongside her co-founder and chief technology officer Ethan Sutin, as the “anti-bro startup.” Not only because it’s led by a woman and boasts a cap table that’s 30% women and 30% people of color, but because she’s completely rewriting the consumer social startup playbook.

“We are trying to learn from the best in what they did but get rid of the shit,” Crawford said, referring to Snap, WhatsApp, Twitch and others. Twitch, a live-streaming platform for gamers, has become a social gathering place for Gen Z, she explains, but like many other communities on the internet, it’s failed its female users.

“Girls have been completely pushed off of Twitch,” she said. “The Twitch community didn’t want them there and they weren’t friendly to them. For boys, there are places you can go to consume content with other people, like Fortnite, but for girls there hasn’t been a place that’s really broken out. We want to be a place where girls can come and hang out.”

What Crawford and the small team at Squad have realized is that you don’t have to sacrifice growth for user safety and comfort. From the beginning, Squad has made sure users could easily block and report inappropriate behaviors and users, a feature that was an afterthought on many other social tools. They also made users unsearchable unless another user knows their exact username. By prioritizing the security of its primarily female audience, Squad is betting girls will continue coming back to the app and telling their friends about it.

“It’s possible to make girls feel safe and still have growth as a consumer product,” she said. “If people don’t feel safe on your app, they won’t stick around long-term.”

A new playbook

Squad quietly launched in January after pivoting away from building an information-sharing tool called Molly, which was backed with $1.5 million from BBG, Betaworks, CrunchFund and Halogen Ventures. Crawford’s now-14-year-old daughter unintentionally inspired the transition when she proposed her mom create an app where she could peer into her best friends’ phones from afar.


This reporter and Squad CEO Esther Crawford discuss the startup’s growth via Squad video chat.

Using Squad, people can browse memes, pore over DMs, plan a trip on Airbnb, or peruse Tinder or a photo album with a friend via its video chat and screen share features. As Crawford describes it, it’s all the stuff you don’t want to post to Snap or Instagram but want to show your best friends. An app that may seem frivolous or non-essential has quickly become a space online where girls are opting to spend hours intimately engaged with their friends — without fear of stumbling into a troll.

“People can use this digital tech to hang out together instead of it being so performative,” Crawford said.

The downside of Squad’s screen sharing capabilities is that a user can view another user’s Facebook friend’s profile, even if, say, they themselves were blocked from viewing that content. Most apps can be viewed through screen share, aside from premium video streaming apps like Netflix or Amazon Prime Video, so it’s entirely possible someone could use Squad solely to view social content they are otherwise barred from seeing. In response, Crawford says the team is considering alerting users when their Squad chat has been screenshotted. To avoid additional privacy issues, Squad users can’t record or save anything from their calls or replay what happened on Squad.

Like many early-stage startups, the company isn’t making any money yet because the app is free and without ads. As soon as next year, however, Squad plans to monetize the product with in-app purchases, scrapping another rule from the consumer social playbook, which has long encouraged companies to expand their user base before trying to profit off users. (See: The Snapchat Monetization Problem.)

Techno-optimism

Crawford, a product marketing veteran, grew up in a cult in Oregon where girls were barred from wearing makeup, watching television or listening to music. But because the internet was so new, its dangers had yet to be discovered — and, miraculously, she was allowed to go online. She quickly made connections with people all over the world thanks to everyone’s favorite messaging tool at the time, AOL Instant Messenger.

The experience planted in her a deep love for the internet and a desire to share her life online. After developing a community through AIM, Crawford became one of the very first original content creators on YouTube and garnered millions of views on her videos. Without trying, she became an influencer, long before the term entered the zeitgeist.


She used her newfound digital prowess to launch one of the first social marketing agencies, where her clients included Weight Watchers and K-Mart, legacy brands that had no idea how to tap into her native digital communities. Ultimately, Crawford landed in the tech startup world, hopping from Series A startup to Series A startup, offering up her product marketing skills before her daughter’s idea prompted her to go into business on her own again.

“I’m a techno-optimist and yet, so many of these tech companies we thought were going to connect people turned out to have accidentally made people more lonely,” she said. “With a different lens and approach, I thought there could be an app that built bridges.”

Now with a new bout of funding, Squad can implement strategic marketing campaigns, continue adding integrations with complementary platforms (the startup has just announced a new integration with YouTube) and hire product designers. The next few years will be critical to Squad’s success as it looks to convince young people to give it a permanent spot on their home screens.

For Crawford, what’s most important, aside from the growing group of teenagers using Squad, is to make sure only good people see a big payday thanks to her great idea: “I am ready to do everything I can to make Squad successful and make sure our success has a positive downstream effect so that we have great people on our team that get rich off our success.”

Facebook still full of groups trading fake reviews, says consumer group

Facebook has failed to clean up the brisk trade in fake product reviews taking place on its platform, an investigation by the consumer association Which? has found.

In June both Facebook and eBay were warned by the UK’s Competition and Markets Authority (CMA) that they needed to do more to tackle the sale of fake product reviews. On eBay, sellers were offering batches of five-star product reviews in exchange for cash, while Facebook’s platform was found hosting multiple groups where members solicited fake review writers in exchange for free products or cash (or both).

A follow-up look at the two platforms by Which? has found a “significant improvement” in the number of eBay listings selling five-star reviews — with the group saying it found just one listing selling five-star reviews after the CMA’s intervention.

But little appears to have been done to prevent Facebook groups trading in fake reviews — with Which? finding dozens of Facebook groups that it said “continue to encourage incentivised reviews on a huge scale”.

Here’s a sample ad we found doing a ten-second search of Facebook groups… (one of a few we saw that specify they’re after US reviewers)


Which? says it found more than 55,000 new posts across just nine Facebook groups trading fake reviews in July, which it said were generating hundreds “or even thousands” of posts per day.

It points out the true figure is likely to be higher because Facebook caps the number of posts it quantifies at 10,000 (and three of the ten groups had hit that ceiling).

Which? also found Facebook groups trading fake reviews that had sharply increased their membership over a 30-day period, adding that it was “disconcertingly easy to find dozens of suspicious-looking groups in minutes”.

We also found a quick search of Facebook’s platform instantly serves a selection of groups soliciting product reviews…


Which? says it looked in detail at ten groups (it doesn’t name them), all of which contained the word ‘Amazon’ in their group name, and found that all of them had seen their membership rise over a 30-day period — with some seeing big spikes in members.

“One Facebook group tripled its membership over a 30-day period, while another (which was first started in April 2018) saw member numbers double to more than 5,000,” it writes. “One group had more than 10,000 members after 4,300 people joined it in a month — a 75% increase, despite the group existing since April 2017.”

Which? speculates that the surge in Facebook group members could be a direct result of eBay cracking down on fake reviews sellers on its own platform.

“In total, the 10 [Facebook] groups had a staggering 105,669 members on 1 August, compared with a membership of 85,647 just 30 days prior to that — representing an increase of nearly 19%,” it adds.

Across the ten groups it says there were more than 3,500 new posts promoting incentivised reviews in a single day. Which? also notes that Facebook’s algorithm regularly recommended similar groups to those that appeared to be trading in fake reviews — on the ‘suggested for you’ page.

It also says it found admins of groups it joined listing alternative groups to join in case the original is shut down.

Commenting in a statement, Natalie Hitchins, Which?’s head of products and services, said: “Our latest findings demonstrate that Facebook has systematically failed to take action while its platform continues to be plagued with fake review groups generating thousands of posts a day.

“It is deeply concerning that the company continues to leave customers exposed to poor-quality or unsafe products boosted by misleading and disingenuous reviews. Facebook must immediately take steps to not only address the groups that are reported to it, but also proactively identify and shut down other groups, and put measures in place to prevent more from appearing in the future.”

“The CMA must now consider enforcement action to ensure that more is being done to protect people from being misled online. Which? will be monitoring the situation closely and piling on the pressure to banish these fake review groups,” she added.

Responding to Which?’s findings in a statement, CMA senior director George Lusty said: “It is unacceptable that Facebook groups promoting fake reviews seem to be reappearing. Facebook must take effective steps to deal with this problem by quickly removing the material and stop it from resurfacing.”

“This is just the start – we’ll be doing more to tackle fake and misleading online reviews,” he added. “Lots of us rely on reviews when shopping online to decide what to buy. It is important that people are able to trust they are genuine, rather than something someone has been paid to write.”

In a statement, Facebook claimed it has removed nine of the ten groups Which? reported to it and said it is “investigating the remaining group”.

“We don’t allow people to use Facebook to facilitate or encourage false reviews,” it added. “We continue to improve our tools to proactively prevent this kind of abuse, including investing in technology and increasing the size of our safety and security team to 30,000.”

Libra, Facebook’s global digital currency plan, is fuzzy on privacy, watchdogs warn

Privacy commissioners from the Americas, Europe, Africa and Australasia have put their names to a joint statement raising concerns about a lack of clarity from Facebook over how data protection safeguards will be baked into its planned cryptocurrency project, Libra.

Facebook officially unveiled its big bet to build a global digital currency using blockchain technology in June, steered by a Libra Association with Facebook as a founding member. Other founding members include payment and tech giants such as Mastercard, PayPal, Uber, Lyft, eBay, VC firms including Andreessen Horowitz, Thrive Capital and Union Square Ventures, and not-for-profits such as Kiva and Mercy Corps.

At the same time Facebook announced a new subsidiary of its own business, Calibra, which it said will create financial services for the Libra network, including offering a standalone wallet app that it expects to bake into its messaging apps, Messenger and WhatsApp, next year — raising concerns it could quickly gain a monopolistic hold over what’s being couched as an ‘open’ digital currency network, given the dominance of the associated social platforms where it intends to seed its own wallet.

In its official blog post hyping Calibra, Facebook avoided any talk of how much market power it might wield via its ability to promote the wallet to its existing 2.2BN+ global users, but it did touch on privacy — writing that it will “also take steps to protect your privacy”, and claiming it would not share “account information or financial data with Facebook or any third party without customer consent”.

Except for when it admitted it would; the same paragraph states there will be “limited cases” when it may share user data. These cases will “reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra”, the blog adds. (A Calibra Customer Commitment provides little more detail than a few sample instances, such as “preventing fraud and criminal activity”.)

All of that might sound reassuring enough on the surface but Facebook has used the fuzzy notion of needing to keep its users ‘safe’ as an umbrella justification for tracking non-Facebook users across the entire mainstream Internet, for example.

So the devil really is in the granular detail of anything the company claims it will and won’t do.

Hence the lack of comprehensive details about Libra’s approach to privacy and data protection is causing professional watchdogs around the world to worry.

“As representatives of the global community of data protection and privacy enforcement authorities, collectively responsible for promoting the privacy of many millions of people around the world, we are joining together to express our shared concerns about the privacy risks posed by the Libra digital currency and infrastructure,” they write. “Other authorities and democratic lawmakers have expressed concerns about this initiative. These risks are not limited to financial privacy, since the involvement of Facebook Inc., and its expansive categories of data collection on hundreds of millions of users, raises additional concerns. Data protection authorities will also work closely with other regulators.”

Among the commissioners signing the statement is the FTC’s Rohit Chopra, one of two commissioners at the US Federal Trade Commission who dissented from the $5BN settlement order passed by a 3:2 vote last month.

Also raising concerns about Facebook’s transparency about how Libra will comply with privacy laws and expectations in multiple jurisdictions around the world are: Canada’s privacy commissioner Daniel Therrien; the European Union’s data protection supervisor, Giovanni Buttarelli; UK Information commissioner, Elizabeth Denham; Albania’s information and data protection commissioner, Besnik Dervishi; the president of the Commission for Information Technology and Civil Liberties for Burkina Faso, Marguerite Ouedraogo Bonane; and Australia’s information and privacy commissioner, Angelene Falk.

In the joint statement — on what they describe as “global privacy expectations of the Libra network” — they write:

In today’s digital age, it is critical that organisations are transparent and accountable for their personal information handling practices. Good privacy governance and privacy by design are key enablers for innovation and protecting data – they are not mutually exclusive. To date, while Facebook and Calibra have made broad public statements about privacy, they have failed to specifically address the information handling practices that will be in place to secure and protect personal information. Additionally, given the current plans for a rapid implementation of Libra and Calibra, we are surprised and concerned that this further detail is not yet available. The involvement of Facebook Inc. as a founding member of the Libra Association has the potential to drive rapid uptake by consumers around the globe, including in countries which may not yet have data protection laws in place. Once the Libra Network goes live, it may instantly become the custodian of millions of people’s personal information. This combination of vast reserves of personal information with financial information and cryptocurrency amplifies our privacy concerns about the Libra Network’s design and data sharing arrangements.

We’ve pasted the list of questions they’re putting to the Libra Network below — which they specify is “non-exhaustive”, saying individual agencies may follow up with more “as the proposals and service offering develops”.

Among the details they’re seeking answers to is clarity on what users’ personal data will be used for and how users will be able to control what their data is used for.

The risk of dark patterns being used to weaken and undermine users’ privacy is another stated concern.

Where user data is shared, the commissioners are also seeking clarity on the types of data and the de-identification techniques that will be used — on the latter point, researchers have demonstrated for years that just a handful of data points can be used to re-identify credit card users from an ‘anonymous’ data-set of transactions, for example.
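The re-identification result the research literature describes can be shown with a toy sketch. All of the data here is invented for illustration: a de-identified transaction log keyed by pseudonyms, and an attacker who knows just two (date, shop) points about their target.

```python
# Toy illustration (hypothetical data) of re-identification from sparse
# points: in a "de-identified" transaction log, a couple of externally
# observed visits can single out one pseudonym — and thereby expose the
# rest of that person's record.
ANONYMISED_LOG = {
    "user_001": [("2019-07-01", "cafe"), ("2019-07-03", "bookshop"), ("2019-07-09", "gym")],
    "user_002": [("2019-07-01", "cafe"), ("2019-07-04", "cinema"), ("2019-07-09", "bar")],
    "user_003": [("2019-07-02", "bakery"), ("2019-07-03", "bookshop"), ("2019-07-08", "gym")],
}

def reidentify(known_points, log):
    """Return the pseudonymous IDs whose traces contain every known point."""
    return [uid for uid, trace in log.items()
            if all(point in trace for point in known_points)]

# One known point still matches two candidates; a second known point
# narrows the whole log to a single pseudonym.
candidates = reidentify([("2019-07-01", "cafe")], ANONYMISED_LOG)
unique_match = reidentify(
    [("2019-07-01", "cafe"), ("2019-07-03", "bookshop")], ANONYMISED_LOG)
```

The point of the sketch is that removing names alone does not anonymize a transaction trace: the pattern of dates and places is itself close to unique per person.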

Here’s the full list of questions being put to the Libra Network:

  • 1. How can global data protection and privacy enforcement authorities be confident that the Libra Network has robust measures to protect the personal information of network users? In particular, how will the Libra Network ensure that its participants will:

    • a. provide clear information about how personal information will be used (including the use of profiling and algorithms, and the sharing of personal information between members of the Libra Network and any third parties) to allow users to provide specific and informed consent where appropriate;
    • b. create privacy-protective default settings that do not use nudge techniques or “dark patterns” to encourage people to share personal data with third parties or weaken their privacy protections;
    • c. ensure that privacy control settings are prominent and easy to use;
    • d. collect and process only the minimum amount of personal information necessary to achieve the identified purpose of the product or service, and ensure the lawfulness of the processing;
    • e. ensure that all personal data is adequately protected; and
    • f. give people simple procedures for exercising their privacy rights, including deleting their accounts, and honouring their requests in a timely way.
  • 2. How will the Libra Network incorporate privacy by design principles in the development of its infrastructure?

  • 3. How will the Libra Association ensure that all processors of data within the Libra Network are identified, and are compliant with their respective data protection obligations?

  • 4. How does the Libra Network plan to undertake data protection impact assessments, and how will the Libra Network ensure these assessments are considered on an ongoing basis?

  • 5. How will the Libra Network ensure that its data protection and privacy policies, standards and controls apply consistently across the Libra Network’s operations in all jurisdictions?

  • 6. Where data is shared amongst Libra Network members:

    • a. what data elements will be involved?

    • b. to what extent will it be de-identified, and what method will be used to achieve de-identification?
    • c. how will Libra Network ensure that data is not re-identified, including by use of enforceable contractual commitments with those with whom data is shared?

We’ve reached out to Facebook for comment.

8chan’s new internet host was kicked off its own host just hours later

The bottom-feeding forum 8chan, which grew popular by embracing fringe hateful internet cultures, is having trouble staying online. After Cloudflare dropped its protection of the site yesterday, 8chan adopted the services of Bitmitigate, but soon lost those too when the company providing Bitmitigate with infrastructure dropped it. Deplatforming works, but it can be complicated, so here’s a quick explanation of what these pieces are and why we’re witnessing this hot-potato act in the wake of the latest tragic mass shootings.

To put a website online, people generally need three things.

First, a name registrar. This is the company that officially owns and licenses to you the specific series of letters and numbers that make up your website’s name, like techcrunch.com.

Second, a domain name service (DNS). This works in the background to turn a request, like typing facebook.com into a browser bar, into action: looking up the IP address where Facebook can be reached, so a connection can be established between that server and the user.

Third, an actual server. Your data has to physically be stored somewhere with a fat pipe to the internet so others can access it. Servers are usually “virtualized” in that you don’t really rent five computers somewhere but rather a certain amount of capacity on a huge shared server farm.

Increasingly a fourth piece is necessary: caching and denial-of-service attack protection. This is a service like Cloudflare’s, which sits in front of the website and sort of sifts the traffic so attacks are turned away and the website stays up even during other kinds of outages. It’s not required, but is highly recommended.
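The name-resolution and connection steps above can be sketched in a few lines of Python. This is a toy illustration, not how a browser is actually implemented, and it uses `localhost` so the example works without touching the network:

```python
import socket

def resolve(hostname):
    """Step 2 (DNS): turn a human-readable name into an IP address."""
    return socket.gethostbyname(hostname)

def connect(ip, port=80, timeout=5):
    """Step 3: open a TCP connection to the server at that address."""
    return socket.create_connection((ip, port), timeout=timeout)

# 'localhost' resolves via the local hosts file, no network needed:
print(resolve("localhost"))  # 127.0.0.1 on most systems
```

When a caching/DDoS-protection layer such as Cloudflare sits in front of a site, the address DNS hands back belongs to the proxy rather than the origin server, which is exactly why losing that layer suddenly exposes a site to the full force of the internet.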

When 8chan lost Cloudflare, it was exposed to the full force of the internet, likely including DDoS and other attacks, and was brought offline. But it soon found a new caching service in Bitmitigate.

Bitmitigate is one of several related businesses that provide various hosting services, all flying under the banner of one Rob Monster. In a statement to TechCrunch, Monster said that his companies “fill the ever growing need for a neutral service provider that will not arbitrarily terminate accounts based on social or political pressure.”

As evidence of this, Monster’s Epik domain name and hosting service is the current refuge of Gab, the right-wing social network populated by those excommunicated from Facebook, Twitter and other services with robust hate speech and abuse rules. Same for Daily Stormer, the white supremacist news site and forum. If they aren’t breaking the law, Monster said, it’s up to the provider whether to host them, and he chose to host. That may change, though.

“We have also not made a definitive decision about whether to provide DDoS mitigation or Content Delivery services for them. We will evaluate this in the coming days,” Monster wrote.

So 8chan went to Bitmitigate, but it wasn’t long before the forum had that rug pulled out from under them as well. Turns out that Epik and Bitmitigate were purchasing services from a larger service provider called Voxility.

If this sounds over-complicated, just think of it this way: A cafe needs to provide internet to its customers, so it buys a high-speed connection from an ISP. Then it provides access to that connection to its customers using its own little portal or control method, maybe so you have to buy a coffee before you can get online. This is a bit like that: Epik was reselling the services of Voxility at a markup to a specific set of customers. It’s a common enough thing online, but as we saw today, a bit risky.

Turns out Voxility wants no part of hosting 8chan, and after being alerted (by former Facebook CSO Alex Stamos) that one of its clients had decided to do so, it simply pulled the plug on Epik’s services; right now Bitmitigate, Daily Stormer and 8chan are all down. They deplatformed the platform.

See, the problem with bigger service providers is they like to limit their exposure to things like 8chan, which are bad optics waiting to happen. If you’re the host of a service to which mass murderers frequently post their pre-shooting screeds to an adoring audience of conspiracy theorists and incels, people might just take their business elsewhere. There’s no shortage of options.

So the larger these services get, the more likely it is they will have something in place to give them carte blanche to kick off or refuse service to sites and actors they believe to be bad business. It’s a bit sad that deplatforming hate has to have a business case, but for now let’s just be happy that case exists.

A hate-promoting site doesn’t just have to find someone who will provide each of the critical services listed at the start; it has to find someone who will provide them to a high-risk client for a reasonable price. That’s getting to be rather difficult.

As of this writing, 8chan is still down and Bitmitigate is still recovering from having its services yanked by Voxility. Who will host the hosts? Increasingly few internet services companies want to be involved with toxic internet subcultures and even real-life toxic cultures like white supremacy.

While as many have pointed out this does create new problems, it also does a pretty good number on some of the problems we’ve already got. I’ll take that over inaction any day.

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns about the scope of data processing set out in the Conservative Party’s Privacy Policy being flagged to it by a Twitter user, the Information Commissioner’s Office replied that: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently attached to an online call to action that asks Brits to tell the party what the most “important issue” to them and their family is, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick from a pre-populated list of 18 issues the three most important to them. The list runs the gamut from the National Health Service to brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His government has been reduced to a working majority of just one MP after the party lost a by-election to the Liberal Democrats last week, even as an October 31 brexit-related deadline fast approaches.

People who submit their personal data to the Conservatives’ online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you will find a laundry list of examples of the types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages contained within them.

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which it launched in response to concerns about the lack of transparency around political ads on its platform, saying it will retain imprints of ads sent by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS committee later urged the government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, gave little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.


The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm, Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent — in that case using a quiz app wrapper and Facebook’s lack of any enforcement of its platform terms to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian and Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’ — and to do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service, letting advertisers pay to identify and target other people with a similar social media usage profile to those whose contact details they already hold, by uploading those details to Facebook.
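The matching that seeds this kind of audience is mechanically simple. The sketch below is a hypothetical illustration (invented email addresses, simplified normalisation): the advertiser normalises and hashes the contact details it holds, uploads the hashes, and the platform compares them against hashes of its own users’ registered details. Custom-audience uploads commonly use a normalise-then-SHA-256 step along these lines.

```python
import hashlib

def hash_contact(email):
    # Normalise so 'Supporter@Example.com ' and 'supporter@example.com'
    # hash identically, then apply SHA-256.
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Hypothetical advertiser contact list:
advertiser_list = ["Supporter@example.com ", "voter@example.org"]
uploaded = {hash_contact(e) for e in advertiser_list}

# On the platform side, hashing a user's registered email the same way
# reveals whether that user appears on the advertiser's list:
platform_user_email = "supporter@example.com"
print(hash_contact(platform_user_email) in uploaded)  # True
```

From the matched seed audience, the platform can then search its behavioural profiles for other users who look statistically similar — the ‘lookalike’ step, which happens entirely on the platform’s side.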

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile to others in the party’s target database.

Facebook users currently have no way of blocking being targeted by political advertisers on Facebook, nor indeed any way to generally switch off microtargeted ads, which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder: campaign management software developed in the US a decade ago — which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder, linking social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information and collation of social media with membership lists without adequate privacy assessments

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns, including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

Cloudflare will stop service to 8chan, which CEO Matthew Prince describes as a ‘cesspool of hate’

Website infrastructure and security services provider Cloudflare will stop providing service to 8chan, wrote Matthew Prince in a blog post, describing the site as a “cesspool of hate.” Service will be terminated as of midnight Pacific Time.

“The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths,” wrote Prince. “Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit.”

The decision was made after the suspect in this weekend’s mass shooting in El Paso posted a lengthy racist and anti-immigration “manifesto” to 8chan almost immediately before the attack, which killed at least 20 people. Federal authorities are treating the shooting as an act of domestic terrorism, and the Justice Department is also considering bringing federal hate crime and firearm charges, which both potentially carry the death penalty, against the shooter.

8chan was also used by the perpetrator in March’s terrorist attacks on two Christchurch, New Zealand mosques, as well as the suspect in the April shooting at a synagogue in Poway, California.

“The El Paso shooter specifically referenced the Christchurch incident and appears to have been inspired by the largely unmoderated discussions on 8chan which glorified the previous massacre,” wrote Prince. “In a separate tragedy, the suspected killer in the Poway, California synagogue shooting also posted a hate-filled ‘open letter’ on 8chan. 8chan has repeatedly proven itself to be a cesspool of hate.”

Before Cloudflare announced its decision to terminate service to 8chan, Prince spoke to reporters from The Guardian and The New York Times, telling The Guardian that he wanted to “kick 8chan off our network,” but also, in the later interview with The New York Times, expressing hesitation because terminating service may make it harder for law enforcement officials to access information on the site.

(8chan creator Fredrick Brennan, who intended the site to be a free speech alternative to message board 4chan but has since distanced himself from the site and its current owners, told The New York Times he now wants it to be shut down.)

In his blog post, Prince explained Cloudflare’s ultimate decision to cut service, writing that more than 19 million internet properties use Cloudflare’s services and the company “[did] not take this decision lightly.”

“We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line,” he wrote. “It will therefore no longer be allowed to use our services.”

This is not the first time Cloudflare has cut off service to a site for enabling the spread of racism and violence. Cloudflare previously terminated service to white supremacist site Daily Stormer in August 2017, but noted that the site went back online after switching to a Cloudflare competitor. “Today, the Daily Stormer is still available and still disgusting. They have bragged that they have more readers than ever. They are no longer Cloudflare’s problem, but they remain the Internet’s problem,” Prince wrote.

Prince says he sees the situation with 8chan playing out in a similar way. Since terminating service to the Daily Stormer, Prince says Cloudflare has worked with law enforcement and civil society organizations, resulting in the company “cooperating around monitoring potential hate sites on our network and notifying law enforcement when there was content that contained a legal process to share information when we can hopefully prevent horrific acts of violence.”

But Prince added that the company “continue[s] to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often,” adding that this is not “due to some conception of the United States’ First Amendment,” since Cloudflare is a private company (and most of its customers, and more than half of its revenue, are outside the United States).

Instead, Cloudflare “will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in those countries through due process of law. And we will comply with those boundaries when and where they are set.”

Cloudflare’s decision may increase scrutiny on Amazon, since 8chan’s operator Jim Watkins sells audiobooks on Amazon.com and Audible, creating what the Daily Beast refers to as “his financial lifeline to the outside world.”

Instagram and Facebook are experiencing outages

Users reported issues with Instagram and Facebook Sunday morning.

[Update as of 12:45 p.m. Pacific] Facebook says the outage affecting its apps has been resolved.

“Earlier today some people may have had trouble accessing the Facebook family of apps due to a networking issue. We have resolved the issue and are fully back up, we apologize for the inconvenience,” a Facebook company spokesperson said in a statement provided to TechCrunch.

The mobile apps wouldn’t load for many users beginning in the early hours of the morning, prompting thousands to take to Twitter to complain about the outage. #facebookdown and #instagramdown are both trending on Twitter at the time of publication.