Tag Archives: United Kingdom

US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who will appear as witnesses in front of the grand committee has yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumours’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General of allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Facebook still full of groups trading fake reviews, says consumer group

Facebook has failed to clean up the brisk trade in fake product reviews taking place on its platform, an investigation by the consumer association Which? has found.

In June both Facebook and eBay were warned by the UK’s Competition and Markets Authority (CMA) they needed to do more to tackle the sale of fake product reviews. On eBay sellers were offering batches of five-star product reviews in exchange for cash, while Facebook’s platform was found hosting multiple groups where members solicited writers of fake reviews in exchange for free products or cash (or both).

A follow-up look at the two platforms by Which? has found a “significant improvement” in the number of eBay listings selling five-star reviews — with the group saying it found just one listing selling five-star reviews after the CMA’s intervention.

But little appears to have been done to prevent Facebook groups trading in fake reviews — with Which? finding dozens of Facebook groups that it said “continue to encourage incentivised reviews on a huge scale”.

Here’s a sample ad we found doing a ten-second search of Facebook groups… (one of a few we saw that specify they’re after US reviewers)


Which? says it found more than 55,000 new posts across just nine Facebook groups trading fake reviews in July, which it said were generating hundreds “or even thousands” of posts per day.

It points out the true figure is likely to be higher because Facebook caps the number of posts it quantifies at 10,000 (and three of the ten groups had hit that ceiling).

Which? also found Facebook groups trading fake reviews that had sharply increased their membership over a 30-day period, adding that it was “disconcertingly easy to find dozens of suspicious-looking groups in minutes”.

We also found a quick search of Facebook’s platform instantly serves a selection of groups soliciting product reviews…


Which? says it looked in detail at ten groups (it doesn’t name them), all of which contained the word ‘Amazon’ in their group name, finding that all had seen their membership rise over a 30-day period — with some seeing big spikes in members.

“One Facebook group tripled its membership over a 30-day period, while another (which was first started in April 2018) saw member numbers double to more than 5,000,” it writes. “One group had more than 10,000 members after 4,300 people joined it in a month — a 75% increase, despite the group existing since April 2017.”
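Those quoted growth figures hang together arithmetically. As a quick sanity check (a sketch only — the starting base of roughly 5,700 members is inferred from the quote, not stated by Which?):

```python
# Rough check of the quoted figures for the third group:
# "more than 10,000 members after 4,300 people joined it in a month — a 75% increase".
# ~10,000 final members minus 4,300 joiners implies a base of ~5,700.
new_members = 4_300
final_size = 10_000
starting_base = final_size - new_members        # ~5,700
growth_pct = new_members / starting_base * 100
print(f"{growth_pct:.0f}% increase")            # prints "75% increase"
```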

Which? speculates that the surge in Facebook group members could be a direct result of eBay cracking down on fake reviews sellers on its own platform.

“In total, the 10 [Facebook] groups had a staggering 105,669 members on 1 August, compared with a membership of 85,647 just 30 days prior to that — representing an increase of nearly 19%,” it adds.

Across the ten groups it says there were more than 3,500 new posts promoting incentivised reviews in a single day. Which? also notes that Facebook’s algorithm regularly recommended similar groups to those that appeared to be trading in fake reviews — on the ‘suggested for you’ page.

It also says it found admins of groups it joined listing alternative groups to join in case the original is shut down.

Commenting in a statement, Natalie Hitchins, Which?’s head of products and services, said: “Our latest findings demonstrate that Facebook has systematically failed to take action while its platform continues to be plagued with fake review groups generating thousands of posts a day.

“It is deeply concerning that the company continues to leave customers exposed to poor-quality or unsafe products boosted by misleading and disingenuous reviews. Facebook must immediately take steps to not only address the groups that are reported to it, but also proactively identify and shut down other groups, and put measures in place to prevent more from appearing in the future.”

“The CMA must now consider enforcement action to ensure that more is being done to protect people from being misled online. Which? will be monitoring the situation closely and piling on the pressure to banish these fake review groups,” she added.

Responding to Which?‘s findings in a statement, CMA senior director George Lusty said: “It is unacceptable that Facebook groups promoting fake reviews seem to be reappearing. Facebook must take effective steps to deal with this problem by quickly removing the material and stop it from resurfacing.”

“This is just the start – we’ll be doing more to tackle fake and misleading online reviews,” he added. “Lots of us rely on reviews when shopping online to decide what to buy. It is important that people are able to trust they are genuine, rather than something someone has been paid to write.”

In a statement Facebook said it had removed nine out of ten of the groups Which? reported to it and claimed to be “investigating the remaining group”.

“We don’t allow people to use Facebook to facilitate or encourage false reviews,” it added. “We continue to improve our tools to proactively prevent this kind of abuse, including investing in technology and increasing the size of our safety and security team to 30,000.”

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns about the scope of data processing set out in the Conservative Party’s Privacy Policy being flagged to it by a Twitter user, the Information Commissioner’s Office replied: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently attached to an online call to action that asks Brits to tell the party the most “important issue” to them and their family, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick from a pre-populated list of 18 issues the three most important to them. The list runs the gamut from the National Health Service to brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His government has been reduced to a working majority of just one MP after the party lost a by-election to the Liberal Democrats last week, even as an October 31 brexit-related deadline fast approaches.

People who submit their personal data to the Conservatives’ online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you will find a laundry list of examples of types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages contained within it. 

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which the company launched in response to concerns about the lack of transparency around political ads on its platform, saying it would retain imprints of ads run by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS committee later urged the government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, gave little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.


The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm, Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent — in that case using a quiz app wrapper and Facebook’s lack of any enforcement of its platform terms to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian and Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’ — and do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service, letting advertisers pay to identify and target other people with a similar social media usage profile to those whose contact details they already hold, by uploading their details to Facebook.

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile to others in the party’s target database.
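Conceptually, the lookalike mechanism described above amounts to a similarity search over behavioural profiles. Facebook’s actual modeling is proprietary; the toy Python sketch below (all feature names and numbers are invented for illustration) just shows the general shape of the idea — average an uploaded ‘seed’ audience into a single profile, then rank everyone else by how closely they resemble it:

```python
# Illustrative sketch only: ranks non-supporters by cosine similarity
# to the centroid of an uploaded "seed" audience of known supporters.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_audience(seed_profiles, candidate_profiles, top_n=2):
    # Average the seed profiles into a single centroid vector...
    dims = len(next(iter(seed_profiles.values())))
    centroid = [sum(p[i] for p in seed_profiles.values()) / len(seed_profiles)
                for i in range(dims)]
    # ...then rank candidates by similarity to that centroid.
    ranked = sorted(candidate_profiles.items(),
                    key=lambda kv: cosine(kv[1], centroid), reverse=True)
    return [user for user, _ in ranked[:top_n]]

# Hypothetical engagement features: [political pages liked, shares/week, video hours]
seed = {"supporter_a": [9.0, 4.0, 1.0], "supporter_b": [8.0, 5.0, 2.0]}
candidates = {
    "user_1": [8.5, 4.5, 1.5],   # very similar to the seed audience
    "user_2": [0.5, 0.2, 9.0],   # dissimilar
    "user_3": [7.0, 4.0, 1.0],
}
print(lookalike_audience(seed, candidates, top_n=2))  # → ['user_1', 'user_3']
```

The point of the sketch is the asymmetry it illustrates: the two users selected never opted in to anything — they are targeted purely because their behavioural profiles resemble those of people who did.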

Facebook users currently have no way of blocking being targeted by political advertisers on Facebook, nor indeed any way to switch off microtargeted ads generally, which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder: campaign management software developed in the US a decade ago — which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder to link social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information, and the collation of social media data with membership lists without adequate privacy assessments.

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns, including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

Muzmatch adds $7M to swipe right on Muslim majority markets

Muzmatch, a matchmaking app for Muslims, has just swiped a $7 million Series A on the back of continued momentum for its community-sensitive approach to soulmate searching for people of the Islamic faith.

It now has more than 1.5M users of its apps, across 210 countries, swiping, matching and chatting online as they try to find ‘the one’.

The funding, which Muzmatch says will help fuel growth in key international markets, is jointly led by US hedge fund Luxor Capital, and Silicon Valley accelerator Y Combinator — the latter having previously selected Muzmatch for its summer 2017 batch of startups. 

Last year the team also took in a $1.75M seed, led by Fabrice Grinda’s FJ Labs, YC and others.

We first covered the startup two years ago when its founders were just graduating from YC. At that time there were two of them building the business: Shahzad Younas and Ryan Brodie — a perhaps unlikely pairing in this context, given Brodie’s lack of a Muslim background. He joined after meeting Younas, who had earlier quit his job as an investment banker to launch Muzmatch. Brodie got excited by the idea and early traction for the MVP. The pair went on to ship a relaunch of the app in mid 2016 which helped snag them a place at YC.

So why did Younas and Brodie unmatch? All the remaining founder can say publicly is that the startup’s investors are buying Brodie’s stake. (While, in a note on LinkedIn — celebrating what he dubs the “bittersweet” news of Muzmatch’s Series A — Brodie writes: “Separate to this raise I decided to sell my stake in the company. This is not from a lack of faith — on the contrary — it’s simply the right time for me to move on to startup number 4 now with the capital to take big risks.”)

Asked what’s harder, finding a steady co-founder or finding a life partner, Younas responds with a laugh. “With myself and Ryan, full credit, when we first joined together we did commit to each other, I guess, a period of time of really going for it,” he ventures, reaching for the phrase “conscious uncoupling” to sum up how things went down. “We both literally put blood sweat and tears into the app, into growing what it is. And for sure without him we wouldn’t be as far as we are now, that’s definitely true.”

“For me it’s a fantastic outcome for him. I’m genuinely super happy for him. For someone of his age and at that time of his life — now he’s got the ability to start another startup and back himself, which is amazing. Not many people have that opportunity,” he adds.

Younas says he isn’t looking for another co-founder at this stage of the business. Though he notes they have just hired a CTO — “purely because there’s so much to do that I want to make sure I’ve got a few people in certain areas”.

The team has grown from just four people seven months ago to 17 now. With the Series A the plan is to further expand headcount to almost 30.

“In terms of a co-founder, I don’t think, necessarily, at this point it’s needed,” Younas tells TechCrunch. “I obviously understand this community a lot. I’ve equally grown in terms of my role in the company and understanding various parts of the company. You get this experience by doing — so now I think definitely it helps having the simplicity of a single founder and really guiding it along.”

Despite the co-founders parting ways, there’s no doubting Muzmatch’s momentum. Aside from solid growth of its user base (it was reporting ~200k users two years ago), its press release touts 30,000+ “successes” worldwide — which Younas says translates to people who have left the app and told it they did so because they met someone on Muzmatch.

He reckons at least half of those left in order to get married — and for a matchmaking app that is the ultimate measure of success.

“Everywhere I go I’m meeting people who have met on Muzmatch. It has been really transformative for the Muslim community where we’ve taken off — and it is amazing to see, genuinely,” he says, suggesting the real success metric is “much higher because so many people don’t tell us”.

Nor is he worried about being too successful, despite 100 people a day leaving because they met someone on the app. “For us that’s literally the best thing that can happen because we’ve grown mostly by word of mouth — people telling their friends I met someone on your app. Muslim weddings are quite big, a lot of people attend and word does spread,” he says.

Muzmatch was already profitable two years ago (and still is, for “some” months, though that’s not been a focus), which has given it leverage to focus on growing at a pace it’s comfortable with as a young startup. But the plan with the Series A cash is to accelerate growth by focusing attention internationally on Muslim majority markets vs an early focus on markets, including the UK and the US, with Muslim minority populations.

This suggests potential pitfalls lie ahead for the team to manage growth in a sustainable way — ensuring scaling usage doesn’t outstrip their ability to maintain the ‘safe space’ feel the target users need, while at the same time catering to the needs of an increasingly diverse community of Muslim singles.

“We’re going to be focusing on Muslim majority countries where we feel that they would be more receptive to technology. There’s slightly less of a taboo around finding someone online. There’s culture changes already happening, etc.,” he says, declining to name the specific markets they’ll be fixing on. “That’s definitely what we’re looking for initially. That will obviously allow us to scale in a big way going forward.

“We’ve always done [marketing] in a very data-driven way,” he adds, discussing his approach to growth. “Up til now I’ve led on that. Pretty much everything in this company I’ve self taught. So I learnt, essentially, how to build a growth engine, how to scale and optimize campaigns, digital spend, and these big guys have seen our data and they’re impressed with the progress we’ve made, and the customer acquisition costs that we’ve achieved — considering we really are targeting quite a niche market… Up til now we closed our Series A with more than half our seed round in our accounts.”

Muzmatch has also laid the groundwork for the planned international push, having already fully localized the app — which is live in 14 languages, including right to left languages like Arabic.

“We’re localized and we get a lot of organic users everywhere but obviously once you focus on a particular area — in terms of content, in terms of your brand etc — then it really does start to take off,” adds Younas.

The team’s careful catering to the needs of its target community — via things like manual moderation of every profile and an optional chaperoning feature for in-app chats — rather than just ripping off a ‘Tinder for Muslims’ clone, can surely take some credit for helping to grow the market for Muslim matchmaking apps overall.

“Shahzad has clearly made something that people want. He is a resourceful founder who has been listening to his users and in the process has developed an invaluable service for the Muslim community, in a way that mainstream companies have failed to do,” says YC partner Tim Brady in a supporting statement. 

But the flip side of attracting attention and spotlighting a commercial opportunity means Muzmatch now faces increased competition — such as from the likes of Dubai-based Veil, a rival matchmaking app which has recently turned heads with a ‘digital veil’ feature that applies an opaque filter to all profile photos, male and female, until a mutual match is made.

Muzmatch also lets users hide their photos, if they choose. But it has resisted imposing a one-size-fits-all template on the user experience — exactly in order that it can appeal more broadly, regardless of the user’s level of religious adherence (it has even attracted non-Muslim users with a genuine interest in meeting a life partner).

Younas says he’s not worried about fresh faces entering the same matchmaking app space — couching it as a validation of the market.

He’s also dismissive of gimmicky startups that can often pass through the dating space, usually on a fast burn to nowhere. Though he is expecting more competition from major players, such as Tinder-owner Match, which he notes has been eyeing up some of the same geographical markets.

“We know there’s going to be attention in this area,” he says. “Our goal is to basically continue to be the dominant player but for us to race ahead in terms of the quality of our product offering and obviously our size. That’s the goal. Having this investment definitely gives us that ammo to really go for it. But by the same token I’d never want us to be that silly startup that just burns a tonne of money and ends up nowhere.”

“It’s a very complex population, it’s very diverse in terms of culture, in terms of tradition,” he adds of the target market. “We so far have successfully been able to navigate that — of creating a product that does, to the user, marries technology with respecting the faith.”

Feature development is now front of mind for Muzmatch as it moves into the next phase of growth, and as — Younas hopes — it has more time to focus on finessing what its product offers, having bagged investment by proving product market fit and showing traction.

“The first thing that we’re going to be doing is an actual refreshing of our brand,” he says. “A bit of a rebrand, keeping the same name, a bit of a refresh of our brand, tidying that up. Actually refreshing the app, top to bottom. Part of that is looking at changes that have happened in the — call it — ‘dating space’. Because what we’ve always tried to do is look at the good that’s happening, get rid of the bad stuff, and try and package it and make it applicable to a Muslim audience.

“I think that’s what we’ve done really well. And I always wanted to innovate on that — so we’ve got a bunch of ideas around a complete refresh of the app.”

Video is one area they’re experimenting with for future features. TechCrunch’s interview with Younas takes place via a video chat using what looks to be its own videoconferencing platform, though there’s not currently a feature in Muzmatch that lets users chat remotely via video.

Its challenge on this front will be implementing richer comms features in a way that a diverse community of religious users can accept.

“I want to — and we have this firmly on our roadmap, and I hope that it’s within six months — be introducing or bringing ways to connect people on our platform that they’ve never been able to do before. That’s going to be key. Elements of video is going to be really interesting,” says Younas, teasing their thinking around video.

“The key for us is how do we do [videochat] in a way that is sensible and equally gives both sides control. That’s the key.”

Nor will it just be “simple video”. He says they’re also looking at how they can use profile data more creatively, especially for helping more private users connect around shared personality traits.

“There’s a lot of things we want to do within the app of really showing the richness of our profiles. One thing that we have that other apps don’t have are profiles that are really rich. So we have about 22 different data points on the profile. There’s a lot that people do and want to share. So the goal for us is how do we really try and show that off?

“We have a segment of profiles where the photos are private, right, people want that anonymity… so the goal for us is then saying how can we really show your personality, what you’re about in a really good way. And right now I would argue we don’t quite do it well enough. We’ve got a tonne of ideas and part of the rebrand and the refresh will be really emphasizing and helping that segment of society who do want to be private but equally want people to understand what they’re about.”

Where does he want the business to be in 12 months’ time? With a more polished product and “a lot of key features in the way of connecting the community around marriage — or just community in general”.

In terms of growth the aim is at least 4x where they are now.

“These are ambitious targets. Especially given the amount that we want to re-engineer and rebuild but now is the time,” he adds. “Now we have the fortune of having a big team, of having the investment. And really focusing and finessing our product… Really give it a lot of love and really give it a lot of the things we’ve always wanted to do and never quite had the time to do. That’s the key.

“I’m personally super excited about some of the stuff coming up because it’s a big enabler — growing the team and having the ability to really execute on this a lot faster.”

‘The Great Hack’: Netflix doc unpacks Cambridge Analytica, Trump, Brexit and democracy’s death

It’s perhaps not for nothing that The Great Hack – the new Netflix documentary about the connections between Cambridge Analytica, the US election and Brexit, out on July 23 – opens with a scene from Burning Man. There, Brittany Kaiser, a former employee of Cambridge Analytica, scrawls the name of the company onto a strut of ‘the temple’ that will eventually get burned in that fiery annual ritual. It’s an apt opening.

There are probably many of us who’d wish quite a lot of the last couple of years could be thrown into that temple fire, but this documentary is the first I’ve seen to expertly unpick the real-world dumpster fire of social media, dark advertising and global politics, all of which have become inextricably, and often fatally, combined.

The documentary is also the first that you could plausibly recommend to those of your relatives and friends who don’t work in tech, as it explains how social media – specifically Facebook – is now manipulating our lives and society, whether we like it or not.

As New York Professor David Carroll puts it at the beginning, Facebook gives “any buyer direct access to my emotional pulse” – and that included political campaigns during the Brexit referendum and the Trump election. Privacy campaigner Carroll is pivotal to the film’s story of how our data is being manipulated and essentially kept from us by Facebook.

The UK’s referendum decision to leave the European Union, in fact, became “the petri dish” for a Cambridge Analytica experiment, says Guardian journalist Carole Cadwalladr. She broke the story of how the political consultancy, led by Eton-educated CEO Alexander Nix, applied techniques normally used by ‘psyops’ operatives in Afghanistan to the democratic operations of the US and UK, and many other countries, over a chilling 20+ year history. Watching this film, you start to wonder if history has been warped towards a sickening dystopia.


The petri-dish of Brexit worked. Millions of adverts, explains the documentary, targeted individuals, exploiting fear and anger, to switch them from ‘persuadables’, as CA called them, into passionate advocates for, first Brexit in the UK, and then Trump later on.

Switching to the US, the filmmakers show how CA worked directly with Trump’s “Project Alamo” campaign, spending a million dollars a day on Facebook ads ahead of the 2016 election.

The film expertly explains the timeline of how CA had first worked on Ted Cruz’s campaign, and nearly propelled that lackluster candidate into first place in the Republican nomination race. It was then that the Trump campaign picked up on CA’s military-like operation.

After loading up the psychographic survey information CA had obtained from Aleksandr Kogan, the Cambridge University academic who orchestrated the harvesting of Facebook data, the world had become their oyster. Or, perhaps more accurately, their oyster farm.

Back in London, Cadwalladr notices triumphant Brexit campaigners fraternizing with Trump and starts digging. There is a thread connecting them to Breitbart owner Steve Bannon. There is a thread connecting them to Cambridge Analytica. She tugs on those threads and, like that iconic scene in ‘The Hurt Locker’ where all the threads pull up unexploded mines, she starts to realize that Cambridge Analytica links them all. She needs a source though. That came in the form of former employee Chris Wylie, a brave young man who was able to unravel many of the CA threads.

But the film’s attention is often drawn back to Kaiser, who had worked first on US political campaigns and then on Brexit for CA. She had been drawn to the company by smooth-talking CEO Nix, who begged: “Let me get you drunk and steal all of your secrets.”

But was she a real whistleblower? Or was she trying to cover her tracks? How could someone who’d worked on the Obama campaign switch to Trump? Was she a victim of Cambridge Analytica, or one of its villains?

British political analyst Paul Hilder manages to get her to come to the UK to testify before a parliamentary inquiry. There is high drama as her part in the story unfolds.

Kaiser appears in various guises which vary from idealistically naive to stupid, from knowing to manipulative. It’s almost impossible to know which. But hearing about her revelation as to why she made the choices she did… well, it’s an eye-opener.


Both she and Wylie have complex stories in this tale, where not everything seems to be as it is, reflecting our new world, where truth is increasingly hard to determine.

Other characters come and go in this story. Zuckerberg makes an appearance in Congress and we learn of Facebook’s casual relationship to its own complicity in these political earthquakes. Although if you’re reading TechCrunch, then you will probably know at least part of this story.

The documentary was created for Netflix by Jehane Noujaim and Karim Amer, the Egyptian-American filmmakers behind “The Square”, about the Egyptian revolution of 2011. To them, the way Cambridge Analytica applied its methods to online campaigning was just as much a revolution as Egyptians toppling a dictator from Cairo’s iconic Tahrir Square.

For them, the huge irony is that “psyops”, or psychological operations used on Muslim populations in Iraq and Afghanistan after the 9/11 terrorist attacks, ended up being used to influence Western elections.

Cadwalladr stands head and shoulders above all as a bastion of dogged journalism, even as she is attacked from all quarters, and still is to this day.

What you won’t find out from this film is what happens next. For many, questions remain on the table: What will happen now that Facebook is entering cryptocurrency? Will that mean it could be used for dark election campaigning? Will people be paid for their votes next time, not just in Likes? Kaiser has a bitcoin logo on the back of her phone. Is that connected? The film doesn’t comment.

But it certainly unfolds like a slow-motion car crash, where democracy is the car and you’re inside it.

Italy stings Facebook with $1.1M fine for Cambridge Analytica data misuse

Italy’s data protection watchdog has issued Facebook with a €1 million (~$1.1M) fine for violations of local privacy law attached to the Cambridge Analytica data misuse scandal.

Last year it emerged that up to 87 million Facebook users had had their data siphoned out of the social media giant’s platform by an app developer working for the controversial (and now defunct) political data company, Cambridge Analytica.

The offences in question occurred prior to Europe’s tough new data protection framework, GDPR, coming into force — hence the relatively small size of the fine in this case, which has been calculated under Italy’s prior data protection regime. (Whereas fines under GDPR can scale as high as 4% of a company’s annual global turnover.)
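To make the gap between the two regimes concrete, here is an illustrative sketch of the GDPR fine ceiling mentioned above. The 4% turnover cap is per the regulation (Article 83 also sets a €20M floor for this tier, whichever is higher); the turnover figure used below is hypothetical, not Facebook’s actual revenue.

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine: the greater of EUR 20M
    or 4% of the company's annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A company with a hypothetical EUR 50B annual turnover could face a
# fine of up to EUR 2B -- three orders of magnitude above the EUR 1M
# issued here under Italy's prior data protection regime.
print(max_gdpr_fine(50_000_000_000))  # 2000000000.0
```

For smaller companies the €20M floor dominates, which is why the cap is expressed as “whichever is higher”.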

We’ve reached out to Facebook for comment.

Last year the UK’s DPA similarly issued Facebook with a £500k penalty for the Cambridge Analytica breach, although Facebook is appealing.

The Italian regulator says 57 Italian Facebook users downloaded Dr Aleksandr Kogan‘s Thisisyourdigitallife quiz app, which was the app vehicle used to scoop up Facebook user data en masse — with a further 214,077 Italian users also having their personal information processed without their consent as a result of how the app could access data on each user’s Facebook friends.
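The regulator’s figures illustrate how friend-graph access amplified a handful of app installs into mass data collection. A back-of-the-envelope sketch, using only the numbers reported for Italy:

```python
# Figures reported by the Italian DPA for Italy alone.
direct_installs = 57        # users who downloaded the quiz app
friends_affected = 214_077  # friends whose data was processed without consent

total_affected = direct_installs + friends_affected
# Each install exposed, on average, thousands of friends' profiles.
amplification = friends_affected / direct_installs

print(total_affected, round(amplification))  # 214134 3756
```

The same mechanism, applied globally, is how up to 87 million users’ data was siphoned from a comparatively tiny pool of app installs.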

In an earlier intervention in March, the Italian regulator challenged Facebook over the misuse of the data — and the company opted to pay a reduced amount of €52,000 in the hopes of settling the matter.

However the Italian DPA has decided that the scale of the violation of personal data and consent disqualifies the case for a reduced payment — so it has now issued Facebook with a €1M fine.

“The sum takes into account, in addition to the size of the database, also the economic conditions of Facebook and the number of global and Italian users of the company,” it writes in a press release on its website [translated by Google Translate].

At the time of writing its full decision on the case was not available.

Late last year the Italian regulator fined Facebook €10M for misleading users over its sign in practices.

While, in 2017, it also slapped the company with a €3M penalty for a controversial decision to begin helping itself to WhatsApp users’ data — despite the latter’s prior claims that user data would never be shared with Facebook.

Going forward, where Facebook’s use (and potential misuse) of Europeans’ data is concerned, all eyes are on the Irish Data Protection Commission; aka its lead regulator in the region on account of the location of Facebook’s international HQ.

The Irish DPC has a full suite of open investigations into Facebook and Facebook-owned companies — covering major issues such as security breaches and questions over the legal basis it claims to process people’s data, among a number of other big tech related probes.

The watchdog has suggested decisions on some of this tech giant-related case-load could land this summer.

Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Albeit, it would be pretty awkward if he came on stage claiming Facebook is flawless and humanity needs to take a really long hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder, Mark Zuckerberg. Though it raises way more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; or how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board can reflect the full spectrum diversity of humanity that’s now using Facebook’s 2BN+ user global platform?

None of these questions were raised let alone addressed in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate. Under this board that Facebook is going to set up, this will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations — whereas we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes too stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” without enough of a data free-for-all, and that the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability —  making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which it’s here, via Clegg (a former European politician), leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s senate pitch last year when the Facebook founder urged a trade off of privacy rights to allow Western companies to process people’s facial biometrics to not fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why (and he avoided this point entirely) Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies.

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its enquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the EU referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used/misused; Ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because the key factors — like the use of tools like groups on Facebook, the use of inauthentic fake accounts boosting Russian content, there have been studies showing that’s still going on and was going on during the [European] parliamentary elections, there’s been no proper audit done during the referendum, and in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016, did this happen during the referendum as well, they said to us well we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.

eBay and Facebook told to tackle trade in fake reviews

Facebook and eBay have been warned by the UK’s Competition and Markets Authority (CMA) to do more to tackle the sale of fake reviews on their platforms.

Fake reviews are illegal under UK consumer protection law.

The CMA said today it has found “troubling evidence” of a “thriving marketplace for fake and misleading online reviews”. Though it also writes that it does not believe the platforms themselves are intentionally allowing such content to appear on their sites.

The regulator says it crawled content on eBay and Facebook between November 2018 and June 2019 — finding more than 100 eBay listings offering fake reviews for sale during that time.

Over the same period it also identified 26 Facebook groups where people offered to write fake reviews or where businesses recruited people to write fake and misleading reviews on popular shopping and review sites.

The CMA cites estimates that more than three-quarters of UK Internet users consider online reviews before making a purchase decision — with “billions” of pounds’ worth of people’s spending being influenced by such content. So the incentive driving a market that trades reviews for money is clear.

Commenting in a statement, the CMA’s CEO, Andrea Coscelli, said: “We want Facebook and eBay to conduct an urgent review of their sites to prevent fake and misleading online reviews from being bought and sold.”

“Lots of us rely on reviews when shopping online to decide what to buy. It is important that people are able to trust that reviews are genuine, rather than something someone has been paid to write,” he added. “Fake reviews mean that people might make the wrong choice and end up with a product or service that’s not right for them. They’re also unfair to businesses who do the right thing.”

The regulator says that after it wrote to eBay and Facebook to inform them of its findings they have both “indicated that they will cooperate”.

Facebook also told the CMA that “most” of the 26 groups it identified have now been removed.

The regulator says it expects the sites to put measures in place to ensure all the identified content is removed — and to stop it from reappearing.

At the time of writing, a search of ebay.co.uk for “reviews” returned sellers offering 5 star media reviews, 5 star Google reviews and 5 star Trustpilot reviews as the top three results — one of which was also a sponsored post:

Additional eBay listings included one offering “1/2/3/4/5 Star Freeindex Customer Service Review for business”, priced at £10 and sold by a UK-based seller who has been an eBay member since February 2011; one 5 star review “on Google” which the seller touts with the line “Boost your business and get new Customers” — at a cost of £2.69; one “100% positive FAST” review for £1; and five 5 Star Reviews on Google priced at £15 — offered by a seller apparently based in Portugal who has been an eBay member since March 2014.

A search of UK Facebook groups returned multiple examples of closed groups where sellers appear to be soliciting reviews, either in exchange for goods and/or payment…
Reached for a response to the CMA’s call for measures to be put in place to tackle the illegal trade in fake reviews, Facebook sent us the following statement — attributed to a spokesperson:

Fraudulent activity is not allowed on Facebook, including the trading of fake reviews. We have removed 24 of the 26 groups and pages that the CMA reported to us yesterday and had already removed a number of them prior to the CMA flagging them to us. We know there is more to do which is why we’ve tripled the size of our safety and security team to 30,000 and continue to invest in technology to help proactively prevent abuse of our platform.

An eBay spokesperson also told us:

We have zero tolerance for fake or misleading reviews. We have informed the CMA that all of the sellers they identified have been suspended. The listings have been removed. Listings such as these are strictly against our policy on illegal activity and we will act where our rules are broken. We welcome the report from the CMA and will work closely with them in reviewing its findings.

Tinder adds sexual orientation and gender identity to its profiles

Tinder is adding information about sexual orientation and gender identity to its profiles.

The company worked with the LGBTQ advocacy organization GLAAD on changes to its dating app to make it more inclusive.

Users who want to add or edit information about their sexual orientation can now simply edit their profile. When a Tinder user taps on the “orientation” selection, they can choose up to three terms that describe their sexual orientation. Those descriptions can be either private or public, but will likely be used to inform matches on the app.

Tinder also updated the onboarding experience for new users so they can include their sexual orientation as soon as they sign up for the dating app.

Tinder is also giving users more control over how they order matches. In the “Discovery Preferences” field Tinderers can choose to first see people of the same orientation.

The company said this is a first step in its efforts to be more inclusive. The company will continue to work with GLAAD to refine its products and is making the new features available in the U.S., U.K., Canada, Ireland, India, Australia and New Zealand throughout June.