Tag Archives: Myanmar

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a recently introduced policy that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages — Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed — the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include The University of Maryland, Cornell University and The University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media, and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

Facebook says it filed a US lawsuit to shut down a follower-buying service in New Zealand

Facebook is cracking down on services that promise to help Instagram users buy themselves a large following on the photo app. The social network said today that it has filed a lawsuit against a New Zealand-based company that operates one such ‘follower-buying service.’

The suit is in a U.S. court and is targeting the three individuals running the company, which has not been named.

“The complaint alleges the company and individuals used different companies and websites to sell fake engagement services to Instagram users. We previously suspended accounts associated with the defendants and formally warned them in writing that they were in violation of our Terms of Use, however, their activity persisted,” Facebook wrote in a post.

We were not initially able to get a copy of the lawsuit, but have asked Facebook for further details.

This action comes months after a TechCrunch expose identified 17 follower-buyer services that were using Instagram’s own advertising network to peddle their wares to users of the service.

Instagram responded by saying it had removed all ads as well as disabled all the Facebook Pages and Instagram accounts of the services that we reported were violating its policies. However, just one day later, TechCrunch found advertising from two of the companies on Instagram, while a further five were found to be paying to promote policy-violating follower-growth services.

Facebook has stepped up its efforts to crack down on “inauthentic behavior” on its platforms in recent months. That’s included removing accounts and pages from Facebook and Instagram in countries including India, Pakistan, the Philippines, the U.K., Romania, Iran, Russia, Macedonia and Kosovo this year. Higher-profile action has included the removal of UK far-right activist Tommy Robinson from Facebook, while in Myanmar, where Facebook has been much criticized, the company banned four armed groups.

Facebook bans Myanmar military accounts for ‘enabling human rights abuses’

Facebook is cracking down on the military leadership in Myanmar, the Southeast Asian country where the social network has been identified as a factor contributing to ethnic tension and violence.

The U.S. company said today that it removed accounts belonging to Senior General Min Aung Hlaing, who is commander-in-chief of the armed forces, and the military-owned Myawady television network.

In total, the purge has swept up 18 Facebook accounts, 52 Facebook Pages and an Instagram account after the company “found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country.”

Some 30 million of Myanmar’s 50 million people are estimated to use Facebook, making it a hugely effective broadcast network. But with wide reach comes the potential for misuse, as has been most evident in the U.S.

But the Facebook effect is also huge far from the U.S. A report from the UN issued in March determined that Facebook had played a “determining role” in Myanmar’s crisis. The situation in the country is so severe that an estimated 700,000 Rohingya Muslim refugees are thought to have fled to neighboring Bangladesh following a Myanmar government crackdown that began in August. U.S. Secretary of State Rex Tillerson has labeled the actions as ethnic cleansing.

Facebook’s action today comes a week after an investigative report from Reuters found more than 1,000 posts, comments and images that attacked Rohingya and other Muslim users on the platform.

“During a recent investigation, we discovered that they used seemingly independent news and opinion Pages to covertly push the messages of the Myanmar military. This type of behavior is banned on Facebook because we want people to be able to trust the connections they make,” Facebook said in a statement.

“While we were too slow to act, we’re now making progress – with better technology to identify hate speech, improved reporting tools, and more people to review content,” it added.

Fake news inquiry calls for social media levy to defend democracy

A UK parliamentary committee which has been running a multi-month investigation into the impact of online disinformation on political campaigning — and on democracy itself — has published a preliminary report highlighting what it describes as “significant concerns” over the risks to “shared values and the integrity of our democratic institutions”.

It’s calling for “urgent action” from government and regulatory bodies to “build resilience against misinformation and disinformation into our democratic system”.

“We are faced with a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views,” the DCMS committee warns. “In particular, we heard evidence of Russian state-sponsored attempts to influence elections in the US and the UK through social media, of the efforts of private companies to do the same, and of law-breaking by certain Leave campaign groups in the UK’s EU Referendum in their use of social media.”

The inquiry, which was conceived of and begun in the previous UK parliament — before relaunching in fall 2017, after the June General Election — has found itself slap-bang in the middle of one of the major scandals of the modern era, as revelations about the extent of disinformation and social media data misuse and allegations of election fiddling and law bending have piled up thick and fast, especially in recent months (albeit, concerns have been rising steadily, ever since the 2016 US presidential election and revelations about the cottage industry of fake news purveyors spun up to feed US voters, in addition to Kremlin troll farm activity.)

Yet the Facebook-Cambridge Analytica data misuse saga (which snowballed into a major global scandal this March) is just one of the strands of the committee’s inquiry. Hence they’ve opted to publish multiple reports — the initial one recommending urgent actions for the government and regulators, which will be followed by another report covering the inquiry’s “wider terms of reference” and including a closer look at the role of advertising. (The latter report is slated to land in the fall.)

For now, the committee is suggesting “principle-based recommendations” designed to be “sufficiently adaptive to deal with fast-moving technological developments”. 

Among a very long list of recommendations are:

  • a levy on social media and tech giants to fund a “major investment” in expanding the UK’s data watchdog so the body is able to “attract and employ more technically-skilled engineers who not only can analyse current technologies, but have the capacity to predict future technologies” — with the tech company levy operating in “a similar vein to the way in which the banking sector pays for the upkeep of the Financial Conduct Authority”. Additionally, the committee also wants the government to put forward proposals for an educational levy to be raised by social media companies, “to finance a comprehensive educational framework (developed by charities and non-governmental organisations) and based online”. “Digital literacy should be the fourth pillar of education, alongside reading, writing and maths,” the committee writes. “The DCMS Department should co-ordinate with the Department for Education, in highlighting proposals to include digital literacy, as part of the Physical, Social, Health and Economic curriculum (PSHE). The social media educational levy should be used, in part, by the government, to finance this additional part of the curriculum.” It also wants to see a rolling unified public awareness initiative, part-funded by a tech company levy, to “set the context of social media content, explain to people what their rights over their data are… and set out ways in which people can interact with political campaigning on social media”. “The public should be made more aware of their ability to report digital campaigning that they think is misleading, or unlawful,” it adds.
  • amendments to UK Electoral Law to reflect use of new technologies — with the committee backing the Electoral Commission’s suggestion that “all electronic campaigning should have easily accessible digital imprint requirements, including information on the publishing organisation and who is legally responsible for the spending, so that it is obvious at a glance who has sponsored that campaigning material, thereby bringing all online advertisements and messages into line with physically published leaflets, circulars and advertisements”. It also suggests government should “consider the feasibility of clear, persistent banners on all paid-for political adverts and videos, indicating the source and making it easy for users to identify what is in the adverts, and who the advertiser is”. And urges the government to carry out “a comprehensive review of the current rules and regulations surrounding political work during elections and referenda, including: increasing the length of the regulated period; definitions of what constitutes political campaigning; absolute transparency of online political campaigning; a category introduced for digital spending on campaigns; reducing the time for spending returns to be sent to the Electoral Commission (the current time for large political organisations is six months)”.
  • the Electoral Commission to establish a code for advertising through social media during election periods “giving consideration to whether such activity should be restricted during the regulated period, to political organisations or campaigns that have registered with the Commission”. It also urges the Commission to propose “more stringent requirements for major donors to demonstrate the source of their donations”, and backs its suggestion of a change in the rules covering political spending so that limits are put on the amount of money an individual can donate
  • a major increase in the maximum fine that can be levied by the Electoral Commission (currently just £20,000) — saying this should rather be based on a fixed percentage of turnover. It also suggests the body should have the ability to refer matters to the Crown Prosecution Service, before their investigations have been completed; and urges the government to consider giving it the power to compel organisations that it does not specifically regulate, including tech companies and individuals, to provide information relevant to their inquiries, subject to due process.
  • a public register for political advertising — “requiring all political advertising work to be listed for public display so that, even if work is not requiring regulation, it is accountable, clear, and transparent for all to see”. So it also wants the government to conduct a review of UK law to ensure that digital campaigning is defined in a way that includes online adverts that use political terminology but are not sponsored by a specific political party.
  • a ban on micro-targeted political advertising to lookalikes online, and a minimum limit for the number of voters sent individual political messages to be agreed at a national level. The committee also suggests the Electoral Commission and the ICO should consider the ethics of Facebook or other relevant social media companies selling lookalike political audiences to advertisers during the regulated period, saying they should consider whether users should have the right to opt out from being included in such lookalike audiences
  • a recommendation to formulate a new regulatory category for tech companies that is not necessarily either a platform or a publisher, and which “tightens tech companies’ liabilities”
  • a suggestion that the government consider (as part of an existing review of digital advertising) whether the Advertising Standards Authority could regulate digital advertising. “It is our recommendation that this process should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms,” the committee writes. “This should include both content that has been referred to them for takedown by their users, and other content that should have been easy for the tech companies to identify for themselves. In these cases, failure to act on behalf of the tech companies could leave them open to legal proceedings launched either by a public regulator, and/or by individuals or organisations who have suffered as a result of this content being freely disseminated on a social media platform.”
  • another suggestion that the government consider establishing a “digital Atlantic Charter as a new mechanism to reassure users that their digital rights are guaranteed” — with the committee also raising concerns that the UK risks a privacy loophole opening up after it leaves the EU, when US-based companies will be able to take UK citizens’ data to the US for processing without the protections afforded by the EU’s GDPR framework (as the UK will then be a third country)
  • a suggestion that a professional “global Code of Ethics” should be developed by tech companies, in collaboration with international governments, academics and other “interested parties” (including the World Summit on Information Society), in order to “set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies”. “New products should be tested to ensure that products are fit-for-purpose and do not constitute dangers to the users, or to society,” it suggests. “The Code of Ethics should be the backbone of tech companies’ work, and should be continually referred to when developing new technologies and algorithms. If companies fail to adhere to their own Code of Ethics, the UK Government should introduce regulation to make such ethical rules compulsory.”
  • the committee also suggests the government avoids using the (charged and confusing) term ‘fake news’ — and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation’. It should also support research into the methods by which misinformation and disinformation are created and spread across the internet, including support for fact-checking. “We recommend that the government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites,” it writes
  • a suggestion that tech companies should be subject to security and algorithmic auditing — with the committee writing: “Just as the finances of companies are audited and scrutinised, the same type of auditing and scrutinising should be carried out on the non-financial aspects of technology companies, including their security mechanisms and algorithms, to ensure they are operating responsibly. The Government should provide the appropriate body with the power to audit these companies, including algorithmic auditing, and we reiterate the point that the ICO’s powers should be substantially strengthened in these respects”. The committee also floats the idea that the Competition and Markets Authority considers conducting an audit of the operation of the advertising market on social media (given the risk of fake accounts leading to ad fraud)
  • a requirement for tech companies to make full disclosure of targeting used as part of advert transparency. The committee says tech companies must also address the issue of shell corporations and “other professional attempts to hide identity in advert purchasing”.

How the government will respond to the committee’s laundry list of recommendations for cleaning up online political advertising remains to be seen, though the issue of Kremlin-backed disinformation campaigns was at least raised publicly by the prime minister last year. Theresa May has, however, been rather quieter on revelations about EU referendum-related data misuse and election law breaches.

While the committee uses the term “tech companies” throughout its report to refer to multiple companies, Facebook specifically comes in for some excoriating criticism, with the committee accusing the company of misleading by omission and actively seeking to impede the progress of the inquiry.

It also reiterates its call — for something like the fifth time at this point — for founder Mark Zuckerberg to give evidence. Facebook has provided several witnesses to the committee, including its CTO, but Zuckerberg has declined its requests he appear, even via video link. (And even though he did find time for a couple of hours in front of the EU parliament back in May.)

The committee writes:

We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry, unless it had been expressly asked for. It provided witnesses who have been unwilling or unable to give full answers to the Committee’s questions. This is the reason why the Committee has continued to press for Mark Zuckerberg to appear as a witness as, by his own admission, he is the person who decides what happens at Facebook.

Tech companies are not passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. This manipulation of the sites by tech companies must be made more transparent. Facebook has all of the information. Those outside of the company have none of it, unless Facebook chooses to release it. Facebook was reluctant to share information with the Committee, which does not bode well for future transparency. We ask, once more, for Mr Zuckerberg to come to the Committee to answer the many outstanding questions to which Facebook has not responded adequately, to date.

The committee suggests that the UK’s Defamation Act 2013 means Facebook and other social media companies have a duty to publish and to follow transparent rules — arguing that the Act has provisions which state that “if a user is defamed on social media, and the offending individual cannot be identified, the liability rests with the platform”.

“We urge the government to examine the effectiveness of these provisions, and to monitor tech companies to ensure they are complying with court orders in the UK and to provide details of the source of disputed content — including advertisements — to ensure that they are operating in accordance with the law, or any future industry Codes of Ethics or Conduct. Tech companies also have a responsibility to ensure full disclosure of the source of any political advertising they carry,” it adds.

The committee is especially damning of Facebook’s actions in Burma (as indeed many others have also been), condemning the company’s failure to prevent its platform from being used to spread hate and fuel violence against the Rohingya ethnic minority — and citing the UN’s similarly damning assessment.

“Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed. Time and again we heard from Facebook about mistakes being made and then (sometimes) rectified, rather than designing the product ethically from the beginning of the process. Facebook has a ‘Code of Conduct’, which highlights the principles by which Facebook staff carry out their work, and states that employees are expected to “act lawfully, honestly, ethically, and in the best interests of the company while performing duties on behalf of Facebook”. Facebook has fallen well below this standard in Burma,” it argues.

The committee also directly blames Facebook’s actions for undermining the UK’s international aid efforts in the country — writing:

The United Nations has named Facebook as being responsible for inciting hatred against the Rohingya Muslim minority in Burma, through its ‘Free Basics’ service. It provides people free mobile phone access without data charges, but is also responsible for the spread of disinformation and propaganda. The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority.

The hate speech against the Rohingya—built up on Facebook, much of which is disseminated through fake accounts—and subsequent ethnic cleansing, has potentially resulted in the success of DFID’s [the UK Department for International Development] aid programmes being greatly reduced, based on the qualifications they set for success. The activity of Facebook undermines international aid to Burma, including the UK Government’s work. Facebook is releasing a product that is dangerous to consumers and deeply unethical. We urge the Government to demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity. This is a further example of Facebook failing to take responsibility for the misuse of its platform.

We reached out to Facebook for a response to the committee’s report, and in an email statement — attributed to Richard Allan, VP policy — the company told us:

The Committee has raised some important issues and we were pleased to be able to contribute to their work.

We share their goal of ensuring that political advertising is fair and transparent and agree that electoral rule changes are needed. We have already made all advertising on Facebook more transparent. We provide more information on the Pages behind any ad and you can now see all the ads any Facebook Page is running, even if they are not targeted at you. We are working on ways to authenticate and label political ads in the UK and create an archive of those ads that anyone can search. We will work closely with the UK Government and Electoral Commission as we develop these new transparency tools.

We’re also investing heavily in both people and technology to keep bad content off our services. We took down 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018 — much of it before anyone needed to report this to Facebook. By using technology like machine learning, artificial intelligence and computer vision, we can detect more bad content and take action more quickly.

The statement makes no mention of Burma. Nor indeed of the committee’s suggestion that social media firms should be taxed to pay for defending democracy and civil society against the damaging excesses of their tools.

On Thursday, rolling out its latest ads transparency features, Facebook announced that users could now see the ads a Page is running across Facebook, Instagram, Messenger and its partner network “even if those ads aren’t shown to you”.

To do so, users have to log into Facebook, visit any Page and select “Info and Ads”. “You’ll see ad creative and copy, and you can flag anything suspicious by clicking on ‘Report Ad’,” it added.

It also flagged a ‘more Page information’ feature that users can use to get more details about a Page such as recent name changes and the date it was created.

“The vast majority of ads on Facebook are run by legitimate organizations — whether it’s a small business looking for new customers, an advocacy group raising money for their cause, or a politician running for office. But we’ve seen that bad actors can misuse our products, too,” Facebook wrote, adding that the features being announced “are just the start” of its efforts “to improve” and “root out abuse”.

Brexit drama

The committee’s interim report was pushed out at the weekend ahead of the original embargo as a result of yet more Brexiteer-induced drama — after the campaign director of the UK’s official Brexit supporting ‘Vote Leave’ campaign, Dominic Cummings, deliberately broke the embargo by publishing the report on his blog in order to spin his own response before the report had been widely covered by the media.

Last week the Electoral Commission published its own report following a multi-month investigation into Brexit campaign spending. The oversight body concluded that Vote Leave broke UK electoral law by massively overspending via a joint working arrangement with another Brexit-supporting campaign (BeLeave) — an arrangement via which almost half a million pounds’ worth of additional targeted Facebook ads were co-ordinated by Vote Leave in the final days of the campaign, when it had already reached its spending limit. (Facebook finally released some of the 2016 Brexit campaign ads that had been microtargeted at UK voters via its platform to the committee, which published these ads last week.) Many of Vote Leave’s (up to that point ‘dark’) adverts show the official Brexit campaign generating fake news of its own, with ads that, for example, claim Turkey is on the cusp of joining the EU and flooding the UK with millions of migrants, or spread the widely debunked claim that the UK would be able to spend £350M more per week on the NHS if it left the EU.

In general, dog whistle racism appears to have been Vote Leave’s preferred ‘persuasion’ tactic of microtargeted ad choice — and thanks to Facebook’s ad platform, no one other than each ad’s chosen recipients would have been likely to see the messages.

Cummings also comes in for a major roasting in the committee’s report after his failure to appear before it to answer questions, despite multiple summons (including an unprecedented House of Commons motion ordering him to appear — which he nonetheless also ignored).

“Mr Cummings’ contemptuous behaviour is unprecedented in the history of this Committee’s inquiries and underlines concerns about the difficulties of enforcing co-operation with Parliamentary scrutiny in the modern age,” it writes, adding: “We will return to this issue in our Report in the autumn, and believe it to be an urgent matter for consideration by the Privileges Committee and by Parliament as a whole.”

On his blog, Cummings claims the committee offered him dates they knew he couldn’t do; slams its investigation as ‘fake news’; moans copiously that the committee is made up of Remain-supporting MPs; and argues that the investigation should be under oath — as his major defense seems to be that key campaign whistleblowers are lying. That’s despite ex-Cambridge Analytica employee Chris Wylie and ex-BeLeave treasurer Shahmir Sanni having provided copious amounts of documentary evidence to back up their claims; evidence which both the Electoral Commission and the UK’s data watchdog, the ICO, have found convincing enough to announce some of the largest fines they can issue. In the latter case, the ICO announced its intention to fine Facebook the maximum penalty possible (under the prior UK data protection regime) for failing to protect users’ information. (The data watchdog is continuing to investigate multiple aspects of what is a hugely complex (technically and politically) online ad ops story, and earlier this month commissioner Elizabeth Denham called for an ‘ethical pause’ on the use of online ad platforms for microtargeting voters with political messages, arguing — like the DCMS committee — that there are very real and very stark risks for democratic processes.)

There’s much, much more self-piteous whining on Cummings’ blog for anyone who wants to make themselves queasy reading it. But bear in mind the Electoral Commission’s withering criticism of the Vote Leave campaign specifically — not so much for failure to co-operate with its investigation as for intentional obstruction.

Zuckerberg again snubs UK parliament over call to testify

Facebook has once again eschewed a direct request from the UK parliament for its CEO, Mark Zuckerberg, to testify to a committee investigating online disinformation — without rustling up so much as a fig-leaf-sized excuse to explain why the founder of one of the world’s most used technology platforms can’t squeeze a video call into his busy schedule and spare UK politicians’ blushes.

Which tells you pretty much all you need to know about where the balance of power lies in the global game of (essentially unregulated) U.S. tech platform giants vs (essentially powerless) foreign political jurisdictions.

At the end of an 18-page letter sent to the DCMS committee yesterday — in which Facebook’s UK head of public policy, Rebecca Stimson, provides a point-by-point response to the almost 40 questions the committee said had not been adequately addressed by CTO Mike Schroepfer in a prior hearing last month — Facebook professes itself disappointed that the CTO’s grilling was not deemed sufficient by the committee.

“While Mark Zuckerberg has no plans to meet with the Committee or travel to the UK at the present time, we fully recognize the seriousness of these issues and remain committed to providing any additional information required for their enquiry into fake news,” she adds.

So, in other words, Facebook has served up another big fat ‘no’ to the renewed request for Zuckerberg to testify — after also denying a request for him to appear before it in March, when it instead sent Schroepfer, who claimed to be unable to answer MPs’ questions.

At the start of this month committee chair Damian Collins wrote to Facebook saying he hoped Zuckerberg would voluntarily agree to answer questions. But the MP also took the unprecedented step of warning that if the Facebook founder did not do so, the committee would issue a formal summons for him to appear the next time he sets foot in the UK.

Hence, presumably, that addendum line in Stimson’s letter — saying the Facebook CEO has no plans to travel to the UK “at the present time”.

The committee of course has zero powers to compel testimony from a non-UK national who is resident outside the UK — even though the platform he controls does plenty of business within the UK.

Last month Schroepfer faced five hours of close and at times angry questions from the committee, with members accusing his employer of lacking integrity and displaying a pattern of intentionally deceptive behavior.

The committee has been specifically asking Facebook to provide it with information related to the UK’s 2016 EU referendum for months — and complaining the company has narrowly interpreted its requests to sidestep a thorough investigation.

More recently, research carried out by the Tow Center unearthed Russian-bought, UK-targeted immigration ads relevant to the Brexit referendum among a cache Facebook had provided to Congress — which the company had not disclosed to the UK committee.

At the end of the CTO’s evidence session last month the committee expressed immediate dissatisfaction — claiming there were almost 40 outstanding questions the CTO had failed to answer, and calling again for Zuckerberg to testify.

It possibly overplayed its hand slightly, though, giving Facebook the chance to serve up a detailed (if not entirely comprehensive) point-by-point reply now — and use that to sidestep the latest request for its CEO to testify.

Still, Collins expressed fresh dissatisfaction today, saying Facebook’s answers “do not fully answer each point with sufficient detail or data evidence”, and adding the committee would be writing to the company in the coming days to ask it to address “significant gaps” in its answers. So this game of political question and self-serving answer is set to continue.

In a statement, Collins also criticized Facebook’s response at length, writing:

It is disappointing that a company with the resources of Facebook chooses not to provide a sufficient level of detail and transparency on various points including on Cambridge Analytica, dark ads, Facebook Connect, the amount spent by Russia on UK ads on the platform, data collection across the web, budgets for investigations, and that shows general discrepancies between Schroepfer and Zuckerberg’s respective testimonies. Given that these were follow up questions to questions Mr Schroepfer previously failed to answer, we expected both detail and data, and in a number of cases got excuses.

If Mark Zuckerberg truly recognises the ‘seriousness’ of these issues as they say they do, we would expect that he would want to appear in front of the Committee and answer questions that are of concern not only to Parliament, but Facebook’s tens of millions of users in this country. Although Facebook says Mr Zuckerberg has no plans to travel to the UK, we would also be open to taking his evidence by video link, if that would be the only way to do this during the period of our inquiry.

For too long these companies have gone unchallenged in their business practices, and only under public pressure from this Committee and others have they begun to fully cooperate with our requests. We plan to write to Facebook in the coming days with further follow up questions.

In terms of the answers Facebook provides to the committee in its letter (plus some supporting documents related to the Cambridge Analytica data misuse scandal) there’s certainly plenty of padding on show. And deploying self-serving PR to fuzz the signal is a strategy Facebook has mastered in recent more challenging political times (just look at its ‘Hard Questions’ series to see this tactic at work).

At times Facebook’s response to political attacks certainly looks like an attempt to drown out critical points by deploying self-serving but selective data points — so, for instance, it talks at length in the letter about the work it’s doing in Myanmar, where its platform has been accused by the UN of accelerating ethnic violence as a result of systematic content moderation failures, but declines to state how many fake accounts it’s identified and removed in the market; nor will it disclose how much revenue it generates from the market.

Asked by the committee about the average time to respond to content flagged for review in the region, Facebook also responds in the letter with the vaguest of generalized global data points — saying: “The vast majority of the content reported to us is reviewed within 24 hours.” Nor does it specify if that global average refers to human review — or just an AI parsing the content.

Another of the committee’s questions is: ‘Who was the person at Facebook responsible for the decision not to tell users affected in 2015 by the Cambridge Analytica data misuse scandal?’ On this Facebook provides three full paragraphs of response but does not provide a direct answer specifying who decided not to tell users at that point — so either the company is concealing the identity of the person responsible or there simply was no one in charge of that kind of consideration at that time because user privacy was so low a priority for the company that it had no responsibility structures in place to enforce it.

Another question — ‘who at Facebook heads up the investigation into Cambridge Analytica?’ — does get a straight and short response, with Facebook saying its legal team, led by general counsel Colin Stretch, is the lead there.

It also claims that Zuckerberg himself only became aware of the allegations that Cambridge Analytica may not have deleted Facebook user data in March 2018, following press reports.

Asked what data it holds on dark ads, Facebook provides some information but it’s also being a bit vague here too — saying: “In general, Facebook maintains for paid advertisers data such as name, address and banking details”, and: “We also maintain information about advertiser’s accounts on the Facebook platform and information about their ad campaigns (most advertising content, run dates, spend, etc).”

It also confirms it can retain the aforementioned data even if a page has been deleted — responding to another of the committee’s questions, about how the company would be able to audit advertisers who set up a presence to target political ads during a campaign and then immediately deleted it once the election was over.

Though, given it has said it only generally retains data, we must assume there are instances where it might not retain data, leaving the purveyors of dark ads essentially untraceable via its platform — unless it puts in place a more robust and comprehensive advertiser audit framework.

The committee also asked Facebook’s CTO whether it retains money from fraudulent ads running on its platform, such as the ads at the center of a defamation lawsuit by consumer finance personality Martin Lewis. On this Facebook says it does not “generally” return money to an advertiser when it discovers a policy violation — claiming this “would seem perverse” given the attempt to deceive users. Instead it says it makes “investments in areas to improve security on Facebook and beyond”.

Asked by the committee for copies of the Brexit ads that a Cambridge Analytica linked data company, AIQ, ran on its platform, Facebook says it’s in the process of compiling the content and notifying the advertisers that the committee wants to see the content.

Though it does break out AIQ ad spending related to different vote leave campaigns, and says the individual campaigns would have had to grant the Canadian company admin access to their pages in order for AIQ to run ads on their behalf.

The full letter containing all Facebook’s responses can be read here.

Facebook’s Free Basics program ended quietly in Myanmar last year

As recently as last week, Facebook was touting the growth of its Internet.org app Free Basics, but the program isn’t working out everywhere. As the Outline originally reported and TechCrunch confirmed, the Free Basics program has ended in Myanmar, perhaps Facebook’s most controversial non-Western market at the moment.

Its mission statement pledging to “bring more people online and help improve their lives” is innocuous enough, but Facebook’s Internet.org strategy is extremely aggressive, optimized for explosive user growth in markets that the company has yet to penetrate. Free Basics, an initiative under Internet.org, is an app that offers users in developing markets a “free” Facebook-curated version of the broader internet.

The app provides users willing to sign up for Facebook with internet access that doesn’t count against their mobile plan — stuff like the weather and local news — but keeps them within a specially tailored version of the platform’s walled garden. The result in some countries with previously low connectivity rates was that the social network became synonymous with the internet itself — and as we’ve seen, that can lead to a whole host of very real problems.

While the Outline reports that Free Basics has ended in “half a dozen nations and territories,” including Bolivia, Papua New Guinea, Trinidad and Tobago, Republic of Congo, Anguilla, Saint Lucia and El Salvador, Facebook told TechCrunch that only two international mobile providers have ended the program, leaving room for interpretation about how other countries ended their involvement and why.

As a Facebook spokeswoman told TechCrunch, Facebook is still moving forward with the program:

We’re encouraged by the adoption of Free Basics. It is now available in more than 50 countries with 81 mobile operator partners around the world. Today, more than 1,500 services are available on Free Basics worldwide, provided to people in partnership with mobile operators.

Free Basics remains live with the vast majority of participating operators who have opted to continue offering the service. We remain committed to bringing more people around the world online by breaking down barriers to connectivity.

Facebook confirmed to TechCrunch that Free Basics did indeed end in Myanmar in September 2017, a little over a year after its June 2016 launch in the country. The company clarified that Myanmar’s state-owned telecom Myanma Posts and Telecommunications (MPT) cooperated with the Myanmar government to shut down access to all free services, including Free Basics, in September of last year. The move was part of a broader regulatory effort by the Myanmar government.

In a press release, MPT described how the regulation shaped policy for the country’s three major telecoms:

… As responsible operators, [MPT, Ooredoo and Telenor] abide by sound price competition practices – hallmarks of a healthy marketplace and to adhere to industry best practices and ethical business guidelines.

This [includes] compliance with the authority imposed floor pricing as set out in the Post and Telecommunications Department’s Pricing and Tariff Regulatory Framework of 28 June 2017, including refraining from behavior such as free distribution or sales of SIM cards and supplying services and handsets at below the cost including delivery.

In Myanmar, Facebook’s Free Basics offering ran afoul of the same price floor regulations that restricted the distribution of free SIM cards.

Elsewhere, Facebook’s Free Basics program is winding down for other reasons. Last fall, the telecom Digicel ended access to Free Basics in El Salvador and some of its Caribbean markets. Digicel confirmed to TechCrunch that it stopped offering Free Basics due to commercial reasons on its end and that the decision was not a result of any action by Facebook or Internet.org.

As the Free Basics program is part of a partnership between Facebook and local mobile providers, the latter can terminate access to the app at will. Still, it’s not clear if that was the case in all the countries in which the app is no longer available.

In 2016, India regulated Facebook’s free internet deal out of existence, effectively blocking Facebook’s access to its most sought-after new market in the process. Since then, vocal critics have called Facebook’s Internet.org efforts everything from digital colonialism to a spark in the tinderbox for countries dealing with targeted violence against religious minorities.

Still, according to Facebook, even as some markets dry up, the program is quietly expanding. In late 2017 Facebook added Sudan and Cote d’Ivoire to its Free Basics roster. This year, Facebook launched the initiative in Cameroon and added additional mobile partners in Colombia and Peru.

Myanmar’s access to Free Basics is now restricted, but Facebook indicated that its efforts to connect the country — and its 54 million newly minted or yet to be converted Facebook users — are not over.

Facebook expands downvote tests on comments

Mark Twain had it right: There’s no such thing as a new idea. To wit: Facebook is currently testing arrows to let users ‘up’ vote or ‘down’ vote individual comments in some of its international markets. Digg eat your heart out. Reddit roll over.

This particular trial of upvoting/downvoting buttons is limited to New Zealand and Australia for now, according to Facebook (via The Guardian).

The latest test is a bit different to a downvote test Facebook ran in the US back in February — when it just offered a downvote option. (And if clicked it hid the comment and gave users additional reporting options such as: “Offensive”, “Misleading”, and “Off Topic”.)

The latest international test looks a bit less negative — with an overall score being recorded next to the arrows which could at least reward users with some positive feels if their comment gets lots of upvotes. Negative scores could do the opposite though.

It’s not certain whether the company will commit to rolling out the feature in this form — a spokesman told us this is an early test, with no decision made on whether to roll it out for Facebook’s 2.2BN+ user base — but its various tests in this area suggest it’s interested in having another signal for rating or ranking comments.

In a statement attributed to a spokesperson it told us: “People have told us they would like to see better public discussions on Facebook, and want spaces where people with different opinions can have more constructive dialogue.  To that end, we’re running a small test in New Zealand which allows people to upvote or downvote comments on public Page posts. Our hope is that this feature will make it easier for us to create such spaces, by ranking the comments that readers believe deserve to rank highest, rather than the comments that get the strongest emotional reaction.”

The test looks to have been going on for a couple of weeks at least at this point — a reader emailed TC on April 14 with screengrabs of the trial on comments for New Zealand Commonwealth Games content…

 

Facebook emphasized the feature is not an official dislike button. If rolled out, a spokesman said, it would not replace the suite of emoji reactions the platform already offers so people can record how they feel about an individual Facebook post (and reactions already include thumbs up/thumbs down emoji options).

Rather its focus is on giving users more granular controls to rate comments on Facebook posts.

The spokesman told us the feature test is intended to see if users find the upvote/downvote buttons a productive option to give feedback about how informative or constructive a comment is.

Facebook users with access to the trial who hover over a comment will see a pop-up box that explains how to use the feature, according to the Guardian — with usage of the up arrow encouraged via text telling them to “support better comments” and “press the comment up if you think the comment is helpful or insightful”; while the down arrow is explained as a way to “stop bad comments” and the further instruction to: “Press the down button if a comment has bad intentions or is disrespectful. It’s still ok to disagree in a respectful way.”

It’s likely Facebook is toying with using comment rating buttons to encourage a broader crowdsourcing effort to help it with the myriad, complex content moderation challenges associated with its platform.

Responding quickly enough to hate speech remains a particular problem for it — and a hugely serious one in certain regions and markets.

In Myanmar, for example, the UN has accused the platform of accelerating ethnic violence by failing to prevent the spread of violent hate speech. Local groups have also blasted Facebook for failing to be responsive enough to the problem.

In a statement responding to a critical letter sent last month by Myanmar civil society organizations, Facebook conceded: “We should have been faster and are working hard to improve our technology and tools to detect and prevent abusive, hateful or false content.”

And while the company has said it will double the headcount of staff who work on safety and security issues, to 20,000 by the end of this year, that’s still a tiny drop in the ocean of content shared daily over its social platforms — so it’s likely looking at how it can co-opt more of the 2.2BN+ humans who use its platform to help it with the hard problems of sifting good comments from bad: A nuanced task which can baffle AI — so, tl;dr, the more human signals you can get the better.

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a clickdriver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate…)

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one that European lawmakers, at least, look increasingly wise to.

Facebook’s habit of falling back on its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”
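To make the “basket of features” idea concrete: the approach Schroepfer sketches amounts to scoring each ad on several weak signals and routing anything that looks like a high-risk category, such as a financial ad, for extra scrutiny before it runs. Here is a minimal sketch of that kind of scoring in Python; the feature names, weights and threshold are hypothetical illustrations, not Facebook’s actual classifier.

# A minimal sketch of "basket of features" ad screening.
# All rules, weights and the threshold below are invented for illustration.
import re
from dataclasses import dataclass

@dataclass
class Ad:
    text: str
    landing_domain: str
    advertiser_age_days: int

FINANCIAL_TERMS = re.compile(
    r"\b(bitcoin|crypto|investment|returns|get rich|savings)\b", re.I)

def score_ad(ad: Ad, celebrity_names=("Martin Lewis",)) -> float:
    """Combine several weak signals into a single risk score."""
    score = 0.0
    if FINANCIAL_TERMS.search(ad.text):
        score += 0.4                      # reads like a financial ad
    if any(name.lower() in ad.text.lower() for name in celebrity_names):
        score += 0.3                      # name-drops a frequently impersonated figure
    if ad.advertiser_age_days < 30:
        score += 0.2                      # brand-new advertiser account
    if ad.landing_domain.endswith((".xyz", ".top")):
        score += 0.1                      # cheap throwaway domain
    return score

def needs_manual_review(ad: Ad, threshold: float = 0.5) -> bool:
    """Route high-scoring ads to human review before they can run."""
    return score_ad(ad) >= threshold

if __name__ == "__main__":
    ad = Ad(text="Martin Lewis reveals bitcoin investment with huge returns",
            landing_domain="quick-profit.xyz", advertiser_age_days=3)
    print(needs_manual_review(ad))  # True under these hypothetical weights

Nothing in that sketch is sophisticated, which is rather the point: flagging “probably a financial ad that name-drops Martin Lewis, placed by a week-old advertiser account” for human review is not an AI moonshot.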

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook has been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US-targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”
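Schroepfer’s search-space point is, at bottom, a compounding-probability argument: even a tiny per-comparison false match rate all but guarantees false positives once a face is checked against a gallery of millions. A back-of-the-envelope sketch in Python, where the one-in-a-million false match rate is an assumed figure for illustration rather than a measured Facebook number:

# Illustration only: how false matches compound with gallery size.
def prob_any_false_match(false_match_rate: float, gallery_size: int) -> float:
    """Probability of at least one false match when comparing one face
    against `gallery_size` enrolled faces, assuming independent comparisons."""
    return 1.0 - (1.0 - false_match_rate) ** gallery_size

for gallery in (1_000, 1_000_000, 2_000_000_000):   # up to Facebook-scale
    p = prob_any_false_match(1e-6, gallery)          # assumed 1-in-a-million per comparison
    print(f"{gallery:>13,} faces -> P(>=1 false match) ~ {p:.3f}")

At Facebook’s scale the probability of at least one false match approaches certainty, which is why he argues against fully automated decisions. That caveat, notably, does not make it into the pitch Facebook shows users.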

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.

Twitter doesn’t care that someone is building a bot army in Southeast Asia

Facebook’s lack of attention to how third parties are using its service to reach users ended up with CEO Mark Zuckerberg taking questions from Congressional committees. With that in mind, you’d think that others in the social media space might be more attentive than usual to potentially malicious actors on their platforms.

Twitter, however, is turning the other way and insisting all is normal in Southeast Asia, despite the emergence of thousands of bot-like accounts that have followed prominent users in the region en masse over the past month.

Scores of reporters and Twitter users with large followings — yours truly included — have noticed that swarms of accounts with generic names, no profile photo, no bio and no tweets have followed them over the past month.

These accounts might be evidence of a new ‘bot farm’ — the creation of large numbers of accounts for sale or on-demand use, something Twitter has cracked down on — or the groundwork for more nefarious activities; it’s too early to tell.

In what appears to be the first regional Twitter bot campaign, a flood of suspicious new followers has been reported by users across Southeast Asia and beyond, including Thailand, Myanmar, Cambodia, Hong Kong, China, Taiwan and Sri Lanka, among other places.

While it is true that the new accounts have done nothing yet, the fact that a large number of newly created accounts have popped up out of nowhere with the aim of following the region’s most influential voices should be enough to concern Twitter. Especially since this is Southeast Asia, a region where Facebook is beset with controversies — from its role in inciting ethnic hatred in Myanmar, to allegedly assisting censors in Vietnam, to users being jailed for violating lese majeste laws in Thailand, to aiding the election of controversial Philippines leader Duterte.

Then there are governments themselves. Vietnam has pledged to build a cyber army to combat “wrongful views,” while other regimes in Southeast Asia have clamped down on social media users.

Despite that, Twitter isn’t commenting.

The U.S. company issued a ‘no comment’ to TechCrunch when we asked for further information about this rush of new accounts, and what action Twitter will take.

A source close to the company suggested that the sudden accumulation of new followers is “a pretty standard sign-up, or onboarding, issue” that is down to new accounts selecting to follow the suggested accounts that Twitter proposes during the new account creation process.

Twitter is more than 10 years old, and given this is the first time such a wave has appeared in Southeast Asia, that explanation seems inadequate at face value. More generally, the dismissive approach seems particularly naive. Twitter should be looking into the issue more closely, even if, for now, the apparent bot army isn’t being put to use.

Facebook is considered to be the internet by many in Southeast Asia, and the social network is considerably more popular than Twitter in the region, but there remains a cause for concern here.

“If we’ve learned anything from the Facebook scandal, it’s that what can at first seem innocuous can be leveraged to quite insidious and invasive effect down the line,” Francis Wade, who recently published a book on violence in Myanmar, told the Financial Times this week. “That makes Twitter’s casual dismissal of concerns around this all the more unsettling.”

Facebook is again criticized for failing to prevent religious conflict in Myanmar

Today marks the start of Facebook CEO Mark Zuckerberg’s much-anticipated trip to Washington as he attends a hearing with the Senate, before moving on to a Congressional hearing tomorrow.

Away from the U.S. political capital, Zuckerberg is engaged in serious discussions about Myanmar with a group of six civil society organizations in the country who took umbrage at his claim that Facebook’s systems had prevented messages aimed at inciting violence between Buddhists and Muslims last September.

Following an open letter to Facebook on Friday that claimed the social network had relied on local sources and remains ill-equipped to handle hate speech, Zuckerberg himself stepped in to personally respond.

“Thank you for writing it and I apologize for not being sufficiently clear about the important role that your organizations play in helping us understand and respond to Myanmar-related issues, including the September incident you referred to,” Zuckerberg wrote.

“In making my remarks, my intention was to highlight how we’re building artificial intelligence to help us better identify abusive, hateful or false content even before it is flagged by our community,” he added.

Zuckerberg also claimed Facebook is working to implement new features, including an option to report inappropriate content inside Messenger and the addition of more Burmese language reviewers — two suggestions that the Myanmar-based group had raised.

The group has, however, fired back again to criticize Zuckerberg’s response, which it said is “nowhere near enough to ensure that Myanmar users are provided with the same standards of care as users in the U.S. or Europe.”

[Image: Young men browse Facebook on their smartphones in a street in Yangon. Photo: Nicolas Asfouri/AFP/Getty Images]

In particular, the six organizations are asking Facebook and Zuckerberg for information about its enforcement efforts, including the number of abuse reports it has received, how many of the reported posts have been removed, how quickly that has been done, and its progress on banning accounts.

In addition, the group asked for clarity on the number of Burmese content reviewers on staff, the exact mechanisms that are in place for detecting hate speech, and an update on what action Facebook has taken following its last meeting with the group in December.

“When things go wrong in Myanmar, the consequences can be really serious — potentially disastrous,” it added.

The Cambridge Analytica story has become mainstream news in the U.S. and other parts of the world, yet less is known of Facebook’s role in spreading religious hatred in Myanmar, where the government stands accused of ethnic cleansing following its treatment of the minority Muslim Rohingya population.

A recent UN Fact-Finding Mission concluded that social media has played a “determining role” in the crisis, with Facebook the chief actor.

“We know that the ultranationalist Buddhists have their own [Facebook pages] and really [are] inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities. I’m afraid that Facebook has now turned into a beast, [instead of] what it was originally intended to be used [for],” the UN’s Yanghee Lee said to media.

Close to 30 million of Myanmar’s 50 million people are said to use the social network, making it a hugely effective way to reach large audiences.

“There’s this notion to many people [in Myanmar] that Facebook is the internet,” Jes Petersen, CEO of Phandeeyar — one of the companies involved in the correspondence with Zuckerberg — told TechCrunch in an interview last week.

Despite that huge popularity and high levels of abuse that Facebook itself has acknowledged, the social network does not have an office in Myanmar. In fact, its Burmese language reviewers are said to be stationed in Ireland while its policy team is located in Australia.

That doesn’t seem like the right combination, but it is also unclear whether Facebook is prepared to make changes to focus on user safety in Myanmar. The company declined to say whether it had plans to open an office on the ground when we asked last week.

Here’s Zuckerberg’s letter in full:

Dear Htaike Htaike, Jes, Victoire, Phyu Phyu and Thant,

I wanted to personally respond to your open letter. Thank you for writing it and I apologize for not being sufficiently clear about the important role that your organizations play in helping us understand and respond to Myanmar-related issues, including the September incident you referred to.

In making my remarks, my intention was to highlight how we’re building artificial intelligence to help us better identify abusive, hateful or false content even before it is flagged by our community.

These improvements in technology and tools are the kinds of solutions that your organizations have called on us to implement and we are committed to doing even more. For example, we are rolling out improvements to our reporting mechanism in Messenger to make it easier to find and simpler for people to report conversations.

In addition to improving our technology and tools, we have added dozens more Burmese language reviewers to handle reports from users across all our services. We have also increased the number of people across the company on Myanmar-related issues and we now have a special product team working to better understand the specific local challenges and build the right tools to help keep people there safe.

There are several other improvements we have made or are making, and I have directed my teams to ensure we are doing all we can to get your feedback and keep you informed.

We are grateful for your support as we map out our ongoing work in Myanmar, and we are committed to working with you to find more ways to be responsive to these important issues.

Mark

And the group’s reply:

Dear Mark,

Thank you for responding to our letter from your personal email account. It means a lot.

We also appreciate your reiteration of the steps Facebook has taken and intends to take to improve your performance in Myanmar.

This doesn’t change our core belief that your proposed improvements are nowhere near enough to ensure that Myanmar users are provided with the same standards of care as users in the US or Europe.

When things go wrong in Myanmar, the consequences can be really serious – potentially disastrous. You have yourself publicly acknowledged the risk of the platform being abused towards real harm.

Like many discussions we have had with your policy team previously, your email focuses on inputs. We care about performance, progress and positive outcomes.

In the spirit of transparency, we would greatly appreciate if you could provide us with the following indicators, starting with the month of March 2018:

  • How many reports of abuse have you received?
  • What % of reported abuses did your team ultimately remove due to violations of the community standards?
  • How many accounts were behind flagging the reports received?
  • What was the average time it took for your review team to provide a final response to users of the reports they have raised?
  • What % of the reports received took more than 48 hours to receive a review?
  • Do you have a target for review times? Data from our own monitoring suggests that you might have an internal standard for review – with most reported posts being reviewed shortly after the 48 hrs mark. Is this accurate?
  • How many fake accounts did you identify and remove?
  • How many accounts did you subject to a temporary ban? How many did you ban from the platform?

Improved performance comes with investments and we would also like to ask for more clarifications around these. Most importantly, we would like to know:

  • How many Myanmar speaking reviewers did you have, in total, as of March 2018? How many do you expect to have by the end of the year? We are specifically interested in reviewers working on the Facebook service and looking for full-time equivalents figure.
  • What mechanisms do you have in place for stopping repeat offenders in Myanmar? We know for a fact that fake accounts remain a key issue and that individuals who were found to violate the community standards on a number of occasions continue to have a presence on the platform.
  • What steps have you taken to date to address the duplicate posts issue we raised in the briefing we provided your team in December 2017?

We’re enclosing our December briefing for your reference, as it further elaborates on the challenges we have been trying to work through with Facebook.

Best,