
US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be is still to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time either sets foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s inquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Twitter bags deep learning talent behind London startup, Fabula AI

Twitter has just announced it has picked up London-based Fabula AI. The deep learning startup has been developing technology to try to identify online disinformation by looking at patterns in how fake stuff vs genuine news spreads online — making it an obvious fit for the rumor-riled social network.

Social media giants remain under increasing political pressure to get a handle on online disinformation to ensure that manipulative messages don’t, for example, get a free pass to fiddle with democratic processes.

Twitter says the acquisition of Fabula will help it build out its internal machine learning capabilities — writing that the UK startup’s “world-class team of machine learning researchers” will feed an internal research group it’s building out, led by Sandeep Pandey, its head of ML/AI engineering.

This research group will focus on “a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning” — now with Fabula co-founder and chief scientist, Michael Bronstein, as a leading light within it.

Bronstein is chair in machine learning & pattern recognition at Imperial College London — a position he will retain while leading graph deep learning research at Twitter.

Fabula’s chief technologist, Federico Monti — another co-founder, who began the collaboration that underpins the patented technology with Bronstein while at the University of Lugano, Switzerland — is also joining Twitter.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities. Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service,” said Bronstein in a statement.

“This strategic investment in graph deep learning research, technology and talent will be a key driver as we work to help people feel safe on Twitter and help them see relevant information,” Twitter added. “Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience.”

Terms of the acquisition have not been disclosed.

We covered Fabula’s technology and business plan back in February when it announced its “new class” of machine learning algorithms for detecting what it colloquially badged ‘fake news’.

Its approach to the problem of online disinformation looks at how it spreads on social networks — and therefore who is spreading it — rather than focusing on the content itself, as some other approaches do.

Fabula has patented algorithms that use the emergent field of “Geometric Deep Learning” to detect online disinformation — where the datasets in question are so large and complex that traditional machine learning techniques struggle to find purchase. Which does really sound like a patent designed with big tech in mind.

Fabula likens the way ‘fake news’ spreads on social media, vs real news, to “a very simplified model of how a disease spreads on the network”.

One advantage of the approach is that it looks to be language agnostic (at least barring any cultural differences which might also impact how fake news spreads).
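Fabula hasn’t published its patented algorithms, but a toy sketch helps make the spread-based idea concrete. The Python below is entirely our illustration (the function names and features are invented, and this is plain feature engineering, not Fabula’s geometric deep learning): it scores a story’s share cascade on simple structural properties and trains an off-the-shelf classifier against known-junk labels.

```python
# Toy sketch: classify stories by *how* they spread, not what they say.
# Our illustration only; Fabula's actual system uses graph deep learning.
import networkx as nx
from sklearn.linear_model import LogisticRegression

def cascade_features(g: nx.DiGraph, root):
    """Summarize a share cascade with crude structural features."""
    depths = nx.single_source_shortest_path_length(g, root)
    max_depth = max(depths.values())   # how long the reshare chains get
    breadth = g.out_degree(root)       # direct reshares of the original post
    size = g.number_of_nodes()         # total accounts that shared the story
    return [max_depth, breadth, size / (max_depth + 1)]  # wide vs deep spread

def train_classifier(cascades, labels):
    """cascades: (graph, root) pairs; labels: 1 = known junk source, 0 = not."""
    X = [cascade_features(g, root) for g, root in cascades]
    return LogisticRegression().fit(X, labels)
```

The pitch for the real thing is precisely that graph deep learning can pick up far subtler propagation patterns than hand-crafted features like these.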

Back in February the startup told us it was aiming to build an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency, just focused on content not cash.

It’s not clear from Twitter’s blog post whether the core technologies it will be acquiring with Fabula will now stay locked up within its internal research department — or be shared more widely, to help other platforms grappling with online disinformation challenges.

The startup had intended to offer an API for platforms and publishers later this year.

But of course building a platform is a major undertaking. And, in the meantime, Twitter — with its pressing need to better understand the stuff its network spreads — came calling.

A source close to the matter told us that Fabula’s founders decided to sell to Twitter, rather than push for momentum behind a vision of a decentralized, open platform, because the exit offered them more opportunity to have “real and deep impact, at scale”.

Though it is also still not certain what Twitter will end up doing with the technology it’s acquiring. It at least remains possible that Twitter could choose to open it up across platforms.

“That’ll be for the team to figure out with Twitter down the line,” our source added.

A spokesman for Twitter did not respond directly when we asked about its plans for the patented technology but he told us: “There’s more to come on how we will integrate Fabula’s technology where it makes sense to strengthen our systems and operations in the coming months.  It will likely take us some time to be able to integrate their graph deep learning algorithms into our ML platform. We’re bringing Fabula in for the team, tech and mission, which are all aligned with our top priority: Health.”

Facebook found hosting masses of far right EU disinformation networks

A multi-month hunt for political disinformation spreading on Facebook in Europe suggests there are concerted efforts to use the platform to spread bogus far right propaganda to millions of voters ahead of a key EU vote which kicks off tomorrow.

Following the independent investigation, Facebook has taken down a total of 77 pages and 230 accounts from Germany, UK, France, Italy, Spain and Poland — which had been followed by an estimated 32 million people and generated 67 million ‘interactions’ (i.e. comments, likes, shares) in the last three months alone.

The bogus, mainly far-right disinformation networks were not identified by Facebook — but had been reported to it by campaign group Avaaz — which says the fake pages had more Facebook followers and interactions than all the main EU far right and anti-EU parties combined.

“The results are overwhelming: the disinformation networks upon which Facebook acted had more interactions (13 million) in the past three months than the main party pages of the League, AfD, VOX, Brexit Party, Rassemblement National and PiS combined (9 million),” it writes in a new report.

“Although interactions is the figure that best illustrates the impact and reach of these networks, comparing the number of followers of the networks taken down reveals an even clearer image. The Facebook networks taken down had almost three times (5.9 million) the number of followers as AfD, VOX, Brexit Party, Rassemblement National and PiS’s main Facebook pages combined (2 million).”

Avaaz has previously found and announced far right disinformation networks operating in Spain, Italy and Poland — and a spokesman confirmed to us it’s re-reporting some of its findings now (such as the ~30 pages and groups in Spain that had racked up 1.7M followers and 7.4M interactions, which we covered last month) to highlight an overall total for the investigation.

“Our report contains new information for France, United Kingdom and Germany,” the spokesman added.

Examples of politically charged disinformation being spread via Facebook by the bogus networks it found include a fake viral video, seen by 10 million people, that supposedly shows migrants in Italy destroying a police car (the footage was actually from a movie, and Avaaz notes this fake had been “debunked years ago”); a story in Poland claiming that migrant taxi drivers rape European women, including a fake image; and fake news about a child cancer center being closed down by Catalan separatists in Spain.

There’s lots more country-specific detail in its full report.

In all, Avaaz reported more than 500 suspicious pages and groups to Facebook related to the three-month investigation of Facebook disinformation networks in Europe. Though Facebook only took down a subset of the far right muck-spreaders — around 15% of the suspicious pages reported to it.

“The networks were either spreading disinformation or using tactics to amplify their mainly anti-immigration, anti-EU, or racist content, in a way that appears to breach Facebook’s own policies,” Avaaz writes of what it found.

It estimates that content posted by all the suspicious pages it reported had been viewed some 533 million times over the pre-election period. Albeit, there’s no way to know whether or not everything it judged suspicious actually was.

In a statement responding to Avaaz’s findings, Facebook told us:

We thank Avaaz for sharing their research for us to investigate. As we have said, we are focused on protecting the integrity of elections across the European Union and around the world. We have removed a number of fake and duplicate accounts that were violating our authenticity policies, as well as multiple Pages for name change and other violations. We also took action against some additional Pages that repeatedly posted misinformation. We will take further action if we find additional violations.

The company did not respond to our question asking why it failed to unearth this political disinformation itself.

Ahead of the EU parliament vote, which begins tomorrow, Facebook invited a select group of journalists to tour a new Dublin-based election security ‘war room’ — where it talked about a “five pillars of countering disinformation” strategy to prevent cynical attempts to manipulate voters’ views.

But as Avaaz’s investigation shows there’s plenty of political disinformation flying by entirely unchecked.

One major ongoing issue where political disinformation and Facebook’s platform are concerned is that how the company enforces its own rules remains entirely opaque.

We don’t get to see all the detail — so can’t judge and assess all its decisions. Yet Facebook has been known to shut down swathes of accounts deemed fake ahead of elections, while apparently failing entirely to find other fakes (such as in this case).

It’s a situation that does not look compatible with the continued functioning of democracy given Facebook’s massive reach and power to influence.

Nor is the company under an obligation to report every fake account it confirms. Instead, Facebook gets to control the timing and flow of any official announcements it chooses to make about “coordinated inauthentic behaviour” — dropping these self-selected disclosures as and when it sees fit, and making them sound as routine as possible by cloaking them in its standard, dryly worded newspeak.

Back in January, Facebook COO Sheryl Sandberg admitted publicly that the company is blocking more than 1M fake accounts every day. If Facebook was reporting every fake it finds it would therefore need to do so via a real-time dashboard — not sporadic newsroom blog posts that inherently play down the scale of what is clearly embedded into its platform, and may be so massive and ongoing that it’s not really possible to know where Facebook stops and ‘Fakebook’ starts.

The suspicious behaviours that Avaaz attached to the pages and groups it found that appeared to be in breach of Facebook’s stated rules include the use of fake accounts, spamming, misleading page name changes and suspected coordinated inauthentic behavior.

When Avaaz previously reported the Spanish far right networks, Facebook told us it had removed “a number” of pages violating its “authenticity policies”, including one page for name change violations — but claimed “we aren’t removing accounts or Pages for coordinated inauthentic behavior”.

So again, it’s worth emphasizing that Facebook gets to define what is and isn’t acceptable on its platform — including creating terms that seek to normalize its own inherently dysfunctional ‘rules’ and their ‘enforcement’.

Such as by creating terms like “coordinated inauthentic behavior”, which sets a threshold of Facebook’s own choosing for what it will and won’t judge to be political disinformation. It’s inherently self-serving.

Given that Facebook only acted on a small proportion of what Avaaz found and reported overall, we might posit that the company is setting a very high bar for acting against suspicious activity. And that plenty of election fiddling is free flowing under its feeble radar. (When we previously asked Facebook whether it was disputing Avaaz’s finding of coordinated inauthentic behaviour vis-a-vis the far right disinformation networks it reported in Spain the company did not respond to the question.)

Much of the publicity around Facebook’s self-styled “election security” efforts has also focused on how it’s enforcing new disclosure rules around political ads. But again political disinformation masquerading as organic content continues being spread across its platform — where it’s being shown to be racking up millions of interactions with people’s brains and eyeballs.

Plus, as we reported yesterday, research conducted by the Oxford Internet Institute into pre-EU election content sharing on Facebook has found that sources of disinformation-spreading ‘junk news’ generate far greater engagement on its platform than professional journalism.

So while Facebook’s platform is also clearly full of real people sharing actual news and views, the fake BS that Avaaz’s findings imply is also flooding the platform gets spread around more, on a per unit basis. And it’s democracy that suffers — because vote manipulators can pass off manipulative propaganda and hate speech as bona fide news and views, as a consequence of Facebook publishing the fake stuff alongside genuine opinions and professional journalism.

Facebook does not have algorithms that can perfectly distinguish one from the other, and has suggested it never will.

The bottom line is that even if Facebook dedicates far more resource (human and AI) to rooting out ‘election interference’ the wider problem is that a commercial entity which benefits from engagement on an ad-funded platform is also the referee setting the rules.

Indeed, the whole loud Facebook publicity effort around “election security” looks like a cynical attempt to distract the rest of us from how broken its rules are. Or, in other words, a platform that accelerates propaganda is also seeking to manipulate and skew our views.

Facebook still a great place to amplify pre-election junk news, EU study finds

A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of regional elections this month has found individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. Albeit, so far these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

Although the Polish language sphere was an exception — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: the researchers do note that many more users interact with mainstream content overall via its platform, since mainstream publishers have a higher following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall”. But they also point out that junk news still packs a greater per-story punch — likely owing to the use of tactics such as clickbait, emotive language, and outragemongering in headlines, which continues to be shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can get out ahead of news events as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders — a technique they say allowed them to successfully label nearly 91% of all links shared during the study period.
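To give a flavour of what that pipeline involves, here’s a minimal sketch of the mechanical step that precedes the human coding: reducing collected tweets to source domains and flagging those shared five or more times. It’s our illustration, not the OII’s code, and the tweet structure (an “urls” field of expanded links) is an assumption.

```python
# Tally source domains from collected tweets; our illustrative sketch only.
from collections import Counter
from urllib.parse import urlparse

def sources_to_code(tweets, threshold=5):
    """tweets: dicts assumed to carry expanded links under an 'urls' key."""
    counts = Counter(
        urlparse(url).netloc.lower().removeprefix("www.")
        for tweet in tweets
        for url in tweet.get("urls", [])
    )
    # Only sources shared `threshold`+ times go to the multi-lingual coders
    return sorted(domain for domain, n in counts.items() if n >= threshold)
```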

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also because of a lower level of political awareness attached to individuals involved in EU institutions and politics, and the multi-nation-state nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that, just as it aids robustness in biological life, diversity appears to bolster democratic resilience vs political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc does require greater co-ordination and resource vs trying to meddle in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. While any increase in engagement is a win for Facebook’s ad business, so er…

Twitter launches new search features to stop the spread of misinformation about vaccines

As measles outbreaks in the United States and other countries continue to get worse, Twitter is introducing new search tools meant to help users find credible resources about vaccines. It will also stop auto-suggesting search terms that would lead users to misinformation about vaccines.

In a blog post, Twitter vice president of trust and safety Del Harvey wrote: “At Twitter, we understand the importance of vaccines in preventing illness and disease and recognize the role that Twitter plays in disseminating important public health information. We think it’s important to help people find reliable information that enhances their health and well-being.”

When users search for keywords related to vaccines, they will see a prompt that directs them to resources from Twitter’s information partners. In the U.S., this is vaccines.gov, a website by the Department of Health and Human Services. A pinned tweet from one of Twitter’s partners will also appear.

One of Twitter’s new tools to stop the spread of vaccine misinformation

In addition to the U.S., the vaccine information tools will also appear on Twitter’s iOS and Android apps and its mobile site in Canada, the United Kingdom, Brazil, Korea, Japan, Indonesia, Singapore and Spanish-speaking Latin American countries.

Harvey wrote that Twitter’s vaccine information tools are similar to ones it launched for suicide and self-harm prevention last year. The company plans to launch similar features for other public health issues over the coming months, she added.

Earlier this week, the Centers for Disease Control and Prevention said measles cases in the U.S. had increased to 839. Cases have been reported in 23 states this year, with the majority — or almost 700 — in New York.

Social media platforms have been criticized for not doing more to prevent the spread of misinformation about vaccines and, as measles cases began to rise, started taking measures. For example, YouTube announced earlier this year that it is demonetizing all anti-vaccine videos, while Facebook began downranking anti-vaccine content on its News Feed and hiding it on Instagram.

When it comes to elections, Facebook moves slow, may still break things

This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among the invitees to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports, which Facebook will be hoping contain its “five pillars of countering disinformation” talking points, we’ve compiled a rundown of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform — even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up on total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add; its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.
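For context on what researchers are working with, querying the archive programmatically looks roughly like the sketch below, using Python’s requests library against the ads_archive Graph API endpoint. The parameter and field names follow Facebook’s developer documentation at the time, but treat them (and the placeholder token) as assumptions that may change; the criticism is aimed at the rate limits and completeness of the results, not the mechanics.

```python
# Minimal sketch of an Ad Library API query (endpoint/fields per Facebook's
# 2019 docs; treat as assumptions). Requires an identity-verified dev token.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

resp = requests.get(
    "https://graph.facebook.com/v3.3/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['IE']",
        "fields": "page_name,funding_entity,spend,impressions",
        "access_token": ACCESS_TOKEN,
    },
)
for ad in resp.json().get("data", []):
    # spend and impressions are returned as ranges, not exact figures
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Even with working queries, the rate limits cap how much of the archive can be pulled systematically, which is the crux of researchers’ complaints.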

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making life hard for civil society groups and public interest researchers to study the flow of political messaging on its platform to try to quantify democratic impacts, and offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label, for example.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per ad breakdown of impressions; genders, ages, and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had still been able to rack up thousands — and even tens of thousands — of impressions in the time they were still active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where ads are being delivered if they relate to the European Parliamentary elections, as a step to try and prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround is the question of what is a political ad? And what is an issue ad?

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how is Facebook defining political and issue ads across Europe?

While political ads might seem fairly easy to categorize — assuming they’re attached to registered political parties and candidates — issues are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with EU barometer, YouGov and other third parties to develop an initial list of key issues — examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics — that it will “refine… over time.”

Again specifics on when and how that will be refined are not clear. Yet ads that Facebook does not deem political/issue ads will slip right under its radar. They won’t be included in the Ad Library; they won’t be searchable; but they will be able to influence Facebook users under the perfect cover of its commercial ad platform — as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules — i.e. that require those campaigning via Facebook ads to have a registered office in the country where the ad is running — run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide — leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed it “social media’s dark web” — claiming the app had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions — which is of course just a tiny, tiny fraction of the stuff being shared in WhatsApp groups and chats. Which makes this app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, with some 384 million MAUs in Europe. That’s more users than in the US (243M MAUs). Though Europe is Facebook’s second-biggest market in terms of revenues after the US. Last quarter, it pulled in $3.65BN in sales for Facebook (versus $7.3BN for the US) out of $15BN overall.

Apart from any moral or legal pressure Facebook might face to run a more responsible platform when it comes to supporting democratic processes, these numbers underscore the business imperative it has to get this sorted out in Europe in a better way.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.

Twitter to offer report option for misleading election tweets

Twitter is adding a dedicated report option that enables users to tell it about misleading tweets related to voting — starting with elections taking place in India and the European Union.

From tomorrow users in India can report tweets they believe are trying to mislead voters — such as disinformation related to the date or location of polling stations; or fake claims about identity requirements for being able to vote — by tapping on the arrow menu of the suspicious tweet and selecting the ‘report tweet’ option and then choosing: ‘It’s misleading about voting’.

Twitter says the tool will go live for the Indian Lok Sabha elections from tomorrow, and will launch in all European Union member states on April 29 — ahead of elections for the EU parliament next month.

The ‘misleading about voting’ option will persist in the list of available choices for reporting tweets for seven days after each election ends, Twitter said in a blog post announcing the feature.

It also said it intends for the vote-focused feature to be rolled out to “other elections globally throughout the rest of the year”, without providing further detail on which elections and markets it will prioritize for getting the tool.

“Our teams have been trained and we recently enhanced our appeals process in the event that we make the wrong call,” Twitter added.

In recent months the European Commission has been ramping up pressure on tech platforms to scrub disinformation ahead of elections to the EU parliament — issuing monthly reports on progress, or, well, the lack of it.

This follows a Commission initiative last year which saw major tech and ad platforms — including Facebook, Google and Twitter — sign up to a voluntary Code of Practice on disinformation, committing themselves to take some non-prescribed actions to disrupt the ad revenues of disinformation agents and make political ads more transparent on their platforms.

Another strand of the Code looks to have directly contributed to the development of Twitter’s new ‘misleading about voting’ report option — with signatories committing to:

  • Empower consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;

In the latest progress report on the Code, which was published by the Commission yesterday but covers steps taken by the platforms in March 2019, it noted some progress made — but said it’s still not enough.

“Further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation,” EC commissioners warned in a joint statement.

In the case of Twitter the company was commended for having made political ad libraries publicly accessible but criticized (along with Google) for not doing more to improve transparency around issue-based advertising.

“It is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections,” the Commission said. 

It also reported that Twitter had provided figures on actions undertaken against spam and fake accounts but had failed to explain how these actions relate to activity in the EU.

“Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area,” it also noted.

The EC says it will assess the Code’s initial 12-month period by the end of 2019 — and take a view on whether it needs to step in and propose regulation to control online disinformation. (Something which some individual EU Member States are already doing, albeit with a focus on hate speech and/or online safety.)

Facebook has quietly removed three bogus far right networks in Spain ahead of Sunday’s elections

Facebook has quietly removed three far right networks that were engaged in coordinated inauthentic behavior intended to spread politically divisive content in Spain ahead of a general election in the country which takes place on Sunday.

The networks had a total reach of almost 1.7M followers and had generated close to 7.4M interactions in the past three months alone, according to analysis by the independent group that identified the bogus activity on Facebook’s platform.

The fake far right activity was apparently not picked up by Facebook.

Instead activist not-for-profit Avaaz unearthed the inauthentic content, and presented its findings to the social networking giant earlier this month, on April 12. In a press release issued today the campaigning organization said Facebook has now removed the fakes — apparently vindicating its findings.

“Facebook did a great job in acting fast, but these networks are likely just the tip of the disinformation iceberg — and if Facebook doesn’t scale up, such operations could sink democracy across the continent,” said Christoph Schott, campaign director at Avaaz, in a statement.

“This is how hate goes viral. A bunch of extremists use fake and duplicate accounts to create entire networks to fake public support for their divisive agenda. It’s how voters were misled in the U.S., and it happened again in Spain,” he added.

We reached out to Facebook for comment but at the time of writing the company had not responded to the request or to several questions we also put to it.

Avaaz said the networks it found comprised around thirty pages and groups spreading far right propaganda — including anti-immigrant, anti-LGBT, anti-feminist and anti-Islam content.

Examples of the inauthentic content can be viewed in Avaaz’s executive summary of the report. They include fake data about foreigners committing the majority of rapes in Spain; fake news about Catalonia’s pro independence leader; and various posts targeting leftwing political party Podemos — including an image superimposing the head of its leader onto the body of Hitler performing a nazi salute.

One of the networks — which Avaaz calls Unidad Nacional Española (after the most popular page in the network) — was apparently created and co-ordinated by an individual called Javier Ramón Capdevila Grau, who operated multiple personal Facebook accounts, also in contravention of Facebook’s community standards.

This network, which had a reach of more than 1.2M followers, comprised at least 10 pages that Avaaz identified as working in a coordinated fashion to spread “politically divisive content”.

Its report details how word-for-word identical posts were published across multiple Facebook pages and groups in the network just minutes apart, with nothing to indicate they weren’t original postings on each page. 

Here’s an example post it found copy-pasted across the bogus network:

Translated, the posted text reads: ‘In Spain, if a criminal enters your house without your permission the only thing you can do is hide, since if you touch a hair on his head or prevent him from being able to rob you you’ll spend more time in prison than him.’

Avaaz found another, smaller network targeting leftwing views, called Todos Contra Podemos, which included seven pages and groups with around 114,000 followers — again apparently run by a single individual (in this case using the name Antonio Leal Felix Aguilar) operating multiple Facebook profiles.

A third network, Lucha por España​, comprised 12 pages and groups with around 378,000 followers.

Avaaz said it was unable to identify the individual/s behind that network. 

While Facebook has not publicized the removals of these particular political disinformation networks, despite its now steady habit of issuing PR when it finds and removes ‘coordinated inauthentic behavior‘ on its platform (though of course there’s no way to be sure it’s disclosing everything it finds), test searches for the main pages identified by Avaaz returned either no results or what appear to be other unrelated Facebook pages using the same name.

Since the 2016 U.S. presidential election was (infamously) targeted by divisive Kremlin propaganda seeded and amplified via social media, Facebook has launched what it markets as “election security” initiatives in a handful of countries around the world — such as searchable ad archives and political ad authentication and/or disclosure requirements.

However, these efforts continue to face criticism for being patchy and piecemeal — and, even in countries where they have been applied, weak and trivially easy to work around.

Its political ad transparency measures do not always apply to issue-based ads (and/or content), for instance — which punches a democracy-denting hole in the self-styled ‘guardrails’ by allowing divisive propaganda to continue to flow.

In Spain, Facebook has not even launched a system of political ad transparency, let alone systems addressing issue-based political ads — despite the country’s looming general election on April 28, its third in four years. (Since 2015, elections in Spain have yielded heavily fragmented parliaments — making yet another imminent election far from unlikely.)

In February, when we asked Facebook whether it would commit to launching ad transparency tools in Spain before the April 28 election, it offered no such commitment — saying instead that it sets up internal cross-functional teams for elections in every market to assess the biggest risks, and makes contact with the relevant electoral commission and other key stakeholders.

Again, it’s not possible for outsiders to assess the efficacy of such internal efforts. But Avaaz’s findings suggest Facebook’s risk assessment of Spain’s general election has had a pretty hefty blind spot when it comes to proactively picking up malicious attempts to inflate far-right propaganda.

Yet the warning signs were there: a regional election in Andalusia late last year returned a shock result, with the tiny (and previously unelected) far-right party Vox gaining around 10 percent of the vote to take 12 seats.

Avaaz’s findings vis-à-vis the three bogus far-right networks suggest that, as well as seeking to slur leftwing/liberal political views and parties, some of the inauthentic pages were actively trying to amplify Vox — with one bogus page, Orgullo Nacional España, sharing a pro-Vox Facebook page 155 times in a three-month period.

Avaaz used the Facebook-owned social media monitoring tool CrowdTangle to get a read on how much impact the fake networks might have had.

It found that while the three inauthentic far right Facebook networks produced just 3.7% of the posts in its Spanish elections dataset, they garnered an impressive 12.6% of total engagement over the three month period it pulled data on (between January 5 and April 8) — despite consisting of just 27 Facebook pages and groups out of a total of 910 in the full dataset. 

Or, to put it another way, a handful of bad actors managed to generate enough divisive, politically charged noise that more than one in ten of those engaging with Spanish election chatter on Facebook, per its dataset, at the very least took note.
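The disproportion is simple arithmetic. Here is a quick back-of-the-envelope sketch using only the figures reported above; the variable names and the ratio calculation are ours, not Avaaz's.

```python
# Back-of-the-envelope math on Avaaz's reported CrowdTangle figures.
total_pages = 910          # pages/groups in Avaaz's Spanish elections dataset
network_pages = 27         # pages/groups in the three inauthentic networks
posts_share = 0.037        # the networks produced 3.7% of posts...
engagement_share = 0.126   # ...but drew 12.6% of total engagement

pages_share = network_pages / total_pages        # ~3.0% of pages tracked
amplification = engagement_share / posts_share   # ~3.4x engagement per post

print(f"Share of pages: {pages_share:.1%}")
print(f"Engagement per post vs dataset average: {amplification:.1f}x")
```

In other words, post for post, the inauthentic networks pulled roughly three and a half times the engagement of the dataset average.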

It’s a finding that neatly illustrates that the notion of divisive content being more clickable is not at all crazy — whatever the founder of Facebook once said.

WhatsApp adds a new privacy setting for groups in another effort to clamp down on fake news

WhatsApp today announced another protection for users in an effort to clamp down on the spread of fake news and misinformation. Through a new feature, users can control who has permission to add them to groups. The company says this will “help to limit abuse” and keep people’s phone numbers private. Related to this, the app will also introduce an invite system for those who enable the additional protections, allowing users to vet any incoming group invites before deciding to join.

The privacy setting arrives only a day after the Facebook-owned messaging app launched a fact-checking tipline in India, ahead of elections in the country.

Like other social platforms, WhatsApp has played a role in the spread of fake news. In Brazil, for example, the platform was flooded with falsehoods, conspiracy theories, and other misleading propaganda.

This sort of disinformation doesn’t always arrive via family and friends; it can also come through group chats — in some cases, chats that users were added to against their will.

This is particularly true in one of WhatsApp’s biggest markets, India.

As The WSJ recently reported, India’s political parties often use the app to blast messages to groups organized by caste, income level and religion. The number of hoaxes has skyrocketed even as WhatsApp parent Facebook has clamped down on fake news: reports of hoaxes, which last year numbered in the dozens per day, have since grown to hundreds per day. WhatsApp is now removing around 2 million suspicious accounts globally per month, the report said.

Putting users in control of how they’re added to groups could help some, but only if users are inspired to dig into the settings and make the change for themselves.

Ideally, this level of protection would be enabled by default — not left as an optional choice.

To enable the new protection, go to Settings, then tap Account > Privacy > Groups and choose one of three options governing who can add you to a group: “Nobody,” “My Contacts,” or “Everybody.” “Nobody” means you’ll have to approve every group you’re invited to join, WhatsApp says, while “My Contacts” means only users you already know can add you to groups.

If you change the setting to either “Nobody” or “My Contacts,” people inviting you to groups will instead be prompted to send a private invite through an individual chat. That way, you still have the option of joining a group even if the person inviting you isn’t one of your regular WhatsApp contacts. The invite will expire after three days if you don’t accept it, however.
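For illustration, the flow WhatsApp describes could be modeled roughly as below. This is purely an assumed reconstruction of the announced behavior, not WhatsApp's actual code; the function names are ours, and only the three options and the three-day expiry mirror the description above.

```python
# Rough, assumed model of WhatsApp's announced group-add flow; illustrative only.
from datetime import datetime, timedelta

INVITE_TTL = timedelta(days=3)  # private invites expire after three days

def can_add_directly(setting: str, inviter_is_contact: bool) -> bool:
    """True if the inviter may add the user to a group without approval."""
    if setting == "Everybody":
        return True
    if setting == "My Contacts":
        return inviter_is_contact
    return False  # "Nobody": every group join requires the user's approval

def handle_group_add(setting: str, inviter_is_contact: bool) -> dict:
    if can_add_directly(setting, inviter_is_contact):
        return {"action": "added"}
    # Fall back to a private invite sent via an individual chat
    return {"action": "invite_sent", "expires": datetime.now() + INVITE_TTL}

print(handle_group_add("My Contacts", inviter_is_contact=False))
```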

This is only one of several changes to WhatsApp made in recent months focused on reducing the spread of misinformation and fake news. The company last summer began to limit message forwarding, and marked forwarded messages with a label. It was also spotted testing a new spam message warning system.

WhatsApp says the new setting rolls out to some users today and will reach the rest of its audience in the weeks ahead. The most recent version of the app is required.


WhatsApp hits India’s Jio feature phones amidst fake news violence

False rumors forwarded on WhatsApp have led angry mobs to murder strangers in India, but the Facebook-owned chat app is still racing to add users in the country. Today it launched a feature phone version of WhatsApp for KaiOS on the JioPhone 1 and 2 — devices designed to support 22 of India’s many native languages. Users will be able to send text, photos, videos and voice messages with end-to-end encryption, though the app will lack advanced features like augmented reality and Snapchat Stories-style Status updates.

WhatsApp was supposed to launch alongside the JioPhone 2, which debuted last month for roughly $41, but was delayed. Some 40 million JioPhone 1s had already been sold, and the device line is estimated to control 27 percent of the Indian mobile phone market and 47 percent of the country’s feature phone market. Coming to the JioPhone should open up a big new growth vector for WhatsApp as it strives to grow its 1.5 billion user count toward the 2 billion milestone.

Meanwhile, the launch could make the Reliance-owned Jio mobile network more appealing. It could also strengthen the KaiOS operating system, developed by a San Diego startup of the same name that recently took a $22 million investment from Google. WhatsApp rolls out on the JioPhone AppStore today and should be available to everyone by September 20th; we’ve asked whether it will come to other KaiOS devices made by Nokia and Alcatel.

WhatsApp has scrambled to safeguard its app after numerous reports that rumors circulated on the platform about gangs and child abductors had led angry mobs to kill people in the streets. Five nomads were recently beaten to death in a rural village called Rainpada after residents watched inaccurate videos, forwarded through WhatsApp, about kidnappers supposedly rolling through the area, BuzzFeed reports.

WhatsApp recently limited how many people a message can be forwarded to, and began a radio PSA campaign in Hindi on 46 Indian stations warning people to verify things they hear on WhatsApp before acting on them. But it’s clear that parent company Facebook still sees spreading WhatsApp as part of its mission to bring the world closer together, even when that comes at a cost.

Jio’s “transition” phones, which offer a few third-party apps but not full-fledged smartphone capabilities, along with its affordable mobile data, have significantly reduced the cost and friction of being online in India. But with that access come newfound dangers, especially when it isn’t paired with the news literacy and digital skills education that could help users spot false information before it sparks violence.

Increasingly, the tech world is learning that connecting people to the Internet also means connecting them to the worst elements of humanity. That will necessitate a new wave of pessimists and cynics as product managers, people who can predict and thwart the ways software will be abused, rather than idealists blindly building tools that can be weaponized.