
US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be is yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before parliament the next time either of them sets foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so…), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumours’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.

The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.

Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.

The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.

Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.

In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.

The Trump administration isn’t the only voice in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.

Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.

The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.

Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.

The most significant language from the decision from the circuit court seems to be this:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”

As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.

“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”

That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.

As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.

Now, the lawsuit will go back, for a possible trial, to the court of U.S. District Judge James Donato in San Francisco, who approved the class action last April.

Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000 and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
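Putting the article’s figures together gives a rough sense of the scale. A minimal back-of-the-envelope sketch, using only the per-violation amounts and class-size estimate cited above (and assuming one violation per class member, which a court may or may not accept):

```python
# Statutory damages range under Illinois' BIPA, per the figures in the article.
# Assumes one violation per affected user, purely for illustration.
NEGLIGENT_PER_VIOLATION = 1_000    # dollars, negligent violation
INTENTIONAL_PER_VIOLATION = 5_000  # dollars, intentional/reckless violation
CLASS_SIZE = 7_000_000             # article's estimate of potential class members

low = CLASS_SIZE * NEGLIGENT_PER_VIOLATION
high = CLASS_SIZE * INTENTIONAL_PER_VIOLATION
print(f"${low / 1e9:.0f}B to ${high / 1e9:.0f}B")  # prints "$7B to $35B"
```

Even at the negligent end, the exposure dwarfs the $5 billion FTC penalty mentioned below.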

“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”

These civil damages could come on top of fines that Facebook has already paid to the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That resulted in one of the single largest penalties levied against a U.S. technology company. Facebook is potentially on the hook for a $5 billion payout to the U.S. government. That penalty is still subject to approval by the Justice Department.

Facebook still full of groups trading fake reviews, says consumer group

Facebook has failed to clean up the brisk trade in fake product reviews taking place on its platform, an investigation by the consumer association Which? has found.

In June both Facebook and eBay were warned by the UK’s Competition and Markets Authority (CMA) that they needed to do more to tackle the sale of fake product reviews. On eBay, sellers were offering batches of five-star product reviews in exchange for cash, while Facebook’s platform was found hosting multiple groups where members solicited fake reviews in exchange for free products or cash (or both).

A follow-up look at the two platforms by Which? has found a “significant improvement” in the number of eBay listings selling five-star reviews — with the group saying it found just one listing selling five-star reviews after the CMA’s intervention.

But little appears to have been done to prevent Facebook groups trading in fake reviews — with Which? finding dozens of Facebook groups that it said “continue to encourage incentivised reviews on a huge scale”.

Here’s a sample ad we found doing a ten-second search of Facebook groups… (one of a few we saw that specify they’re after US reviewers)

[Screenshot: a Facebook group post soliciting US-based reviewers]

Which? says it found more than 55,000 new posts across just nine Facebook groups trading fake reviews in July, which it said were generating hundreds “or even thousands” of posts per day.

It points out the true figure is likely to be higher because Facebook caps the number of posts it quantifies at 10,000 (and three of the ten groups had hit that ceiling).
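The effect of that cap can be sketched in a few lines: summing the capped per-group figures yields only a lower bound on the true total. The group counts below are hypothetical, chosen for illustration so that they sum to the 55,000 figure Which? reported:

```python
# Lower-bound estimate of monthly posts across monitored groups, given that
# Facebook caps the per-group post count it reports at 10,000.
# The per-group figures here are hypothetical, for illustration only.
CAP = 10_000
reported_counts = [10_000, 10_000, 10_000, 7_200, 5_400, 4_800, 3_900, 2_100, 1_600]

capped = sum(1 for c in reported_counts if c >= CAP)  # groups that hit the ceiling
lower_bound = sum(reported_counts)                    # true total is at least this

print(f"{capped} groups at the cap; at least {lower_bound:,} posts in total")
```

Because three of the counts sit at the ceiling, the real total could be arbitrarily higher than the sum of the reported figures.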

Which? also found Facebook groups trading fake reviews that had sharply increased their membership over a 30-day period, adding that it was “disconcertingly easy to find dozens of suspicious-looking groups in minutes”.

We also found a quick search of Facebook’s platform instantly serves a selection of groups soliciting product reviews…

[Screenshot: Facebook search results surfacing groups soliciting product reviews]

Which? says it looked in detail at ten groups (it doesn’t name them), all of which contained the word ‘Amazon’ in their group name, finding that all had seen their membership rise over a 30-day period — with some seeing big spikes in members.

“One Facebook group tripled its membership over a 30-day period, while another (which was first started in April 2018) saw member numbers double to more than 5,000,” it writes. “One group had more than 10,000 members after 4,300 people joined it in a month — a 75% increase, despite the group existing since April 2017.”

Which? speculates that the surge in Facebook group members could be a direct result of eBay cracking down on fake reviews sellers on its own platform.

“In total, the 10 [Facebook] groups had a staggering 105,669 members on 1 August, compared with a membership of 85,647 just 30 days prior to that — representing an increase of nearly 19%,” it adds.

Across the ten groups it says there were more than 3,500 new posts promoting incentivised reviews in a single day. Which? also notes that Facebook’s algorithm regularly recommended similar groups to those that appeared to be trading in fake reviews — on the ‘suggested for you’ page.

It also says it found admins of groups it joined listing alternative groups to join in case the original is shut down.

Commenting in a statement, Natalie Hitchins, Which?’s head of products and services, said: “Our latest findings demonstrate that Facebook has systematically failed to take action while its platform continues to be plagued with fake review groups generating thousands of posts a day.

“It is deeply concerning that the company continues to leave customers exposed to poor-quality or unsafe products boosted by misleading and disingenuous reviews. Facebook must immediately take steps to not only address the groups that are reported to it, but also proactively identify and shut down other groups, and put measures in place to prevent more from appearing in the future.”

“The CMA must now consider enforcement action to ensure that more is being done to protect people from being misled online. Which? will be monitoring the situation closely and piling on the pressure to banish these fake review groups,” she added.

Responding to Which?‘s findings in a statement, CMA senior director George Lusty said: “It is unacceptable that Facebook groups promoting fake reviews seem to be reappearing. Facebook must take effective steps to deal with this problem by quickly removing the material and stop it from resurfacing.”

“This is just the start – we’ll be doing more to tackle fake and misleading online reviews,” he added. “Lots of us rely on reviews when shopping online to decide what to buy. It is important that people are able to trust they are genuine, rather than something someone has been paid to write.”

In a statement Facebook claimed it had removed nine of the ten groups Which? reported to it, and said it was “investigating the remaining group”.

“We don’t allow people to use Facebook to facilitate or encourage false reviews,” it added. “We continue to improve our tools to proactively prevent this kind of abuse, including investing in technology and increasing the size of our safety and security team to 30,000.”

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns flagged by a Twitter user about the scope of data processing set out in the Conservative Party’s Privacy Policy, the Information Commissioner’s Office replied: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently attached to an online call to action that asks Brits to tell the party the most “important issue” to them and their family, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick the three most important to them from a pre-populated list of 18 issues. The list runs the gamut from the National Health Service to brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His minority government has been reduced to a majority of just one MP after the party lost a by-election to the Liberal Democrats last week, even as an October 31 brexit-related deadline fast approaches.

People who submit their personal data to the Conservatives’ online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you’ll find a laundry list of examples of types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages contained within it. 

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which it launched in response to concerns about the lack of transparency around political ads on its platform, saying it will retain imprints of ads sent by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS committee later urged the government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, gave little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.

[Screenshot: Facebook Ad Library Report showing the Conservative Party’s ad spend]

The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent, using a quiz app as a wrapper, and exploiting Facebook’s lack of enforcement of its own platform terms, to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian and Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’, and to do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service: advertisers upload the contact details they already hold, and pay Facebook to identify and target other people with a similar social media usage profile.

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile with others in the party’s target database.
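To make the mechanism concrete, here is a deliberately toy sketch of what ‘lookalike’ matching means in principle. This is not Facebook’s actual system or algorithm — all names, vectors and numbers below are invented for illustration. The idea is simply: represent each user as a vector of behavioural signals, then rank non-seed users by similarity to a “seed” list of known supporters.

```python
# Toy illustration of 'lookalike audience' matching (NOT Facebook's
# actual algorithm; all data here is invented). Users are vectors of
# interest signals; we rank non-seed users by cosine similarity to a
# seed list of known supporters.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_audience(seed_profiles, all_users, k):
    """Return the k non-seed users most similar, on average, to the seed list."""
    seed_ids = {uid for uid, _ in seed_profiles}
    scored = []
    for uid, vec in all_users.items():
        if uid in seed_ids:
            continue
        score = sum(cosine(vec, s) for _, s in seed_profiles) / len(seed_profiles)
        scored.append((score, uid))
    scored.sort(reverse=True)
    return [uid for _, uid in scored[:k]]

# Hypothetical interest vectors: [news, sport, politics]
users = {
    "u1": [0.9, 0.1, 0.8],
    "u2": [0.8, 0.2, 0.9],
    "u3": [0.1, 0.9, 0.0],
    "u4": [0.85, 0.15, 0.75],
}
seed = [("u1", users["u1"])]  # one known supporter
print(lookalike_audience(seed, users, 2))  # → ['u4', 'u2']
```

At real scale the inputs are vastly richer (page likes, clicks, demographics) and the matching is far more sophisticated, but the asymmetry is the same: the advertiser supplies a small seed list, and the platform’s surveillance of everyone else does the rest.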

Facebook users currently have no way to block being targeted by political advertisers on Facebook, nor indeed any way to switch off microtargeted ads in general, which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder, campaign management software developed in the US a decade ago, which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder, linking social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information and collation of social media with membership lists without adequate privacy assessments

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns, including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

‘The Great Hack’: Netflix doc unpacks Cambridge Analytica, Trump, Brexit and democracy’s death

It’s perhaps not for nothing that The Great Hack – the new Netflix documentary about the connections between Cambridge Analytica, the US election and Brexit, out on July 23 – opens with a scene from Burning Man. There, Brittany Kaiser, a former employee of Cambridge Analytica, scrawls the name of the company onto a strut of ‘the temple’ that will eventually get burned in that fiery annual ritual. It’s an apt opening.

There are probably many of us who’d wish quite a lot of the last couple of years could be thrown into that temple fire, but this documentary is the first I’ve seen to expertly unpick the real-world dumpster fire of social media, dark advertising and global politics, which have become inextricably, and often fatally, combined.

The documentary is also the first that you could plausibly recommend to those of your relatives and friends who don’t work in tech, as it explains how social media – specifically Facebook – is now manipulating our lives and society, whether we like it or not.

As New York Professor David Carroll puts it at the beginning, Facebook gives “any buyer direct access to my emotional pulse” – and that included political campaigns during the Brexit referendum and the Trump election. Privacy campaigner Carroll is pivotal to the film’s story of how our data is being manipulated and essentially kept from us by Facebook.

The UK’s referendum decision to leave the European Union, in fact, became “the petri dish” for a Cambridge Analytica experiment, says Guardian journalist Carole Cadwalladr. She broke the story of how the political consultancy, led by Eton-educated CEO Alexander Nix, applied techniques normally used by ‘psyops’ operatives in Afghanistan to the democratic operations of the US and UK, and many other countries, over a chilling 20+ year history. Watching this film, you really start to wonder if history has been warped towards a sickening dystopia.


The petri-dish of Brexit worked. Millions of adverts, explains the documentary, targeted individuals, exploiting fear and anger, to switch them from ‘persuadables’, as CA called them, into passionate advocates for, first Brexit in the UK, and then Trump later on.

Switching to the US, the filmmakers show how CA worked directly with Trump’s “Project Alamo” campaign, spending a million dollars a day on Facebook ads ahead of the 2016 election.

The film expertly explains the timeline of how CA first worked on Ted Cruz’s campaign, and nearly propelled that lackluster candidate into first place in the Republican nomination race. It was then that the Trump campaign picked up on CA’s military-like operation.

After loading up the psychographic survey information CA had obtained from Aleksandr Kogan, the Cambridge University academic who orchestrated the harvesting of Facebook data, the world had become their oyster. Or, perhaps more accurately, their oyster farm.

Back in London, Cadwalladr notices triumphant Brexit campaigners fraternizing with Trump and starts digging. There is a thread connecting them to Breitbart owner Steve Bannon. There is a thread connecting them to Cambridge Analytica. She tugs on those threads and, like that iconic scene in ‘The Hurt Locker’ where all the threads pull up unexploded mines, she starts to realize that Cambridge Analytica links them all. She needs a source though. That came in the form of former employee Chris Wylie, a brave young man who was able to unravel many of the CA threads.

But the film’s attention is often drawn back to Kaiser, who had worked first on US political campaigns and then on Brexit for CA. She had been drawn to the company by smooth-talking CEO Nix, who begged: “Let me get you drunk and steal all of your secrets.”

But was she a real whistleblower? Or was she trying to cover her tracks? How could someone who’d worked on the Obama campaign switch to Trump? Was she a victim of Cambridge Analytica, or one of its villains?

British political analyst Paul Hilder manages to get her to come to the UK to testify before a parliamentary inquiry. There is high drama as her part in the story unfolds.

Kaiser appears in various guises which vary from idealistically naive to stupid, from knowing to manipulative. It’s almost impossible to know which. But hearing about her revelation as to why she made the choices she did… well, it’s an eye-opener.


Both she and Wylie have complex stories in this tale, where not everything seems to be as it is, reflecting our new world, where truth is increasingly hard to determine.

Other characters come and go in this story. Zuckerberg makes an appearance in Congress, and we learn of Facebook’s casual attitude to its complicity in these political earthquakes. Although if you’re reading TechCrunch, then you will probably know at least part of this story.

The film was created for Netflix by Jehane Noujaim and Karim Amer, the Egyptian-American filmmakers behind “The Square”, about the Egyptian revolution of 2011. To them, the way Cambridge Analytica applied its methods to online campaigning was just as much a revolution as Egyptians toppling a dictator from Cairo’s iconic Tahrir Square.

For them, the huge irony is that “psyops”, or psychological operations, used on Muslim populations in Iraq and Afghanistan after the 9/11 terrorist attacks, ended up being used to influence Western elections.

Cadwalladr stands head and shoulders above all as a bastion of dogged journalism, even as she is attacked from all quarters, and still is to this day.

What you won’t find out from this film is what happens next. For many, questions remain on the table: What will happen now that Facebook is entering cryptocurrency? Will that mean it could be used for dark election campaigning? Will people be paid for their votes next time, not just in Likes? Kaiser has a bitcoin logo on the back of her phone. Is that connected? The film doesn’t comment.

But it certainly unfolds like a slow-motion car crash, where democracy is the car and you’re inside it.

UK-based women’s networking and private club, AllBright, raises $18.8 million as it expands into the US

AllBright, the London-based women’s membership club backed by private real estate investment firm Cain International, has raised $18.8 million to expand into the U.S.

The company’s new round was led by Cain International and was designed to take AllBright into three U.S. locations — Los Angeles, New York and Washington, DC.

The company said that the new facilities would be opening in the coming months.

Coupled with the launch of a new networking application called AllBright Connect and the company’s AllBright Magazine, the women’s networking organization is on a full-on media blitz.

Other investors in the round include Allan Leighton, who serves as the company’s non-executive chairman; Gail Mandel, who acquired Love Home Swap (a company founded by AllBright’s co-founder Debbie Wosskow); Stephanie Daily Smith, a former finance director to Hillary Clinton; and Darren Throop, the founder, president and chief executive of Entertainment One.

A spokesperson for the company said that the new financing would value the company at roughly $100 million.

The club’s current members include actors, members of the House of Lords and other fancy pants, high-falutin folks from the worlds of politics, business and entertainment.

The club’s first American location will be in West Hollywood, and is slated to open in September 2019. The largest club, in Mayfair, has five floors spanning more than 12,000 square feet, featuring rooftop terraces, a dedicated space for coaching and mentoring, a small restaurant and a bar.

How startups can make influencer marketing work on a budget

Influencer marketing has ballooned into a $25 billion industry, yet many marketing managers are left confused, because for them it’s really not delivering results that justify the hype.

Here’s the thing. Influencer marketing is not a one-size-fits-all strategy like Facebook or AdWords advertising. Each company needs to take a closer look at what influencer marketing can achieve, where it falls down, and how to do a better job with this latest form of marketing, which delivers, on average, $6.50 of value for every $1 spent.

The analysis below relies on clients and case studies from our experience at OpenSponsorship.com (my company), which is the largest marketplace connecting brands with over 5,500 professional athletes for marketing campaigns.

With over 3,500 deals to date across clients as big as Vitamin Shoppe and Anheuser-Busch, established players like Jabra and Project Repat, and new startups like Brazyn and Gutzy, we have seen a lot go wrong (who knew you could disable comments on a post!) and a lot go right (an unknown skier’s $100 Instagram post, published right before the Winter Olympics, went viral after he won silver)!

Thanks to our in-house data experts, integrations with IBM Watson, robust ROI tracking tools and 10+ years of experience combining the learnings of sports sponsorship with influencer marketing, we have gained extensive insights into campaign strategies.

We will share our learnings about what criteria to consider when choosing the best influencer to work with, figuring out how much to pay the influencer, what rights to ask for in the deal, what terms and conditions are reasonable and how to track ROI for the deal.





Who is the right influencer? 

At OpenSponsorship, we match brands with athletes for marketing campaigns, with a view to expanding further into other areas of media and entertainment, such as music artists, comedians and actors. Even within the athlete world, there is a spectrum running from micro-influencers, such as yogis, triathletes and marathon runners, all the way to macro-influencers, such as NFL quarterbacks and starting NBA point guards, and everything in between.

Our 3 recommendations for picking the right influencers are:

Facebook’s content oversight board plan is raising more questions than it answers

Facebook has produced a report summarizing feedback it’s taken in on its idea of establishing a content oversight board to help arbitrate on moderation decisions.

Aka the ‘supreme court of Facebook’ concept first discussed by founder Mark Zuckerberg last year, when he told Vox:

[O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Facebook has since suggested the oversight board will be up and running later this year. And has just wheeled out its global head of policy and spin for a European PR push to convince regional governments to give it room for self-regulation 2.0, rather than slapping it with broadcast-style regulations.

The latest report, which follows a draft charter unveiled in January, rounds up input fed to Facebook via six “in-depth” workshops and 22 roundtables convened by Facebook and held in locations of its choosing around the world.

In all, Facebook says the events were attended by 650+ people from 88 different countries — though it further qualifies that by saying it had “personal discussions” with more than 250 people and received more than 1,200 public consultation submissions.

“In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed,” Facebook writes.

It goes without saying that this input represents a minuscule fraction of the actual ‘population’ of Facebook’s eponymous platform, which now exceeds 2.2BN accounts (an unknown portion of which will be fake/duplicates), while its operations stretch to more than double the number of markets represented by individuals at the events.

The feedback exercise — as indeed the concept of the board itself — is inevitably an exercise in opinion abstraction. Which gives Facebook leeway to shape the output as it prefers. (And, indeed, the full report notes that “some found this public consultation ‘not nearly iterative enough, nor transparent enough, to provide any legitimacy’ to the process of creating the Board”.)

In a blog post providing its spin on the “global feedback and input”, Facebook culls three “general themes” it claims emerged from the various discussions and submissions — namely that: 

  • People want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties, writing: “The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality”. Though the full report flags up the challenge of ensuring the sought-for independence, and it’s not clear Facebook will be able to create a structure that can stand apart from its own company or indeed other lobbyists
  • How the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public are key considerations — though those vital details remain tbc. “In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation,” Facebook suggests, implying the boundaries of the board are unlikely to be firmly fixed
  • People also want a board that’s “as diverse as the many people on Facebook and Instagram” — the problem being that’s clearly impossible, given the planet-spanning size of Facebook platforms. Another desire Facebook highlights is for the board to be able to encourage it to make “better, more transparent decisions”. The need for board decisions (and indeed decisions Facebook takes when setting up the board) to be transparent emerges as a major theme in the report. In terms of the board’s make-up, Facebook says it should comprise experts with different backgrounds, different disciplines, and different viewpoints — “who can all represent the interests of a global community”. Though there’s clearly going to be differing views on how or even whether that’s possible to achieve; and therefore questions over how a 40-odd member body, that will likely rarely sit in plenary, can plausibly act as a prism for Facebook’s user-base

The report is worth reading in full to get a sense of the broad spectrum of governance questions and conundrums Facebook is here wading into.

If, as it very much looks, this is a Facebook-configured exercise in blame-spreading for the problems its platform hosts, the surface area for disagreement and dispute will clearly be massive, and from the company’s point of view that already looks like a win, given how, since 2016, Facebook (and Zuckerberg) have been the conduit for so much public and political anger linked to the spreading and acceleration of harmful online content.

Differing opinions will also provide cover for Facebook to justify starting “narrow”, which it has said it will do with the board, aiming to have something up and running by the end of this year. But that just means it’ll be managing expectations of how little actual oversight will flow right from the very start.

The report also shows that Facebook’s claimed ‘listening ear’ for a “global perspective” has some very hard limits.

So while those involved in the consultation are reported to have repeatedly suggested the oversight board should not just be limited to content judgement — but should also be able to make binding decisions related to things like Facebook’s newsfeed algorithm or wider use of AI by the company — Facebook works to shut those suggestions down, underscoring the scope of the oversight will be limited to content.

“The subtitle of the Draft Charter — “An Oversight Board for Content Decisions” — made clear that this body would focus specifically on content. In this regard, Facebook has been relatively clear about the Board’s scope and remit,” it writes. “However, throughout the consultation period, interlocutors often proposed that the Board hear a wide range of controversial and emerging issues: newsfeed ranking, data privacy, issues of local law, artificial intelligence, advertising policies, and so on.”

It goes on to admit that “the question persisted: should the Board be restricted to content decisions only, without much real influence over policy?” — before picking a selection of responses that appear intended to fuzz the issue, allowing it to position itself as seeking a reasoned middle ground.

“In the end, balance will be needed; Facebook will need to resolve tensions between minimalist and maximalist visions of the Board,” it concludes. “Above all, it will have to demonstrate that the Oversight Board — as an enterprise worth doing — adds value, is relevant, and represents a step forward from content governance as it stands today.”

Sample cases the report suggests the board could review — as suggested by participants in Facebook’s consultation — include:

  • A user shared a list of men working in academia, who were accused of engaging in inappropriate behavior and/or abuse, including unwanted sexual advances;
  • A Page that commonly uses memes and other forms of satire shared posts that used discriminatory remarks to describe a particular demographic group in India;
  • A candidate for office made strong, disparaging remarks to an unknown passerby regarding their gender identity and livestreamed the interaction. Other users reported this due to safety concerns for the latter person;
  • A government official suggested that a local minority group needed to be cautious, comparing that group’s behavior to that of other groups that have faced genocide

So, again, it’s easy to see the kinds of controversies and indeed criticisms that individuals sitting on Facebook’s board will be opening themselves up to — whichever way their decisions fall.

A content review board that will inevitably remain linked to (if not also reimbursed via) the company that establishes it, and that will not be granted powers to set wider Facebook policy, but will instead be tasked with the impossible job of trying to please all of the Facebook users (and critics) all of the time, does certainly risk looking like Facebook’s stooge: a conduit for channeling dirty and political content problems that have the potential to go viral and threaten its continued ability to monetize the stuff that’s uploaded to its platforms.

Facebook’s preferred choice of phrase to describe its users — “global community” — is a tellingly flat one in this regard.

The company conspicuously avoids talk of communities, plural; instead the closest we get here is a claim that its selective consultation exercise is “ensuring a global perspective”, as if a singular essence can somehow be distilled from a non-representative sample of human opinion — when in fact the stuff that flows across its platforms is quite the opposite; multitudes of perspectives from individuals and communities whose shared use of Facebook does not an emergent ‘global community’ make.

This is why Facebook has struggled to impose a single set of ‘community standards’ across a platform that spans so many contexts; a one-size-fits all approach very clearly doesn’t fit.

Yet it’s not at all clear how Facebook creating yet another layer of content review changes anything much for that challenge — unless the oversight body is mostly intended to act as a human shield for the company itself, putting a firewall between it and certain highly controversial content; aka Facebook’s supreme court of taking the blame on its behalf.

Just one of the difficult content moderation issues embedded in the businesses of sociotechnical, planet-spanning social media platform giants like Facebook — hate speech — defies a top-down ‘global’ fix.

As Evelyn Douek wrote last year vis-a-vis hate speech on the Lawfare blog, after Zuckerberg had floated the idea of a governance structure for online speech: “Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent… This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second.”

A cynic would say Facebook knows it can’t fix planet-scale content moderation and still turn a profit. So it needs a way to distract attention and shift blame.

If it can get enough outsiders to buy into its oversight board — allowing it to pass off the oxymoron of “global governance”, via whatever self-styled structure it allows to emerge from these self-regulatory seeds — the company’s hope must be that the device also works as a bolster against political pressure.

Both over particular problem/controversial content, and also as a vehicle to shrink the space for governments to regulate Facebook.

In a video discussion also embedded in Facebook’s blog post — in which Zuckerberg couches the oversight board project as “a big experiment that we hope can pioneer a new model for the governance of speech on the Internet” — the Facebook founder also makes reference to calls he’s made for more regulation of the Internet. As he does so he immediately qualifies the statement by blending state regulation with industry self-regulation — saying the kind of regulation he’s asking for is “in some cases by democratic process, in other cases through independent industry process”.

So Zuckerberg is making a clear pitch to position Facebook as above the rule of nation state law — and setting up a “global governance” layer is the self-serving vehicle of choice for the company to try and overtake democracy.

Even if Facebook’s oversight board’s structure is so cunningly fashioned as to present to a rationally minded individual as, in some senses, ‘independent’ from Facebook, its entire being and function will remain dependent on Facebook’s continued existence.

Whereas if individual markets impose their own statutory regulations on Internet platforms, based on democratic and societal principles, Facebook will have no control over the rules they impose, direct or otherwise — with uncontrolled compliance costs falling on its business.

It’s easy to see which model sits most easily with Zuckerberg the businessman — a man who has also demonstrated he will not be held personally accountable for what happens on his platform.

Not when he’s asked by one (non-US) parliament, nor even by representatives from nine parliaments — all keen to discuss the societal fallouts of political disinformation and hate speech spread and accelerated on Facebook.

Turns out that’s not the kind of ‘global perspective’ Facebook wants to sell you.