‘The Great Hack’: Netflix doc unpacks Cambridge Analytica, Trump, Brexit and democracy’s death

It’s perhaps not for nothing that The Great Hack – the new Netflix documentary about the connections between Cambridge Analytica, the US election and Brexit, out on July 23 – opens with a scene from Burning Man. There, Brittany Kaiser, a former employee of Cambridge Analytica, scrawls the name of the company onto a strut of ‘the temple’ that will eventually get burned in that fiery annual ritual. It’s an apt opening.

There are probably many of us who'd wish quite a lot of the last couple of years could be thrown into that temple fire, but this documentary is the first I've seen to expertly unpick the real-world dumpster fire of social media, dark advertising and global politics, which have become inextricably, and often fatally, combined.

The documentary is also the first that you could plausibly recommend to those of your relatives and friends who don't work in tech, as it explains how social media – specifically Facebook – is now manipulating our lives and society, whether we like it or not.

As New York professor David Carroll puts it at the beginning, Facebook gives "any buyer direct access to my emotional pulse" – and that included political campaigns during the Brexit referendum and the Trump election. Privacy campaigner Carroll is pivotal to the film's story of how our data is being manipulated and essentially kept from us by Facebook.

The UK's referendum decision to leave the European Union, in fact, became "the petri dish" for a Cambridge Analytica experiment, says Guardian journalist Carole Cadwalladr. She broke the story of how the political consultancy, led by Eton-educated CEO Alexander Nix, applied techniques normally used by 'psyops' operatives in Afghanistan to the democratic processes of the US and UK, and many other countries, over a chilling 20+ year history. Watching this film, you start to wonder if history has been warped towards a sickening dystopia.

The petri dish of Brexit worked. Millions of adverts, the documentary explains, targeted individuals, exploiting fear and anger to turn 'persuadables', as CA called them, into passionate advocates – first for Brexit in the UK, and then for Trump later on.

Switching to the US, the filmmakers show how CA worked directly with Trump’s “Project Alamo” campaign, spending a million dollars a day on Facebook ads ahead of the 2016 election.

The film expertly explains the timeline of how CA had first worked on Ted Cruz's campaign, and nearly propelled that lackluster candidate into first place in the Republican nominations. It was then that the Trump campaign picked up on CA's military-like operation.

After loading up the psychographic survey information CA had obtained from Aleksandr Kogan, the Cambridge University academic who orchestrated the harvesting of Facebook data, the world had become their oyster. Or, perhaps more accurately, their oyster farm.

Back in London, Cadwalladr notices triumphant Brexit campaigners fraternizing with Trump and starts digging. There is a thread connecting them to Breitbart owner Steve Bannon. There is a thread connecting them to Cambridge Analytica. She tugs on those threads and, like that iconic scene in 'The Hurt Locker' where all the threads pull up unexploded mines, she starts to realize that Cambridge Analytica links them all. She needs a source, though. That came in the form of former employee Chris Wylie, a brave young man who was able to unravel many of the CA threads.

But the film’s attention is often drawn back to Kaiser, who had worked first on US political campaigns and then on Brexit for CA. She had been drawn to the company by smooth-talking CEO Nix, who begged: “Let me get you drunk and steal all of your secrets.”

But was she a real whistleblower? Or was she trying to cover her tracks? How could someone who’d worked on the Obama campaign switch to Trump? Was she a victim of Cambridge Analytica, or one of its villains?

British political analyst Paul Hilder manages to get her to come to the UK to testify before a parliamentary inquiry. There is high drama as her part in the story unfolds.

Kaiser appears in guises ranging from idealistically naive to stupid, from knowing to manipulative. It's almost impossible to tell which is real. But hearing about her revelation as to why she made the choices she did… well, it's an eye-opener.

Both she and Wylie have complex stories in this tale, where not everything is as it seems – reflecting our new world, where truth is increasingly hard to determine.

Other characters come and go in this story. Zuckerberg makes an appearance in Congress, and we learn of the cavalier attitude Facebook took to its complicity in these political earthquakes. Although if you're reading TechCrunch, then you will probably know at least part of this story.

The film was created for Netflix by Jehane Noujaim and Karim Amer, the Egyptian-American filmmakers behind "The Square", about the Egyptian revolution of 2011. To them, the way Cambridge Analytica applied its methods to online campaigning was just as much a revolution as Egyptians toppling a dictator from Cairo's iconic Tahrir Square.

For them, the huge irony is that "psyops", or psychological operations, used on Muslim populations in Iraq and Afghanistan after the 9/11 terrorist attacks, ended up being used to influence Western elections.

Cadwalladr stands head and shoulders above all as a bastion of dogged journalism, even as she is attacked from all quarters, and still is to this day.

What you won't find out from this film is what happens next. For many, questions remain on the table: What will happen now that Facebook is entering cryptocurrency? Will that mean it could be used for dark election campaigning? Will people be paid for their votes next time, not just in Likes? Kaiser has a bitcoin logo on the back of her phone. Is that connected? The film doesn't comment.

But it certainly unfolds like a slow-motion car crash, where democracy is the car and you’re inside it.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook's AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn't robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, and Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack, despite the social network's efforts to cherry-pick 'vanity stats' that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include The University of Maryland, Cornell University and The University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

After criticism over moderator treatment, Facebook raises wages and boosts support for contractors

Facebook has been repeatedly (and rightly) hammered for its treatment of the content moderators who ensure the site doesn’t end up becoming a river of images, videos and articles embodying the worst of humanity.

Those workers, and the hundreds (if not thousands) of other contractors Facebook employs to cook food, provide security and provide transportation for the social media giant's highly compensated staff, are getting a salary boost and a commitment to better care for the toll these jobs can take on some workers.

“Today we’re committing to pay everyone who does contract work at Facebook in the US a wage that’s more reflective of local costs of living,” the company said in a statement. “And for those who review content on our site to make sure it follows our community standards, we’re going even further. We’re going to provide them a higher base wage, additional benefits, and more supportive programs given the nature of their jobs.”

Contractors in the U.S. were being paid a $15 minimum wage; received 15 paid days off for holidays, sick time and vacation; and received a $4,000 new child benefit for parents who don't receive paid leave. Since 2016, Facebook has also required that contractors assigned to the company be provided with comprehensive healthcare.

Now, it's boosting those wages to a $20 minimum in the San Francisco Bay Area, New York and Washington, and to $18 in Seattle.

“After reviewing a number of factors including third-party guidelines, we’re committing to a higher standard that better reflects local costs of living,” the company said. “We’ll be implementing these changes by mid-next year and we’re working to develop similar standards for other countries.”

Those raises apply to contractors who don't work on content moderation. For contractors involved in moderation, the company committed to a $22 per hour minimum wage in the Bay Area, New York and Washington; $20 per hour in Seattle; and $18 per hour in other U.S. metro areas.

Facebook also said it will institute a similar program for international standards going forward. That’s important, as a bulk of the company’s content moderation work is actually done overseas, in places like the Philippines.

Content moderators will also have access to “ongoing well-being and resiliency training.” Facebook also said it was adding preferences to let reviewers customize how they want to view content — including an option to blur graphic images by default before reviewing them. Facebook will also provide around-the-clock on-site counseling, and will survey moderators at partner sites about what reviewers actually need.

Last month, the company said it convened its first vendor partner summit at its Menlo Park, Calif. offices and is now working to standardize contracts with its global vendors. To ensure that vendors are meeting their commitments, the company is going to hold unannounced onsite checks and a biannual audit and compliance program for content review teams.

Virtual Instagram celebrity ‘Lil Miquela’ has had her account hacked

The Instagram account for the virtual celebrity known as Lil Miquela has been hacked.

The multi-racial fashionista and advocate for multiculturalism, whose account is followed by nearly 1 million people, has had “her” account taken over by another animated Instagram account holder named “Bermuda.”

Welcome to the spring of 2018.

The hack of the @Lilmiquela account started earlier today, but the Bermuda avatar has long considered Miquela her digital nemesis and has taken steps to hack others of Miquela's social accounts — like Spotify — before.

Because this is the twenty-first century — and given the polarization of the current political climate — it’s not surprising that the very real culture wars between proponents of pluralism and the Make America Great Again movement would take their fight to feuding avatars.

In posts on the Lil Miquela account, Bermuda proudly flaunts her artificial identity… and a decidedly pro-Trump message.

Unlike Miquela, whose account plays with the notion of a physical presence for a virtual avatar, Bermuda is very clearly a simulation. And one with political views that are diametrically opposed to those espoused by Miquela (whose promotion of openness and racial equality has been a feature that’s endeared the account to followers and fashion and culture magazines alike).

Miquela Sousa, a Brazilian-American from Downey, Calif., launched her Instagram account in 2016. Since the account’s appearance, Miquela has been a subject of speculation in the press and online.

Appearing on magazine covers, and consenting to do interviews with reporters, Miquela has been exploring notions of celebrity, influence and culture since her debut on Facebook's now most popular social media site.

A person familiar with the Lil Miquela account said that Instagram was working on regaining control.

Snapchat’s Snap Map comes to the web, including in embeddable form

The Snap Map is a feature that received a mixed response when it landed in the Snapchat app, since it basically let you see where all your friends on the platform were at any given time – provided they were okay with sharing that info. Now, there's a version of Snap Map available for anyone to view on the web, but it's less about checking out where your pals are at, and…

Snapchat will now let you share some Stories outside the app

There's no getting around the fact that Snapchat has a user growth problem, so it's smart that the company is making it easier for people who like and use Snapchat to share content they find within beyond the app itself. Today, Snap is launching the ability to share some public Stories via links that then display the Story selected on Snapchat.com. Stories eligible for sharing…

Snap falls below its IPO price for the first time

Oh Snap. The Snapchat parent had a difficult day on the stock market, closing at $16.99. It's officially fallen below its $17 IPO price for the first time. This is significant because it means that, overall, public investors have lost money on the company since its March IPO. A money-losing reputation can be hard to recover from.