Tag Archives: New Zealand

Tinder adds sexual orientation and gender identity to its profiles

Tinder is adding to its profiles information about sexual orientation and gender identity.

The company worked with the LGBTQ advocacy organization GLAAD on changes to its dating app to make it more inclusive.

Users who want to add or edit information about their sexual orientation can now do so from their profile. When a Tinder user taps the “orientation” selection they can choose up to three terms that describe their sexual orientation. Those selections can be kept private or displayed publicly, and will likely be used to inform potential matches on the app.

Tinder also updated the onboarding experience for new users so they can include their sexual orientation as soon as they sign up for the dating app.

Tinder is also giving users more control over how their matches are ordered. In the “Discovery Preferences” field, users can choose to see people of the same orientation first.

The company said this is a first step in its efforts to be more inclusive. It will continue to work with GLAAD to refine its products and is making the new features available in the U.S., U.K., Canada, Ireland, India, Australia and New Zealand throughout June.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a newly introduced policy that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include The University of Maryland, Cornell University and The University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

Facebook says it filed a US lawsuit to shut down a follower-buying service in New Zealand

Facebook is cracking down on services that promise to help Instagram users buy themselves a large following on the photo app. The social network said today that it has filed a lawsuit against a New Zealand-based company that operates one such ‘follower-buying service.’

The suit is in a U.S. court and is targeting the three individuals running the company, which has not been named.

“The complaint alleges the company and individuals used different companies and websites to sell fake engagement services to Instagram users. We previously suspended accounts associated with the defendants and formally warned them in writing that they were in violation of our Terms of Use, however, their activity persisted,” Facebook wrote in a post.

We were not initially able to get a copy of the lawsuit, but have asked Facebook for further details.

This action comes months after a TechCrunch exposé identified 17 follower-buying services that were using Instagram’s own advertising network to peddle their wares to users of the service.

Instagram responded by saying it had removed all the ads, as well as disabled all the Facebook Pages and Instagram accounts of the services that we reported were violating its policies. However, just one day later, TechCrunch found advertising from two of the companies on Instagram, while a further five were found to be paying to promote policy-violating follower-growth services.

Facebook has stepped up its efforts to crack down on “inauthentic behavior” on its platforms in recent months. That’s included removing accounts and pages from Facebook and Instagram in countries that include India, Pakistan, the Philippines, the U.K., Romania, Iran, Russia, Macedonia and Kosovo this year. Higher-profile action has included the removal of UK far-right activist Tommy Robinson from Facebook and, in Myanmar, where Facebook has been much criticized, a ban on four armed groups.

Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold in real time on its platform reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though, as we pointed out in our earlier report, those stats are cherry-picked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern, told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

Further details it chooses to dwell on in the update include how the AIs it uses to aid the human content review process of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of first-person shooter videogame footage on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”
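
To make Facebook’s review-capacity point concrete, here is a back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration (Facebook discloses none of these figures), but it shows why even a seemingly strict false-positive rate buries a single real atrocity under thousands of flagged gaming streams.

```python
# Back-of-the-envelope illustration of the reviewer-overload problem Facebook
# describes above. All numbers are invented for illustration only.

daily_live_streams = 3_000_000   # "millions" of live streams per day (assumed)
true_violent_events = 1          # genuinely violent broadcasts on a bad day (assumed)
false_positive_rate = 0.001      # classifier wrongly flags 0.1% of innocuous streams (assumed)
recall = 0.9                     # classifier catches 90% of real events (assumed)

flagged_innocuous = (daily_live_streams - true_violent_events) * false_positive_rate
flagged_real = true_violent_events * recall

precision = flagged_real / (flagged_real + flagged_innocuous)
print(f"Streams flagged per day: {flagged_innocuous + flagged_real:,.0f}")
print(f"Chance a flagged stream is a real event: {precision:.3%}")
# Roughly 3,000 flagged gaming streams for every real event: the haystack
# Facebook's human reviewers would have to dig through.
```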

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover, AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — as TV stations do — wouldn’t help catch inappropriate real-time content.)

At the same time, Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: aka Facebook users taking the time to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.
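
Pieced together from Facebook’s description, the acceleration logic appears to boil down to something like the sketch below. The function name, the report category strings and the three-hour “recently live” window are our assumptions; the post does not spell out the actual implementation.

```python
# A minimal sketch of the report-acceleration logic Facebook describes, as we
# read it. Names, category strings and the time window are assumptions, not
# Facebook's actual implementation.
from datetime import datetime, timedelta
from typing import Optional

RECENTLY_LIVE_WINDOW = timedelta(hours=3)  # "the past few hours" (assumed value)

def should_accelerate(report_category: str,
                      is_live: bool,
                      stream_ended_at: Optional[datetime],
                      now: datetime) -> bool:
    """Return True if a user report should jump the moderation queue."""
    if is_live:
        # Reports on streams that are still live always get accelerated review,
        # because there may still be real-world harm to interrupt.
        return True
    if stream_ended_at is not None and now - stream_ended_at <= RECENTLY_LIVE_WINDOW:
        # Last year's expansion: recently-live videos also qualify, but per the
        # post only when the report is categorised as suicide/self-harm.
        return report_category == "suicide_or_self_harm"
    return False

# The Christchurch stream had already ended and the first reports used other
# categories, so under this logic none of them would have been accelerated.
```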

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all, Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
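
For readers wondering what visual matching means in practice, here is a minimal sketch of perceptual hashing, the family of techniques such matching systems are typically built on. It is a toy average-hash written against the Pillow imaging library, not Facebook’s actual technology, and the file names are placeholders.

```python
# A toy perceptual hash ("average hash"): two frames that look alike should
# produce fingerprints that differ in only a few bits, even after re-encoding.
# Requires the Pillow imaging library (pip install Pillow).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint of its coarse structure."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

# A small Hamming distance suggests the same content lightly re-encoded or
# cropped; overlays, screen re-filming and heavy re-cuts push the distance
# past any practical threshold, which is why hundreds of "visually-distinct"
# variants kept slipping through. File names below are placeholders.
# if hamming(average_hash("known_bad_frame.png"),
#            average_hash("new_upload_frame.png")) <= 5:
#     ...treat the upload as a match and block it
```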

In a section on next steps, Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.

Facebook expands downvote tests on comments

Mark Twain had it right: There’s no such thing as a new idea. To wit: Facebook is currently testing arrows to let users upvote or downvote individual comments in some of its international markets. Digg, eat your heart out. Reddit, roll over.

This particular trial of upvoting/downvoting buttons is limited to New Zealand and Australia for now, according to Facebook (via The Guardian).

The latest test is a bit different to a downvote test Facebook ran in the US back in February — when it just offered a downvote option. (And if clicked it hid the comment and gave users additional reporting options such as: “Offensive”, “Misleading”, and “Off Topic”.)

The latest international test looks a bit less negative — with an overall score being recorded next to the arrows which could at least reward users with some positive feels if their comment gets lots of upvotes. Negative scores could do the opposite though.

It’s not certain whether the company will commit to rolling out the feature in this form — a spokesman told us this is an early test, with no decision made on whether to roll it out for Facebook’s 2.2BN+ user base — but its various tests in this area suggest it’s interested in having another signal for rating or ranking comments.

In a statement attributed to a spokesperson it told us: “People have told us they would like to see better public discussions on Facebook, and want spaces where people with different opinions can have more constructive dialogue. To that end, we’re running a small test in New Zealand which allows people to upvote or downvote comments on public Page posts. Our hope is that this feature will make it easier for us to create such spaces, by ranking the comments that readers believe deserve to rank highest, rather than the comments that get the strongest emotional reaction.”
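
Facebook has not said how it would actually rank voted comments. Purely for illustration, one standard approach to ranking by up/down votes is the lower bound of the Wilson score interval (the method Reddit popularised for its “best” comment sort), sketched here in Python.

```python
# Rank comments by the lower bound of the Wilson score confidence interval on
# their upvote fraction. Illustrative only; not Facebook's disclosed method.
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Conservative estimate of the 'true' upvote fraction at ~95% confidence."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - spread) / denom

# Hypothetical comments: (upvotes, downvotes)
comments = {"insightful reply": (120, 14), "hot take": (300, 290), "new comment": (2, 0)}
for text, (up, down) in sorted(comments.items(),
                               key=lambda kv: wilson_lower_bound(*kv[1]),
                               reverse=True):
    print(f"{wilson_lower_bound(up, down):.2f}  {text}")
```

The appeal of the lower bound is that a brand-new comment with two upvotes does not leapfrog a comment with 120 upvotes and 14 downvotes, which is closer to the “comments that readers believe deserve to rank highest” behaviour Facebook describes than raw reaction counts would be.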

The test looks to have been going on for a couple of weeks at least at this point — a reader emailed TC on April 14 with screengrabs of the trial on comments for New Zealand Commonwealth Games content…


Facebook emphasized the feature is not an official dislike button. If rolled out, a spokesman said, it would not replace the suite of emoji reactions the platform already offers so people can record how they feel about an individual Facebook post (and reactions already include thumbs up/thumbs down emoji options).

Rather, its focus is on giving users more granular controls to rate comments on Facebook posts.

The spokesman told us the feature test is intended to see if users find the upvote/downvote buttons a productive option to give feedback about how informative or constructive a comment is.

Facebook users with access to the trial who hover over a comment will see a pop-up box that explains how to use the feature, according to the Guardian — with usage of the up arrow encouraged via text telling them to “support better comments” and “press the comment up if you think the comment is helpful or insightful”; while the down arrow is explained as a way to “stop bad comments” and the further instruction to: “Press the down button if a comment has bad intentions or is disrespectful. It’s still ok to disagree in a respectful way.”

It’s likely Facebook is toying with using comment rating buttons to encourage a broader crowdsourcing effort to help it with the myriad, complex content moderation challenges associated with its platform.

Responding quickly enough to hate speech remains a particular problem for it — and a hugely serious one in certain regions and markets.

In Myanmar, for example, the UN has accused the platform of accelerating ethnic violence by failing to prevent the spread of violent hate speech. Local groups have also blasted Facebook for failing to be responsive enough to the problem.

In a statement responding to a critical letter sent last month by Myanmar civil society organizations, Facebook conceded: “We should have been faster and are working hard to improve our technology and tools to detect and prevent abusive, hateful or false content.”

And while the company has said it will double the headcount of staff who work on safety and security issues, to 20,000 by the end of this year, that’s still a tiny drop in the ocean of content shared daily over its social platforms — so it’s likely looking at how it can co-opt more of the 2.2BN+ humans who use its platform to help it with the hard problems of sifting good comments from bad: A nuanced task which can baffle AI — so, tl;dr, the more human signals you can get the better.

Facebook just lost another user — New Zealand’s privacy commissioner

Mark Zuckerberg’s friend count continues to tick down in the face of a major data misuse scandal gripping the company. The latest individual to #DeleteFacebook is no less than the privacy commissioner of New Zealand.

Writing in The Spinoff, John Edwards accuses Facebook of being non-compliant with the New Zealand Privacy Act — and urges other New Zealanders to follow his lead and ditch the social network.

He says he’s acting after a complaint that Facebook failed to provide a user in New Zealand with information it held on them.

“Every New Zealander has the right to find out what information an agency holds about them. It is a right of constitutional significance,” he writes. “Facebook failed to meet its obligations under the Privacy Act, and when given a statutory demand from my office to produce the information at issue so that I could discharge my statutory duty to the requester to review it, Facebook initially refused to provide it, and then asserted that Facebook was not subject to the New Zealand Privacy Act, and was therefore under no obligation to provide it.

“Our investigation was not able to proceed, and we notified the parties that while we were able to conclude that Facebook’s actions constituted an interference with privacy, and a failure to comply with its obligations both to the requester, and to my Office, there was nothing further we could do.”

Facebook’s strategy of arguing it is not under the jurisdiction of privacy laws in international markets is a standard play for the company, which instructs its lawyers to argue it is only subject to Irish data protection law, given its international HQ is based in Ireland.

(NB: The geographical distance between Ireland and New Zealand is roughly 18,600km — a vast physical span that of course presents no barrier to Facebook’s digital business making money by mining personal data in New Zealand.)

The company’s ‘your local privacy rules don’t apply to our international business’ strategy appears to be on borrowed time, in Europe at least — with some European courts already feeling able to deny Facebook’s claim that Ireland be its one-stop shop for any/all international legal challenges.

The EU also has a major update to its data protection framework incoming, the GDPR, which will apply from May 25 — and which ramps up the liabilities for companies ignoring data protection rules by bringing in a new penalty regime that scales as high as 4% of an organization’s global annual turnover (for Facebook that could mean fines as large as $1.6BN, based on the ~$40.6BN it earned last year — per its 2017 full-year results).

And that’s all before you consider the huge public and political pressure now being brought to bear on the company over data handling and user privacy, as a result of the current data misuse scandal. Which has also wiped off billions in share value — and led to a bunch of lawsuits.

“We applied our naming policy and today have identified Facebook as non-compliant with the New Zealand Privacy Act in order to inform consumers of the non-compliance, the associated risks, and their options for protecting their data,” adds Edwards, joining the anti-Facebook pile-on.

“Under current law there is little more I am able to do practically to protect my, or New Zealanders’ data on Facebook. I will continue to assert that Facebook is obliged to comply with New Zealand law in relation to personal information it holds and uses in relation to its New Zealand users, and in due course a case may come before the courts, either through my Office, or at the suit of the company.”

He goes on to suggest that the 2.5 million New Zealanders who use Facebook could consider modifying their settings and postings on the platform in light of its current non-compliant terms and conditions — or even delete their account altogether, linking to a page on the commission’s own website which explains how to delete a Facebook account.

So, er, ouch.

In response to the commissioner’s actions, Facebook has decided to try to brand the country’s privacy commissioner himself as, er, hostile to privacy…

A Facebook spokesperson emailed us the following statement:

We are disappointed that the New Zealand Privacy Commissioner asked us to provide access to a year’s worth of private data belonging to several people and then criticised us for protecting their privacy. We scrutinize all requests to disclose personal data, particularly the contents of private messages, and will challenge those that are overly broad. We have investigated the complaint from the person who contacted the Commissioner’s office but we haven’t been provided enough detail to fully resolve it. Instead, the Commissioner has made a broad and intrusive request for private data. We have a long history of working with the Commissioner, and we will continue to request information that will help us investigate this complaint further.

This of course is pure spin — and a very clunky attempt by Facebook to shift attention off the nub of the issue: Its own non-compliance with privacy laws outside its preferred legal jurisdictions.

Frankly it’s a very risky PR strategy at a time when it really has become impossible for Facebook to deny quite how comfortable the company was, up until mid-2015, to hand over reams of personal information on Facebookers to third-party users of its developer platform — without requiring that these external entities gain individual-level consent (friends could ‘consent’ for all their friends!).

Hence the Cambridge Analytica scandal.

The non-compliance of Facebook with European data protection laws was in the spotlight yesterday, during an oral hearing in front of the UK parliamentary committee that’s looking into the Cambridge Analytica-Facebook data misuse scandal — as part of a wider enquiry into online disinformation and political campaigning.

Giving testimony to the committee as an expert witness Paul-Olivier Dehaye, the co-founder of PersonalData.IO — a startup service designed to help people control how their personal information is accessed by companies — recounted how he had spent “years” trying to obtain his personal information from Facebook.

Dehaye said his persistence in pressing the company eventually led it to build a tool that lets Facebook users obtain a subset list of advertisers who hold their contact information — though only for a rolling eight week period.

“I personally had 200 advertisers that had declared to Facebook that they had my consent to advertise. One of them is Booz Allen Hamilton, which is an information company,” Dehaye told the committee. “I don’t know how [BAH got my data]. I don’t know why they think they have my consent on this. Where that information comes from. I would be curious to ask.”

Asked whether he was surprised by the data Facebook held on him and also by the company’s reluctance to share this personal information, Dehaye said he had been surprised they “implemented something” — i.e. the tool that gives an eight week snapshot.

But he also argued this glimpse is illustrative because it underlines just how much Facebook still isn’t telling users.

“They implicitly acknowledge that yes they should disclose that information,” said Dehaye, adding: “You have to think that these databases are probably trawled through by a tonne of intelligence services to now figure out what happened in all those different circumstances. And also by Facebook itself to assess what happened.”

“Facebook is invoking an exception in Irish law in the data protection law — involving, ‘disproportionate effort’. So they’re saying it’s too much of an effort to give me access to this data. I find that quite intriguing because they’re making essentially a technical and a business argument for why I shouldn’t be given access to this data — and in the technical argument they’re in a way shooting themselves in the foot. Because what they’re saying is they’re so big that there’s no way they could provide me with this information. The cost would be too large.

“It’s not just about their user base being so large — if you parse their argument, it’s about the number of communications that are exchanged. And usually that’s taken as a measure of dominance of a communication medium. So they are really arguing ‘we are too big to comply with data protection law’. The costs would be too high for us. Which is mindboggling that they wouldn’t see the direction they’re going there. Do they really want to make that argument?”

“They don’t price the cost itself,” he added. “They don’t say it would cost us this much [to comply with the data request]. If they were starting to put a cost on getting your data out of Facebook — you know, every tiny point of data — that would be very interesting to have to compare with smaller companies, smaller social networks. If you think about how antitrust laws work, that’s the starting point for those laws. So it’s kind of mindboggling that they don’t see their argumentation, how it’s going to hurt them at some point.”