Tag Archives: Software

Twitter blocks state-controlled media outlets from advertising on its social network

Twitter is now blocking state-run media outlets from advertising on its platform.

The new policy was announced just hours after the company identified an information operation involving hundreds of accounts linked to China as part of an effort to “sow political discord” around events in Hong Kong after weeks of protests in the region. Over the weekend more than 1 million Hong Kong residents took to the streets to protest what they see as an encroachment by the mainland Chinese government over their rights.

State-funded media enterprises that do not operate independently of the governments that finance them will no longer be allowed to advertise on the platform, Twitter said in a statement. That leaves a big exception for outlets like the Associated Press, the British Broadcasting Corp., the Public Broadcasting Service and National Public Radio, according to BBC reporter Dave Lee.

The affected accounts will be able to use Twitter, but can’t access the company’s advertising products, Twitter said in a statement.

“We believe that there is a difference between engaging in conversation with accounts you choose to follow and the content you see from advertisers in your Twitter experience which may be from accounts you’re not currently following. We have policies for both but we have higher standards for our advertisers,” Twitter said in its statement.

The policy applies to news media outlets that are financially or editorially controlled by the state, Twitter said. The company said it will make its policy determinations on the basis of media freedom and independence, including editorial control over articles and video, the financial ownership of the publication, the influence or interference governments may exert over editors, broadcasters and journalists, and political pressure or control over the production and distribution process.

Twitter said the advertising rules wouldn’t apply to entities that are focused on entertainment, sports or travel, but if there’s news in the mix, the company will block advertising access.

Affected outlets have 30 days before they’re removed from Twitter’s advertising platform, and the company is halting all of their existing campaigns.

State media has long been a source of disinformation and was cited as part of the Russian campaign to influence the 2016 election. Indeed, Twitter has booted state-financed news organizations before. In October 2017, the company banned Russia Today and Sputnik from advertising on its platform (although a representative from RT claimed that Twitter encouraged it to advertise ahead of the election).

 

Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.

The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.

Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.

The most significant language from the decision from the circuit court seems to be this:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”

As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.

“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”

That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to third parties, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billions of users.

As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.

Now the lawsuit will return to U.S. District Judge James Donato in San Francisco, who approved the class action last April, for a possible trial.

Under the privacy law in Illinois, negligent violations are subject to damages of up to $1,000 each, and intentional violations to penalties of up to $5,000 each. Multiplied across the roughly 7 million Facebook users who could be included in the lawsuit, those figures could add up to billions of dollars.
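For a rough sense of scale, here is a back-of-the-envelope sketch of that exposure. It assumes, purely for illustration, one violation per class member; the actual counts would be determined at trial.

```python
# Back-of-the-envelope estimate of potential BIPA damages (illustrative only).
# Assumes one violation per class member, a simplification rather than a court figure.
NEGLIGENT_PENALTY = 1_000      # dollars per negligent violation under BIPA
INTENTIONAL_PENALTY = 5_000    # dollars per intentional or reckless violation
CLASS_SIZE = 7_000_000         # potential Illinois class members cited in the suit

low_end = CLASS_SIZE * NEGLIGENT_PENALTY      # $7 billion
high_end = CLASS_SIZE * INTENTIONAL_PENALTY   # $35 billion

print(f"Potential exposure: ${low_end:,} to ${high_end:,}")
# Potential exposure: $7,000,000,000 to $35,000,000,000
```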

“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”

These civil damages could come on top of the penalty Facebook faces for violating its agreement with the Federal Trade Commission over its handling of private user data. Facebook is potentially on the hook for a $5 billion payout to the U.S. government, which would be one of the largest penalties ever levied against a U.S. technology company. That penalty is still subject to approval by the Justice Department.

Instagram and Facebook are experiencing outages

Users reported issues with Instagram and Facebook Sunday morning.

[Update as of 12:45 p.m. Pacific] Facebook says the outage affecting its apps has been resolved.

“Earlier today some people may have had trouble accessing the Facebook family of apps due to a networking issue. We have resolved the issue and are fully back up, we apologize for the inconvenience,” a Facebook company spokesperson said in a statement provided to TechCrunch.

The mobile apps wouldn’t load for many users beginning in the early hours of the morning, prompting thousands to take to Twitter to complain about the outage. #facebookdown and #instagramdown are both trending on Twitter at the time of publication.

Twitter to attempt to address conversation gaps caused by hidden tweets

Twitter’s self-service tools for blocking content you don’t want to see, along with users’ growing tendency to delete much of what they post, are making some conversations on the platform look like Swiss cheese. The company says that in the next few weeks it will introduce added “context” for content that’s unavailable in conversations, to help make these gaps at least less mystifying.

There are any number of reasons why tweets in a conversation you stumble upon might not be viewable, including that a poster has a private account, that the tweet was taken down due to a policy violation, that it was deleted after the fact or that specific keywords are muted by a user and present in those posts.

Twitter’s support account notes that the fix will involve providing “more context” alongside the notice that tweets in the conversation are “unavailable,” which, especially when presented in high volume, doesn’t really offer much help to a confused user.

Last year, Twitter introduced a new process for adding additional context and transparency to why an individual tweet was deleted, and it generally seems interested in making sure that conversations on the platform are both easy to follow, and easy to access and understand for users who may not be as familiar with Twitter’s behind-the-scenes machinations.

Twitter returns after an hour-long outage

After an hour of sweet freedom, the world has been returned to the grasp of Twitter.

At about 2:50 pm ET, the desktop and mobile sites were down, displaying a “Something is technically wrong” error. The app was also not working. The site returned at about 3:45 pm ET, but took a few minutes to regain full functionality.

Twitter’s status page said little more than it was an “active incident.” A spokesperson for Twitter confirmed the outage but referred us to the status page.

After the site returned, Twitter said it was because of an “internal configuration change,” which it has since rolled back.

It’s not the first time Twitter’s had a hiccup in the past few weeks. The social media giant was hit by a direct message outage earlier this month. In fact, between June and July, most of the major internet companies had some form of outage, knocking themselves or other sites offline in the process.

Please tweet about how it was down and how it’s hard to tweet about how Twitter’s down when it is itself down, and the irony therein.

We’ll patiently wait to hear from Twitter about the cause of the outage.

Devin Coldewey contributed.

Instagram’s new chat sticker lets friends ask to get in on the conversation directly in Stories

Instagram has a new sticker type rolling out today that lets friends and followers instantly tap to start conversations from within Stories. The new sticker option, labelled “Chat,” will let anyone looking at a story request to join an Instagram group DM conversation tied to the post, with the original poster still getting the opportunity to actually approve the requests coming in from their friends and followers.

Instagram’s Direct Messages provide built-in one-to-one and one-to-many private messaging for users on the platform, and are one key way the Facebook-owned social network has fended off would-be competitor Snapchat, anticipating and adapting its features. The company confirmed in May that it was discontinuing Direct, its standalone app version of the Instagram DM feature, but it’s clearly still interested in iterating on the core product to make it more engaging for users and better linked to Instagram’s other core sharing capabilities.

Adopting a ratings system for social media like the ones used for film and TV won’t work

Internet platforms like Google, Facebook and Twitter are under incredible pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, whose official name is the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since, and as longtime filmmakers we have interacted with the MPAA’s ratings system hundreds of times, working closely with it to maintain our filmmakers’ creative vision while keeping parents informed so they can decide whether those movies are appropriate for their children.

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent as to why each film receives the rating it does. The system allows filmmakers to determine if they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions where parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries, including television, video games, and music, have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).

Image: Bryce Durbin / TechCrunch

 Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families – and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube primarily rely on user-generated content (UGC), which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content – instead they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
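For readers who want to check that comparison, here is a quick sanity check of the arithmetic using the approximate figures cited above; the inputs are the article’s estimates, not measured data.

```python
# Sanity check of the CARA vs. YouTube scale comparison (approximate figures from the text).
CARA_HOURS_PER_YEAR = 1_500        # ~600-900 feature films rated annually
YOUTUBE_HOURS_PER_DAY = 720_000    # approximate daily upload volume

hours_uploaded_per_minute = YOUTUBE_HOURS_PER_DAY / (24 * 60)
minutes_to_match_cara = CARA_HOURS_PER_YEAR / hours_uploaded_per_minute
years_to_review_one_day = YOUTUBE_HOURS_PER_DAY / CARA_HOURS_PER_YEAR

print(f"YouTube uploads CARA's annual workload about every {minutes_to_match_cara:.0f} minutes")
print(f"One day of YouTube uploads would take CARA about {years_to_review_one_day:.0f} years to rate")
# Output: ~3 minutes and ~480 years, matching the figures above
```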

 Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like MPAA’s ratings system.” It can never play that role.

This doesn’t mean there are not areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Interestingly, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms. In both cases, bad things happen – the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered. 

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.

Facebook releases community standards enforcement report

Facebook has just released its latest community standards enforcement report and the verdict is in: people are awful, and happy to share how awful they are with the world.

The latest effort at transparency from Facebook on how it enforces its community standards contains several interesting nuggets. While the company’s algorithms and internal moderators have become exceedingly good at tracking myriad violations before they’re reported to the company, hate speech, online bullying, harassment and the nuances of interpersonal awfulness still have the company flummoxed.

In most instances, Facebook is able to enforce its own standards, catching between 90% and more than 99% of community standards violations itself. But those numbers are far lower for bullying, where Facebook caught only 14% of the 2.6 million instances of harassment reported, and for hate speech, where the company internally flagged 65.4% of the 4 million instances users reported.

By far the most common violation of community standards — and the one that’s potentially most worrying heading into the 2020 election — is the creation of fake accounts. In the first quarter of the year, Facebook found and removed 2.19 billion fake accounts, an increase of roughly 1 billion over the previous quarter.

Spammers also keep trying to leverage Facebook’s social network — and the company took down nearly 1.76 billion instances of spammy content in the first quarter.

For a real window into the true awfulness that people can achieve, there are the company’s self-reported statistics around removing child pornography and graphic violence. The company said it had to remove 5.4 million pieces of content depicting child nudity or sexual exploitation and that there were 33.6 million takedowns of violent or graphic content.

Interestingly, the areas where Facebook is weakest on internal moderation are also the places where the company is least likely to reverse a decision on content removal. Although posts containing hate speech are among the most appealed types of content, they’re the least likely to be restored: Facebook reversed itself only 152,000 times out of the 1.1 million hate speech appeals it heard. Another area where the company seemed immune to argument was posts related to the sale of regulated goods like guns and drugs.

In a further attempt to bolster its credibility and transparency, the company also released a summary of findings from an independent panel designed to give feedback on Facebook’s reporting and community guidelines themselves.

Facebook summarized the findings of the 44-page report by saying the commission validated that Facebook’s approach to content moderation was appropriate and that its audits were well designed, “if executed as described.”

The group also recommended that Facebook develop more transparent processes and greater input for users into community guidelines policy development.

Recommendations also called for Facebook to incorporate more of the reporting metrics used by law enforcement when tracking crime.

“Law enforcement looks at how many people were the victims of crime — but they also look at how many criminal events law enforcement became aware of, how many crimes may have been committed without law enforcement knowing and how many people committed crimes,” according to a blog post from Facebook’s Radha Iyengar Plumb, head of Product Policy Research. “The group recommends that we provide additional metrics like these, while still noting that our current measurements and methodology are sound.”

Finally, the report recommended a number of steps for Facebook to improve, which the company summarized as follows:

  • Additional metrics we could provide that show our efforts to enforce our policies, such as the accuracy of our enforcement and how often people disagree with our decisions
  • Further break-downs of the metrics we already provide, such as the prevalence of certain types of violations in particular areas of the world, or how much content we removed versus applied a warning screen to when we include it in our content actioned metric
  • Ways to make it easier for people who use Facebook to stay updated on changes we make to our policies and to have a greater voice in what content violates our policies and what doesn’t

Meanwhile, examples are beginning to proliferate of what regulation might look like to ensure that Facebook takes the right steps in a way that is accountable to the countries in which it operates.

It’s hard to moderate a social network that’s larger than the world’s most populous countries, but accountability and transparency are critical to preventing the problems that exist on those networks from putting down permanent, physical roots in the countries where Facebook operates.

Indonesia restricts WhatsApp, Facebook and Instagram usage following deadly riots

Indonesia is the latest nation to crack down on social media, after the government restricted the use of WhatsApp and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, and posting content to Facebook, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on Whatsapp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 injured. Following this, misleading information and hoaxes about the nature of the riots and the people who participated in them allegedly began to spread on social media services, according to local media reports.

Protesters hurl rocks during a clash with police in Jakarta on May 22, 2019. Indonesian police said they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital overnight after a rally opposed to President Joko Widodo’s re-election. (Photo by ADEK BERRY / AFP)

For Facebook, seeing its services forcefully cut off in a region is no longer a rare incident. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when the service was completely banned for days amid terror attacks in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp; India is the chat app’s largest market, with more than 200 million monthly users.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.

Update 05/22 02:30 PDT: The original version of this post has been updated to reflect that usage of Facebook in Indonesia has also been impacted.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust enough to deal with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack, despite the social network’s efforts to cherry-pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include the University of Maryland, Cornell University and the University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.