
Twitter blocks state-controlled media outlets from advertising on its social network

Twitter is now blocking state-run media outlets from advertising on its platform.

The new policy was announced just hours after the company identified an information operation involving hundreds of accounts linked to China as part of an effort to “sow political discord” around events in Hong Kong after weeks of protests in the region. Over the weekend more than 1 million Hong Kong residents took to the streets to protest what they see as an encroachment by the mainland Chinese government over their rights.

Media outlets that rely on government financing and don’t operate independently of the governments that fund them will no longer be allowed to advertise on the platform, Twitter said in a statement. That leaves a big exception for editorially independent outlets like the Associated Press, the British Broadcasting Corp., the Public Broadcasting Service and National Public Radio, according to BBC reporter Dave Lee.

The affected accounts will be able to use Twitter, but can’t access the company’s advertising products, Twitter said in a statement.

“We believe that there is a difference between engaging in conversation with accounts you choose to follow and the content you see from advertisers in your Twitter experience which may be from accounts you’re not currently following. We have policies for both but we have higher standards for our advertisers,” Twitter said in its statement.

The policy applies to news media outlets that are financially or editorially controlled by the state, Twitter said. The company said it will make its policy determinations on the basis of media freedom and independence, including editorial control over articles and video, the financial ownership of the publication, the influence or interference governments may exert over editors, broadcasters and journalists, and political pressure or control over the production and distribution process.

Twitter said the advertising rules wouldn’t apply to entities that are focused on entertainment, sports or travel, but if there’s news in the mix, the company will block advertising access.

Affected outlets have a 30-day grace period before they lose access to Twitter’s advertising products, and the company is halting their existing campaigns.

State media has long been a source of disinformation and was cited as part of the Russian campaign to influence the 2016 election. Indeed, Twitter has booted state-financed news organizations before. In October 2017, the company banned Russia Today and Sputnik from advertising on its platform (although a representative from RT claimed that Twitter encouraged it to advertise ahead of the election).

 

Instagram and Facebook are experiencing outages

Users reported issues with Instagram and Facebook Sunday morning.

[Update as of 12:45 p.m. Pacific] Facebook says the outage affecting its apps has been resolved.

“Earlier today some people may have had trouble accessing the Facebook family of apps due to a networking issue. We have resolved the issue and are fully back up, we apologize for the inconvenience,” a Facebook company spokesperson said in a statement provided to TechCrunch.

The mobile apps wouldn’t load for many users beginning in the early hours of the morning, prompting thousands to take to Twitter to complain about the outage. #facebookdown and #instagramdown were both trending on Twitter at the time of publication.

Twitter to attempt to address conversation gaps caused by hidden tweets

Twitter’s self-service tools for blocking content you don’t want to see, along with users’ growing tendency to delete much of what they post, are making some conversations on the platform look like Swiss cheese. The company says it will introduce added “context” for unavailable content in conversations in the next few weeks, however, to help make these gaps at least less mystifying.

There are any number of reasons why tweets in a conversation you stumble upon might not be viewable: the poster has a private account, the tweet was taken down for a policy violation, it was deleted after the fact, or it contains keywords you have muted.

Twitter’s support account notes that the fix will involve providing “more context” alongside the notice that tweets in the conversation are “unavailable,” which, especially when presented in high volume, doesn’t really offer much help to a confused user.
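Twitter hasn’t said how that context will be structured under the hood, but the idea is straightforward: attach a reason to each hidden tweet and surface a matching explanation instead of a bare “unavailable” notice. Here’s a minimal, purely illustrative sketch in Python; the reason codes and messages are invented for the example, not Twitter’s actual values.

```python
# Hypothetical sketch: mapping reasons a tweet is hidden to user-facing context.
# The reason codes and messages below are invented for illustration; Twitter has
# not published how its added "context" will be structured.
UNAVAILABLE_REASONS = {
    "protected_account": "This Tweet is from a protected account you don't follow.",
    "policy_violation": "This Tweet was removed for violating the Twitter Rules.",
    "deleted_by_author": "This Tweet was deleted by its author.",
    "muted_keyword": "This Tweet is hidden because it contains a keyword you muted.",
}

def context_for(reason_code: str) -> str:
    """Return a short explanation for a hidden tweet, or a generic notice."""
    return UNAVAILABLE_REASONS.get(reason_code, "This Tweet is unavailable.")

if __name__ == "__main__":
    print(context_for("muted_keyword"))   # specific explanation
    print(context_for("unknown_reason"))  # falls back to the generic notice
```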

Last year, Twitter introduced a new process for adding additional context and transparency to why an individual tweet was deleted, and it generally seems interested in making sure that conversations on the platform are both easy to follow, and easy to access and understand for users who may not be as familiar with Twitter’s behind-the-scenes machinations.

Twitter returns after an hour-long outage

After an hour of sweet freedom, the world has been returned to the grasp of Twitter.

At about 2:50 pm ET, the desktop and mobile sites went down, displaying a “Something is technically wrong” error. The app was also not working. The site returned at about 3:45 pm ET, but took a few minutes to regain full functionality.

Twitter’s status page said little more than it was an “active incident.” A spokesperson for Twitter confirmed the outage but referred us to the status page.

After the site returned, Twitter said it was because of an “internal configuration change,” which it has since rolled back.

It’s not the first time Twitter’s had a hiccup in the past few weeks. The social media giant was hit by a direct message outage earlier this month. In fact, between June and July, most of the major internet companies had some form of outage, knocking themselves or other sites offline in the process.

Please tweet about how it was down and how it’s hard to tweet about how Twitter’s down when it is itself down, and the irony therein.

We’ll patiently wait to hear from Twitter about the cause of the outage.

Devin Coldewey contributed.

Instagram’s new chat sticker lets friends ask to get in on the conversation directly in Stories

Instagram has a new sticker type rolling out today that lets friends and followers instantly tap to start conversations from within Stories. The new sticker option, labelled “Chat,” will let anyone looking at a story request to join an Instagram group DM conversation tied to the post, with the original poster still getting the opportunity to actually approve the requests coming in from their friends and followers.

Instagram’s Direct Messages provide built-in one-to-one and one-to-many private messaging for users on the platform, and are one key way the Facebook-owned social network has fended off, anticipated and adapted features from would-be competitor Snapchat. The company confirmed in May that it was discontinuing development of Direct, its standalone app version of the Instagram DM feature, but it’s clearly still interested in iterating on the core product to make it more engaging for users and better linked to Instagram’s other core sharing capabilities.

Facebook releases community standards enforcement report

Facebook has just released its latest community standards enforcement report and the verdict is in: people are awful, and happy to share how awful they are with the world.

The latest effort at transparency from Facebook on how it enforces its community standards contains several interesting nuggets. While the company’s algorithms and internal moderators have become exceedingly good at tracking myriad violations before they’re reported to the company, hate speech, online bullying, harassment and the nuances of interpersonal awfulness still have the company flummoxed.

In most instances, Facebook is able to enforce its own standards, catching between 90% and more than 99% of community standards violations itself. But those numbers are far lower for bullying, where Facebook proactively caught only 14% of the 2.6 million instances of harassment it acted on, and for hate speech, where the company internally flagged 65.4% of the 4.0 million pieces of hate speech it took action on.

By far the most common violation of community standards — and the one that’s potentially most worrying heading into the 2020 election — is the creation of fake accounts. In the first quarter of the year, Facebook found and removed 2.19 billion fake accounts, a spike of roughly 1 billion over the previous quarter.

Spammers also keep trying to leverage Facebook’s social network — and the company took down nearly 1.76 billion instances of spammy content in the first quarter.

For a real window into the true awfulness that people can achieve, there are the company’s self-reported statistics around removing child pornography and graphic violence. The company said it had to remove 5.4 million pieces of content depicting child nudity or sexual exploitation and that there were 33.6 million takedowns of violent or graphic content.

Interestingly, the areas where Facebook is the weakest on internal moderation are also the places where the company is least likely to reverse a decision on content removal. Although posts containing hate speech are among the most appealed types of content, they’re the least likely to be restored: Facebook reversed itself just 152,000 times out of the 1.1 million hate speech appeals it heard. Another area where the company seemed immune to argument was posts related to the sale of regulated goods like guns and drugs.
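For a sense of scale, here is a quick back-of-the-envelope calculation using the figures above (the numbers are Facebook’s; the arithmetic is ours):

```python
# Rough arithmetic using the figures from Facebook's enforcement report cited above.
bullying_actioned = 2_600_000     # bullying/harassment instances acted on
bullying_proactive = 0.14         # share Facebook caught before users reported it

hate_actioned = 4_000_000         # hate speech instances acted on
hate_proactive = 0.654            # share flagged internally

hate_appeals = 1_100_000          # hate speech appeals heard
hate_restored = 152_000           # appeals where Facebook reversed its decision

print(f"Bullying caught internally:    ~{bullying_actioned * bullying_proactive:,.0f}")
print(f"Hate speech caught internally: ~{hate_actioned * hate_proactive:,.0f}")
print(f"Hate speech appeals restored:  {hate_restored / hate_appeals:.1%}")  # ~13.8%
```

In other words, Facebook restores roughly one in seven of the hate speech decisions that get appealed.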

In a further attempt to bolster its credibility and transparency, the company also released a summary of findings from an independent panel designed to give feedback on Facebook’s reporting and community guidelines themselves.

Facebook summarized the findings of the 44-page report by saying the commission found its approach to content moderation appropriate and its audits well designed, “if executed as described.”

The group also recommended that Facebook develop more transparent processes and greater input for users into community guidelines policy development.

Recommendations also called for Facebook to incorporate more of the reporting metrics used by law enforcement when tracking crime.

“Law enforcement looks at how many people were the victims of crime — but they also look at how many criminal events law enforcement became aware of, how many crimes may have been committed without law enforcement knowing and how many people committed crimes,” according to a blog post from Facebook’s Radha Iyengar Plumb, head of Product Policy Research. “The group recommends that we provide additional metrics like these, while still noting that our current measurements and methodology are sound.”

Finally the report recommended a number of steps for Facebook to improve, which the company summarized below:

  • Additional metrics we could provide that show our efforts to enforce our policies, such as the accuracy of our enforcement and how often people disagree with our decisions
  • Further breakdowns of the metrics we already provide, such as the prevalence of certain types of violations in particular areas of the world, or how much content we removed versus applied a warning screen to when we include it in our content actioned metric
  • Ways to make it easier for people who use Facebook to stay updated on changes we make to our policies and to have a greater voice in what content violates our policies and what doesn’t

Meanwhile, examples are beginning to proliferate of what regulation might look like to ensure that Facebook takes the right steps in a way that is accountable to the countries in which it operates.

It’s hard to moderate a social network that’s larger than the world’s most populous countries, but accountability and transparency are critical to preventing the problems that exist on those networks from putting down permanent, physical roots in the countries where Facebook operates.

Indonesia restricts WhatsApp, Facebook and Instagram usage following deadly riots

Indonesia is the latest nation to bring the hammer down on social media, after the government restricted the use of WhatsApp and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, and posting content to Facebook, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on Whatsapp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 injured. Following this, misleading information and hoaxes about the nature of the riots and the people who participated in them allegedly began to spread on social media services, according to local media reports.

Protesters hurl rocks during clash with police in Jakarta on May 22, 2019. – Indonesian police said on May 22 they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital Jakarta overnight after a rally opposed to President Joko Widodo’s re-election. (Photo by ADEK BERRY / AFP)

For Facebook, seeing its services forcefully cut off in a region is no longer a rare incident. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when its services were completely banned for days following terrorist attacks in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp; India is the chat app’s largest market, with over 200 million monthly users.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.

Update 05/22 02:30 PDT: The original version of this post has been updated to reflect that usage of Facebook in Indonesia has also been impacted.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.
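Facebook hasn’t published the mechanics of the new rule, but conceptually it reduces to a simple check: a first violation of the designated policies starts a fixed-length block on Live access. The sketch below is hypothetical; the class, field and function names are invented, and the 30-day window is just the example Facebook gave.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a "one strike" Live restriction. Only the idea (a first
# serious violation triggers a set-period block, e.g. 30 days) comes from
# Facebook's announcement; the data model here is invented for illustration.
LIVE_BAN_PERIOD = timedelta(days=30)

class Account:
    def __init__(self):
        self.live_blocked_until = None

    def record_serious_violation(self, when: datetime) -> None:
        """First offense is enough: start (or extend) the Live block."""
        blocked_until = when + LIVE_BAN_PERIOD
        if self.live_blocked_until is None or blocked_until > self.live_blocked_until:
            self.live_blocked_until = blocked_until

    def can_go_live(self, now: datetime) -> bool:
        return self.live_blocked_until is None or now >= self.live_blocked_until

if __name__ == "__main__":
    account = Account()
    account.record_serious_violation(datetime(2019, 5, 15))
    print(account.can_go_live(datetime(2019, 5, 20)))  # False, inside the 30-day block
    print(account.can_go_live(datetime(2019, 6, 20)))  # True, the block has lapsed
```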

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include the University of Maryland, Cornell University and the University of California, Berkeley, which Facebook said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

After criticism over moderator treatment, Facebook raises wages and boosts support for contractors

Facebook has been repeatedly (and rightly) hammered for its treatment of the content moderators who ensure the site doesn’t end up becoming a river of images, videos and articles embodying the worst of humanity.

Those workers, and the hundreds (if not thousands) of other contractors Facebook employs to cook food, provide security and handle transportation for the social media giant’s highly compensated staff, are getting a little salary boost and a commitment to better address the toll these jobs can take on some workers.

“Today we’re committing to pay everyone who does contract work at Facebook in the US a wage that’s more reflective of local costs of living,” the company said in a statement. “And for those who review content on our site to make sure it follows our community standards, we’re going even further. We’re going to provide them a higher base wage, additional benefits, and more supportive programs given the nature of their jobs.”

Contractors in the U.S. were being paid a $15 minimum wage; received 15 paid days off for holidays, sick time and vacation; and received a $4,000 new child benefit for parents who don’t get paid leave. Since 2016, Facebook has also required that contractors assigned to the company be provided with comprehensive healthcare.

Now, it’s boosting those wages to a $20 minimum in the San Francisco Bay Area, New York and Washington, and to $18 in Seattle.

“After reviewing a number of factors including third-party guidelines, we’re committing to a higher standard that better reflects local costs of living,” the company said. “We’ll be implementing these changes by mid-next year and we’re working to develop similar standards for other countries.”

Those raises apply to contractors who don’t work on content moderation. For contractors involved in moderation, the company committed to a $22 per hour minimum wage in the Bay Area, New York and Washington; $20 per hour in Seattle; and $18 per hour in other U.S. metro areas.

Facebook also said it will institute a similar program for international standards going forward. That’s important, as the bulk of the company’s content moderation work is actually done overseas, in places like the Philippines.

Content moderators will also have access to “ongoing well-being and resiliency training.” Facebook also said it was adding preferences to let reviewers customize how they want to view content — including an option to blur graphic images by default before reviewing them. Facebook will also provide around-the-clock on-site counseling, and will survey moderators at partner sites about what reviewers actually need.

Last month, the company said it convened its first vendor partner summit at its Menlo Park, Calif. offices and is now working to standardize contracts with its global vendors. To ensure that vendors are meeting their commitments, the company is going to hold unannounced onsite checks and a biannual audit and compliance program for content review teams.

Facebook takes its Portal international, adds WhatsApp, Facebook Live support

At its F8 developer conference, Facebook today announced that it is launching its Portal video chat hardware internationally by bringing it to Canada and a select number of European countries, too. This rollout will begin in June in Canada, with Europe following this fall. In the U.S., Portal sells for $199 and $349, depending on the screen size. The company did not announce international pricing.

Portal originally launched only in the U.S. As the company’s co-founder and CEO Mark Zuckerberg noted during his F8 keynote today, the device has done better than the company expected. Given that it launched right in the middle of some of Facebook’s biggest privacy scandals, that’s a bit of a surprise, though we don’t know what Facebook’s own expectations were, of course, or how many devices it has sold.

In addition, Zuckerberg announced that its WhatsApp messenger is also coming to Portal. That means end-to-end encrypted video chats are coming to Portal, something that should reduce some of the privacy concerns around the device. “Now, you can be sure that when you’re having conversations with your friends and family, everything stays between you,” Zuckerberg said, though most current users probably hope that this has always been the case, even when using Portal without WhatsApp.

Later this summer, Portal will also get new AR effects and a story-reading mode. Starting today, Portal can also display photos from Instagram, and Facebook is building a mobile app that lets you display photos on the Portal. You can do that today, but it’s significantly more complicated because you need to create a private album just for Portal.

Coming soon, too, is the ability to send private video messages and Facebook Live support.