
Instagram says growth hackers are behind spate of fake Stories views

If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.

Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.

TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.

A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)

Instagram told us it is aware of the issue and is working on a fix.

It also said this inauthentic activity is not related to misinformation campaigns, but is rather a new growth hacking tactic: accounts pay third parties to try to boost their profiles with fake likes, followers and comments. In this case, that means watching the Instagram Stories of people they have no real interest in, in the hope that this inauthentic activity will help them pass as real accounts and net them more followers.

Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )

A UK social media agency called Hydrogen also noticed the issue back in June — blogging then that: “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. as a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.

So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.

“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”

Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)

It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.

We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.

What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?

Switching your profile to private is the only way to thwart the growth hackers, for now.

Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.

When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”

Instagram and Facebook are experiencing outages

Users reported issues with Instagram and Facebook Sunday morning.

[Update as of 12:45 p.m. Pacific] Facebook says the outage affecting its apps has been resolved.

“Earlier today some people may have had trouble accessing the Facebook family of apps due to a networking issue. We have resolved the issue and are fully back up, we apologize for the inconvenience,” a Facebook company spokesperson said in a statement provided to TechCrunch.

The mobile apps wouldn’t load for many users beginning in the early hours of the morning, prompting thousands to take to Twitter to complain about the outage. #facebookdown and #instagramdown were both trending on Twitter at the time of publication.

Instagram’s new chat sticker lets friends ask to get in on the conversation directly in Stories

Instagram has a new sticker type rolling out today that lets friends and followers instantly tap to start conversations from within Stories. The new sticker option, labelled “Chat,” will let anyone looking at a story request to join an Instagram group DM conversation tied to the post, with the original poster still getting the opportunity to actually approve the requests coming in from their friends and followers.

Instagram’s Direct Messages provide built-in one-to-one and one-to-many private messaging for users on the platform, and are one key way the Facebook-owned social network has fended off would-be competitor Snapchat by anticipating and adapting its features. The company confirmed in May that it was discontinuing development of Direct, its standalone app version of the Instagram DM feature, but it’s clearly still interested in iterating on the core product to make it more engaging for users and better linked to Instagram’s other core sharing capabilities.

Facebook releases community standards enforcement report

Facebook has just released its latest community standards enforcement report and the verdict is in: people are awful, and happy to share how awful they are with the world.

The latest effort at transparency from Facebook on how it enforces its community standards contains several interesting nuggets. While the company’s algorithms and internal moderators have become exceedingly good at tracking myriad violations before they’re reported to the company, hate speech, online bullying, harassment and the nuances of interpersonal awfulness still have the company flummoxed.

For most categories, Facebook enforces its own standards proactively, catching between 90% and more than 99% of violations before users report them. But those numbers are far lower for bullying, where Facebook proactively caught only 14% of the 2.6 million pieces of content it acted on, and hate speech, where the company internally flagged 65.4% of the 4.0 million pieces of hate speech it took action on.

By far the most common violation of community standards — and the one that’s potentially most worrying heading into the 2020 election — is the creation of fake accounts. In the first quarter of the year, Facebook found and removed 2.19 billion fake accounts, an increase of roughly 1 billion over the previous quarter.

Spammers also keep trying to leverage Facebook’s social network — and the company took down nearly 1.76 billion instances of spammy content in the first quarter.

For a real window into the true awfulness that people can achieve, there are the company’s self-reported statistics around removing child pornography and graphic violence. The company said it had to remove 5.4 million pieces of content depicting child nudity or sexual exploitation and that there were 33.6 million takedowns of violent or graphic content.

Interestingly, the areas where Facebook is the weakest on internal moderation are also the places where the company is least likely to reverse a decision on content removal. Although posts containing hate speech are among the most appealed types of content, they’re the least likely to be restored. Facebook reversed itself 152,000 times out of the 1.1 million appeals it heard related to hate speech. Other areas where the company seemed immune to argument were posts related to the sale of regulated goods like guns and drugs.

In a further attempt to bolster its credibility and transparency, the company also released a summary of findings from an independent panel designed to give feedback on Facebook’s reporting and community guidelines themselves.

Facebook summarized the findings from the 44-page report by saying the commission validated that Facebook’s approach to content moderation was appropriate and its audits well-designed, “if executed as described.”

The group also recommended that Facebook develop more transparent processes and greater input for users into community guidelines policy development.

Recommendations also called for Facebook to incorporate more of the reporting metrics used by law enforcement when tracking crime.

“Law enforcement looks at how many people were the victims of crime — but they also look at how many criminal events law enforcement became aware of, how many crimes may have been committed without law enforcement knowing and how many people committed crimes,” according to a blog post from Facebook’s Radha Iyengar Plumb, head of Product Policy Research. “The group recommends that we provide additional metrics like these, while still noting that our current measurements and methodology are sound.”

Finally the report recommended a number of steps for Facebook to improve, which the company summarized below:

  • Additional metrics we could provide that show our efforts to enforce our policies, such as the accuracy of our enforcement and how often people disagree with our decisions
  • Further breakdowns of the metrics we already provide, such as the prevalence of certain types of violations in particular areas of the world, or how much content we removed versus applied a warning screen to, when we include it in our content actioned metric
  • Ways to make it easier for people who use Facebook to stay updated on changes we make to our policies and to have a greater voice in what content violates our policies and what doesn’t

Meanwhile, examples are beginning to proliferate of what regulation might look like to ensure that Facebook takes the right steps in a way that is accountable to the countries in which it operates.

It’s hard to moderate a social network that’s larger than the world’s most populous countries, but accountability and transparency are critical to preventing the problems that exist on those networks from putting down permanent, physical roots in the countries where Facebook operates.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a newly introduced policy that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite CEO Mark Zuckerberg’s claims to the contrary), the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include The University of Maryland, Cornell University and The University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

Facebook says it filed a US lawsuit to shut down a follower-buying service in New Zealand

Facebook is cracking down on services that promise to help Instagram users buy themselves a large following on the photo app. The social network said today that it has filed a lawsuit against a New Zealand-based company that operates one such ‘follower-buying service.’

The suit is in a U.S. court and is targeting the three individuals running the company, which has not been named.

“The complaint alleges the company and individuals used different companies and websites to sell fake engagement services to Instagram users. We previously suspended accounts associated with the defendants and formally warned them in writing that they were in violation of our Terms of Use, however, their activity persisted,” Facebook wrote in a post.

We were not initially able to get a copy of the lawsuit, but have asked Facebook for further details.

This action comes months after a TechCrunch exposé identified 17 follower-buying services that were using Instagram’s own advertising network to peddle their wares to users of the service.

Instagram responded by saying it had removed all the ads, as well as disabled the Facebook Pages and Instagram accounts of the services we reported were violating its policies. However, just one day later, TechCrunch found advertising on Instagram from two of the companies, while a further five were found to be paying to promote policy-violating follower-growth services.

Facebook has stepped up its efforts to crack down on “inauthentic behavior” on its platforms in recent months. That’s included removing accounts and pages from Facebook and Instagram in countries including India, Pakistan, the Philippines, the U.K., Romania, Iran, Russia, Macedonia and Kosovo this year. Higher-profile action has included the removal of UK far-right activist Tommy Robinson from Facebook, and the banning of four armed groups in Myanmar, where Facebook has been much criticized.

Facebook now says its password leak affected ‘millions’ of Instagram users

Facebook has confirmed its password-related security incident last month now affects “millions” of Instagram users, not “tens of thousands” as first thought.

The social media giant confirmed the new information in its updated blog post, first published on March 21.

“We discovered additional logs of Instagram passwords being stored in a readable format,” the company said. “We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.”

“Our investigation has determined that these stored passwords were not internally abused or improperly accessed,” the updated post said, but the company still has not said how it made that determination.

The social media giant did not say how many millions were affected, however.

Last month, Facebook admitted it had inadvertently stored “hundreds of millions” of user account passwords in plaintext for years, said to have dated as far back as 2012. The company said the unencrypted passwords were stored in logs accessible to some 2,000 engineers and developers. The data was not leaked outside of the company, however. Facebook still hasn’t explained how the bug occurred.
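Facebook hasn’t explained the bug, but a common way plaintext passwords end up in logs is that an application writes request payloads or debug output to disk verbatim. Purely as an illustrative sketch (none of these names reflect Facebook’s actual systems), here is a minimal Python logging filter that scrubs password-like fields before a line is ever written:

```python
import logging
import re

# Hypothetical field names; a real system would match its own payload keys.
SENSITIVE_KEYS = ("password", "passwd", "secret", "token")

class RedactingFilter(logging.Filter):
    """Scrub values of sensitive keys from log messages before emission."""

    PATTERN = re.compile(r"\b(%s)=\S+" % "|".join(SENSITIVE_KEYS), re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just with sensitive values scrubbed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("auth")
logger.addFilter(RedactingFilter())

# Naive verbatim payload logging would have written the password to disk.
logger.info("login attempt: user=alice password=hunter2")
# Logged as: "login attempt: user=alice password=[REDACTED]"
```

Redacting at the logging layer catches mistakes made anywhere upstream, which is why it’s a common defense-in-depth measure against exactly this class of leak.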

Facebook posted the update at 10am ET — an hour before the Special Counsel’s report into Russian election interference was set to be published.

When reached, spokesperson Liz Bourgeois said Facebook does not have “a precise number” yet to share, and declined to say exactly when the additional discovery was made.

Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

Flip the “days since last Facebook security incident” back to zero.

Facebook confirmed Thursday in a blog post, prompted by a report by cybersecurity reporter Brian Krebs, that it stored “hundreds of millions” of account passwords in plaintext for years.

The discovery was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after Krebs said logs were accessible to some 2,000 engineers and developers.

Krebs said the bug dated back to 2012.

“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. “We have found no evidence to date that anyone internally abused or improperly accessed them,” he added, though he did not say how the company reached that conclusion.

Facebook said it will notify “hundreds of millions” of users of Facebook Lite (a lighter version of Facebook for regions where internet speeds are slow and bandwidth is expensive), as well as “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.

Krebs said as many as 600 million users could be affected (about one-fifth of the company’s 2.7 billion users), but Facebook has yet to confirm the figure.

Facebook also didn’t say how the bug came to be. Storing passwords in readable plaintext is insecure. Companies like Facebook instead hash and salt passwords — two ways of further scrambling them — to store them securely. That allows a company to verify a user’s password without ever knowing what it is.
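To make the distinction concrete, here is a minimal sketch of hashing and salting using the PBKDF2 implementation in Python’s standard library. The salt length and iteration count are illustrative assumptions, not a description of Facebook’s internal setup:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; real deployments tune this

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only (salt, digest) are stored, never the plaintext."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Because only the salt and digest are stored, neither a database leak nor a stray log line containing them would directly reveal a user’s actual password.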

Twitter and GitHub were hit by similar but independent bugs last year. Both companies said passwords were stored in plaintext and not scrambled.

It’s the latest in a string of embarrassing security issues at the company, prompting congressional inquiries and government investigations. It was reported last week that Facebook’s deals allowing other tech companies to access account data without consent were under criminal investigation.

It’s not known why Facebook took months to confirm the incident, or if the company informed state or international regulators per U.S. breach notification and European data protection laws. We asked Facebook but a spokesperson did not immediately comment beyond the blog post.

The Irish data protection office, which covers Facebook’s European operations, said the company “informed us of this issue” and the regulator is “currently seeking further information.”

PicsArt hits 130 million MAUs as Chinese flock to its photo editing app

If you’re like me, and not big on social media, you’d think the image filters that come built into most apps would do the job. But for many others, especially the younger crowd, making their photos stand out is a huge deal.

The demand is big enough that PicsArt, a rival to photo-editing apps VSCO and Snapseed, recently hit 130 million monthly active users worldwide, roughly a year after it amassed 100 million MAUs. Like VSCO, PicsArt now offers video overlays, though images are still its focus.

Nearly 80 percent of PicsArt’s users are under the age of 35, and those under 18 are driving most of its growth. These “Gen Z” users (the generation after millennials) aren’t obsessed with the next big thing. Rather, they pride themselves on having niche interests, be it K-pop, celebrities, anime, sci-fi or space science, topics that come in the form of filters, effects, stickers and GIFs in PicsArt’s content library.

“PicsArt is helping to drive a trend I call visual storytelling. There’s a generation of young people who communicate through memes, short-form videos, images and stickers, and they rarely use words,” Tammy Nam, who joined PicsArt as its chief operating officer in July, told TechCrunch in an interview.

PicsArt has so far raised $45 million, according to data collected by Crunchbase. It picked up $20 million from a Series B round in 2016 to grow its Asia focus and told TechCrunch that it’s “actively considering fundraising to fuel [its] rapid growth even more.”


PicsArt wants to help users stand out on social media, for instance, by virtually applying this rainbow makeup look on them. / Image: PicsArt via Weibo

The app doubles as a social platform, although its community is much smaller than those of Instagram, Facebook and other mainstream social media products. About 40 percent of PicsArt’s users post on the app, putting it in a unique position: it competes with the social media juggernauts on one hand, while serving as a platform-agnostic app that facilitates content creation for its rivals on the other.

What separates PicsArt from the giants, according to Nam, is that people who do share there tend to be content creators rather than passive consumers.

“On TikTok and Instagram, the majority of the people there are consumers. Almost 100 percent of the people on PicsArt are creating or editing something. For many users, coming on PicsArt is a built-in habit. They come in every week, and find the editing process Zen-like and peaceful.”

Trending in China

Most of PicsArt’s users live in the United States, but the app owes much of its recent success to China, its fastest growing market with more than 15 million MAUs. The regional growth, which has been 10-30 percent month-over-month recently, appears more remarkable when factoring in PicsArt’s zero user acquisition expense in a crowded market where pay-to-play is a norm for emerging startups.

“Many larger companies [in China] are spending a lot of money on advertising to gain market share. PicsArt has done zero paid marketing in China,” noted Nam.

Screenshot: TikTok-related stickers from PicsArt’s library

When people catch sight of an impressive image-filtering effect online, many will ask what tools were behind it. Chinese users find out about the Armenian startup from photos and videos hashtagged #PicsArt, not unlike how VSCO gets discovered through #vscocam on Instagram. It’s through such word of mouth that PicsArt broke into China, where users flocked to its Avengers-inspired disappearing superhero effect last May while the film was in theaters. China is now the company’s second largest market by revenue, after the U.S.

Screenshot: PicsArt lets users easily apply the Avengers dispersion effect to their own photos

A hurdle that all media apps face in China is the country’s opaque guidelines on digital content. Companies in the business of disseminating information, from WeChat to TikTok, hire armies of content moderators to root out what the government deems inappropriate or illegal. PicsArt says it uses artificial intelligence to screen content, and keeps a global moderator team that also keeps an eye on its China content.

Despite being headquartered in Silicon Valley, PicsArt has placed its research and development center in Armenia, home of founder Hovhannes Avoyan. This gives the startup access to engineering talent in the country and neighboring Russia that is much cheaper than what it can hire in the U.S. To date, 70 percent of the company’s 360 employees work in engineering and product development (50 percent of whom are female), an investment it believes helps keep its creative tools up to date.

Most of PicsArt’s features are free to use, but the firm has also looked into getting paid. It rolled out a premium program last March that gives users more sophisticated functions and exclusive content. This segment has already leapfrogged advertising to be PicsArt’s largest revenue source, although in China, its budding market, paid subscriptions have been slow to come.


PicsArt lets users do all sorts of creative work, including virtually posing with their idol. / Image: PicsArt via Weibo

“In China, people don’t want to pay because they don’t believe in the products. But if they understand your value, they are willing to pay, for example, they pay a lot for mobile games,” said Jennifer Liu, PicsArt China’s country manager.

And Nam is positive that Chinese users will come to appreciate the app’s value. “In order for this new generation to create really differentiated content, become influencers, or be more relevant on social media, they have to edit their content. It’s just a natural way for them to do that.”

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that can reliably identify children online.

The detail emerged during a parliamentary committee hearing, as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent children under 13 from signing up to use its messaging platform.

The DCMS committee’s interest here stems from the enquiry it is running into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one such company, relying on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The details of its policy plans remain under wraps, so it’s unclear whether the Home Office intends to include a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit after a lengthy delay. (And that’s ignoring all the hand-wringing about privacy and security risks, not to mention the fact that the checks will likely be trivially easy to dodge for anyone who knows how to use a VPN, or who accesses adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years, after all, the Chinese state has been pushing games companies to age-verify users in order to enforce limits on play time for kids (also apparently in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.

So building yet another database containing children’s ages isn’t perhaps as off-piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.