
The spread of terror on social media: ‘It isn’t going to get a lot better than this’


Social media companies like Facebook and Google have been slammed in the wake of the Christchurch massacre for failing to stop the spread of violent footage posted by the shooter. 

 

Pressure is mounting on them to do more after the terrorist’s video quickly spread across the internet on Friday, but former tech employees say it is not going to get any better.

 

Yesterday, Facebook said it removed 1.5 million videos of the New Zealand shootings, including 1.2 million that were blocked from being posted. That implies 300,000 copies of the video were available to watch for at least short periods of time before Facebook managed to pull them down.

 

For hours after the attack, the video circulated on other popular content-sharing sites, YouTube and Twitter, as well as lesser-known video streaming sites.

Prime Minister Scott Morrison has taken aim at social media companies for not doing enough to prevent the spread of Friday’s live-streamed attack. He demanded that tech giants provide assurances that they would prevent attacks from being shown online, suggesting live streaming services could be suspended.

 

Opposition leader Bill Shorten also took aim at social media sites for hosting hate speech and not being accountable for the spread of anti-social content.

 

Criticism has come from all corners, but serious questions remain about whether these sites can reliably be tasked with preventing another horrific live streamed video from being so widely circulated again.

 

‘IT ISN’T GOING TO GET A LOT BETTER’

 

These companies use a combination of algorithms, human moderators and user reporting to police content. But given the huge volume of postings during an event like Christchurch, blocking everything in real time is currently an impossible task.

 

Alex Stamos is a computer scientist and the former chief security officer at Facebook. The day after the massacre he took to Twitter to lament the immense difficulty faced by a company like Facebook when so many users willingly post the violating footage.

 

“Millions of people are being told online and on TV that there is a video and a document that are too dangerous for them to see, so they are looking for it in all the normal places,” he said, sharing a picture which showed a spike in Google searches for “New Zealand shooting” on Friday.

 

“So now we have tens of millions of consumers wanting something and tens of thousands of people willing to supply it, with the tech companies in between.”

Even if the company’s filtering systems were bulletproof, questions still remain about what should be allowed for legitimate reporting purposes and how to differentiate, he wrote.

 

In short, “It isn’t going to get a lot better than this.”

In fact, it will likely get worse.

 

When it comes to Facebook, others were quick to point out that recent changes announced by CEO Mark Zuckerberg to introduce encrypted messaging and ostensibly boost privacy on the platform will limit the company’s ability to pull down infringing content.

 

“End-to-end encryption prevents anyone — including us — from seeing what people share on our services,” Zuckerberg said earlier this month.

 

According to former Facebook exec Antonio Garcia Martinez, a cynic might see this as a way for Facebook to protect itself against this kind of criticism.

 

“Zuck’s recent statements about encryption, interpreted uncharitably, are a way to get out from under this content moderation curse forever, with an idealistic sheen,” he wrote on Twitter this morning.

 

“By the way, I’m told the video is still circulating on WhatsApp, and there’s nothing FB can do about it due to e2e (end-to-end encryption),” he added.

 

Tech firms have long struggled to balance their ethos of supporting free speech with the need to remove and prevent the spread of terrorist content.

 

In 2016, Google, Facebook, Twitter and Microsoft announced they had teamed up to create a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism.

 

Known as perceptual hashing, the system means that when one company takes down a piece of violating content, other companies can use the shared hash to identify and remove the same content on their own platforms.
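The companies’ actual fingerprinting algorithms are proprietary, but the general idea can be sketched with a toy average-hash in Python. Everything here (the 8×8 grayscale grid, the function names) is an illustrative assumption, not the real scheme: each image is reduced to a short bit string, and near-duplicate copies produce hashes that differ in only a few bits, so they can be matched without storing the footage itself.

```python
import random

def average_hash(pixels):
    """Toy perceptual hash: bit i is 1 if pixel i is brighter than the mean.

    Real systems first normalise the image (resize, grayscale); here we
    assume `pixels` is already a flat list of grayscale values.
    """
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits; small distance means near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

random.seed(0)

# An "original" 8x8 image, a re-encoded copy with slight brightness noise
# (as happens when a video is re-uploaded), and an unrelated image.
original = [random.randint(0, 255) for _ in range(64)]
reencoded = [min(255, max(0, p + random.randint(-5, 5))) for p in original]
unrelated = [random.randint(0, 255) for _ in range(64)]

h_orig = average_hash(original)
d_same = hamming_distance(h_orig, average_hash(reencoded))
d_diff = hamming_distance(h_orig, average_hash(unrelated))

# The re-encoded copy stays close to the original hash; the unrelated
# image does not, so a distance threshold can flag likely duplicates.
print(d_same, d_diff)
```

Because matching is done on hashes rather than exact bytes, small edits such as re-compression survive the comparison; heavier edits (cropping, mirroring, overlays) can push the distance past the threshold, which is one reason re-cut copies of the Christchurch video kept slipping through.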

 

But like other systems designed to improve content moderation, it is imperfect, locked in a never-ending game of cat and mouse with users intent on sharing the content.

And it’s a problem that doesn’t look like it will go away any time soon.

 

Facebook CEO Mark Zuckerberg has signalled major changes to come to Facebook. Picture: Marcio Jose Sanchez

 

 

 

 
