Social media platforms attempt to moderate spread of New Zealand shooting video

Kyle Chua


The incident has once again called into question major social media platforms’ ability to police harmful content

MANILA, Philippines – Major social media platforms are under scrutiny after disturbing footage of the attacks in Christchurch, New Zealand, surfaced and quickly spread online.

One of the gunmen livestreamed himself on Facebook walking into the mosque and opening fire. Footage from the stream later found its way onto other platforms such as Instagram, Twitter, YouTube, and Reddit.

These platforms moved quickly to curb the video's spread. However, given the viral nature of social media, copies could still be found and viewed hours after the incident.

BuzzFeed reporter Ryan Mac, for instance, noted that while YouTube's algorithm and moderation team flagged the videos as sensitive, they could still be viewed by users who clicked through the warning.

“Please know we are working vigilantly to remove any violent footage,” YouTube tweeted.

Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said the company has suspended the shooter’s Facebook and Instagram accounts, banned the video from both platforms, and is removing any support for or praise of the crime as soon as it is made aware.

Twitter similarly said it has suspended an account related to the shooting and is working to remove the video from the platform entirely.

As of writing, YouTube, Facebook, and Twitter have reportedly wiped most copies of the video from their platforms, though exactly when they took it down and how long the process took remains unclear.

Reddit, meanwhile, banned the forums “gore” and “watchpeopledie,” where users had posted and commented on the videos.

“Any content containing links to the video stream are being removed in accordance with our site-wide policy,” the popular message board platform told The Washington Post in a statement.

The incident has once again called into question major social media platforms’ ability to police harmful content.

“While Google, YouTube, Facebook and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization, told CNN.

Facebook’s artificial intelligence tools and human moderation team allegedly failed to detect the video while it was streaming; the company was only alerted to it by New Zealand officials.

CNN law enforcement analyst Steve Moore said the video could inspire others to do the same.

“What I would tell the public is this: Do you want to help terrorists? Because if you do, sharing this video is exactly how you do it,” Moore said.

New Zealand police have also asked social media users to stop sharing footage of the incident.

The problem is that, as with anything posted on the Internet, people have likely already made copies and can repost the video on other sites at any time. – Rappler.com
