
Facebook AI to sort content for human moderators

Kyle Chua

FACEBOOK. In this file illustration photo taken on March 25, 2020, a Facebook app logo is displayed on a smartphone in Arlington, Virginia

Photo by Olivier Douliery/AFP

Posts are prioritized for review based on three criteria: their virality, severity, and the likelihood that they’ll violate the site’s policies

Facebook announced on Friday, November 13, that it’s now employing the help of artificial intelligence (AI) to prioritize posts that its human content moderators go through. 

While the social media giant has used technology in its content moderation before, the new AI implementation reportedly makes it easier to deal effectively with violating content and bad actors on the site. 

Before, Facebook’s human review team went through posts chronologically as they were reported by the platform’s users. Now, with the new AI, posts are sorted and prioritized for review based on three criteria: their virality, severity, and the likelihood that they’ll violate the site’s policies. 

Virality looks at how much a potentially dangerous post is being shared or liked. Those with more traction will be prioritized over those with little to no views or shares. 

Meanwhile, severity is related to content that can cause real-world harm. For instance, terrorism, suicide and self-injury, and child exploitation will be prioritized over spam. 

Lastly, the AI will assess how likely a piece of content is to violate Facebook’s policies by identifying signals similar to those of content that violated the rules in the past. 
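The shift from chronological review to criteria-based review can be pictured as a scoring function over the reported-post queue. The sketch below is purely illustrative: the weights and the example posts are assumptions, since Facebook has not published its actual scoring formula.

```python
# Illustrative moderation queue. Weights and example posts are
# assumptions, not Facebook's real values.

def priority_score(virality: float, severity: float, violation_likelihood: float) -> float:
    """Combine the three signals (each assumed normalized to [0, 1]) into one score.

    Severity is weighted highest here so that content tied to real-world harm
    (e.g. terrorism, self-injury) outranks widely shared but milder posts.
    """
    return 0.5 * severity + 0.3 * violation_likelihood + 0.2 * virality

# Hypothetical reported posts: (label, virality, severity, violation likelihood)
reports = [
    ("spam link",        0.9, 0.1, 0.8),
    ("self-injury post", 0.4, 1.0, 0.9),
    ("mild insult",      0.1, 0.3, 0.4),
]

# Instead of reviewing oldest-first, sort the queue by score, highest first.
queue = sorted(reports, key=lambda r: priority_score(*r[1:]), reverse=True)
print([label for label, *_ in queue])
# → ['self-injury post', 'spam link', 'mild insult']
```

Under these weights the self-injury post jumps ahead of the much more viral spam link, which matches the article's point that severity dominates reach.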

As to how the AI works, Facebook Community Integrity software engineer Chris Palow explained that it uses a pre-existing system, called “whole post integrity embeddings.”

What this system does is it combines all the elements of a post, from the image to the text to the user who posted it, and analyzes it as a whole, bringing context and allowing for better judgment calls. 
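The "whole post" idea can be sketched as: embed each element of a post, join the embeddings into a single vector, and let one downstream classifier judge the post in context. The embedding function below is a toy, deterministic stand-in; Facebook's actual models and feature set are not public.

```python
# Toy sketch of analyzing a post "as a whole": one joint vector instead of
# three separate checks. The hash-based embedding is a stand-in for a
# learned model, not how Facebook computes embeddings.
import hashlib

def toy_embedding(data: str, dim: int = 4) -> list[float]:
    """Deterministic stand-in for a learned embedding model."""
    digest = hashlib.sha256(data.encode()).digest()
    return [b / 255 for b in digest[:dim]]

def whole_post_vector(image_desc: str, text: str, user_id: str) -> list[float]:
    # Concatenate per-element embeddings so a single classifier sees the
    # image, the text, and the poster together, with shared context.
    return toy_embedding(image_desc) + toy_embedding(text) + toy_embedding(user_id)

vec = whole_post_vector("photo of a crowd", "come to the rally", "user123")
print(len(vec))  # → 12: one combined vector for the downstream classifier
```

The design point is that a caption that is harmless on its own can change meaning next to a particular image or poster history, which is why the elements are scored jointly rather than separately.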

Despite this, Palow noted that the algorithm is not perfect and still struggles with topics that require more context than the system can analyze, such as bullying. 

“It’s important to know that all content violations will receive some level of review. Now, we’re just using our system to prioritize and make this better,” Ryan Barnes, Facebook’s Product Manager for Community Integrity, pointed out in a media briefing on Tuesday, November 17.

She added that Facebook will rely more on automation moving forward to lighten the load of the more than 15,000 human moderators it employs around the world.

Facebook moderators review content that has been flagged by the AI or reported by users and decide whether it violates the site’s policies. – Rappler.com
