Facebook to crack down on groups spreading misinformation

Agence France-Presse


The leading social network indicates it will be tougher on inappropriate content in groups, which may not be seen by the public but which can circulate hoaxes and promote abusive or violent actions

SAN FRANCISCO, USA – Facebook on Wednesday, April 10, ramped up its battle against misinformation, taking aim at groups spreading lies and adding “trust” indicators to news feeds.

Moves outlined by Facebook vice president of integrity Guy Rosen were described as part of a strategy launched 3 years ago to “remove, reduce and inform” when it comes to troublesome content posted across the leading social network’s family of services.

“This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share,” Rosen said.

An array of updates included cracking down on misbehaving groups and those who run them, as well as making it harder to impersonate others.

The leading social network indicated it will be tougher on inappropriate content in groups, which may not be seen by the public but which can circulate hoaxes and promote abusive or violent actions.

When reviewing groups to decide whether they should be taken down, Facebook will more closely scrutinize what posts are approved by their administrators and which are rejected to determine whether social network standards are being violated.

Facebook will also add a “group quality” feature that provides an overview of content that has been flagged, removed or found to be false information, according to Rosen.

Starting Wednesday, if people in a group repeatedly share content deemed to be false by independent fact-checkers, Facebook will reduce that group’s overall news feed distribution, Rosen said.

The internet titan also launched a collaboration with outside experts to find more ways to quickly fight misinformation.

An idea Facebook has been exploring since 2017 involves enlisting members of the social network to pinpoint journalistic sources that corroborate or contradict online content.

Facebook added a section to its Community Standards website where people can track updates made by the social network.

“Over the last two years, we’ve focused heavily on reducing misinformation on Facebook,” Rosen said.

The “trust” indicators to be added to news feeds are developed by the Trust Project, a consortium of news organizations, and offer information on a news organization’s ethics and other standards for fairness and accuracy, according to Facebook.

Facebook also said it would seek to stop impersonations by bringing its “verified badge” to Messenger.

“This tool will help people avoid scammers that pretend to be high-profile people by providing a visible indicator of a verified account,” Rosen said. Messenger is also adding the Forward Indicator which “lets someone know if a message they received was forwarded by the sender.”

Reducing spread of content under review, Instagram changes 

Facebook also said that fake news can spread quickly while the questionable content is still under review. To combat this, Facebook says it is working on a tweak that temporarily reduces the distribution of flagged content while the review is pending, which should help keep it from going viral. The change has begun rolling out in the US and is being tested elsewhere, though Facebook did not specify which regions or countries.

Facebook will also start to show background information about fact-checked images, starting in the US, through the Context Button. Originally launched in April 2018 and now available globally, the button provides more information about the publishers and articles in the News Feed, so readers have readily accessible information to help them decide whether those sources are trustworthy.

Now, images shared on Facebook will also start to carry similar information including “additional details about the Page, fact-checking articles about the image, and where the image is being shared.” 

Facebook’s fact-checking efforts extend to Instagram too. Images given the “false rating” on Facebook may now also be filtered from Instagram and its Explore feed and hashtag pages. Posts that may seem inappropriate but “do not go against Instagram’s community guidelines” may also see a reduction in the Explore feed and hashtag pages.

“For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages,” Facebook says.

The company says it is actively working on more measures to combat potential misinformation on Instagram. – Rappler.com
