
FB, Twitter, TikTok, YouTube’s 2022 anti-disinfo efforts ‘full of empty promises’ – group

Gelo Gonzales

FACEBOOK AND TIKTOK. Printed Facebook and TikTok logos are seen in this illustration taken February 15, 2022.

Dado Ruvic/Reuters

Media reforms group Free Press advises reporters covering the tech sector to 'take nothing from the platforms at face value'

MANILA, Philippines – US-based media reform advocacy group Free Press found that the 2022 efforts against disinformation by four major social media platforms, Facebook, Twitter, YouTube, and TikTok, were “weak” and “full of empty promises.”

The group is part of a coalition of more than 60 civil and consumer rights organizations called Change The Terms, which called on the companies this year to implement 15 priority reforms that would “fight algorithmic amplification of hate and lies, protect users across all languages and increase company transparency.”

The coalition asked for the implementation ahead of the US midterm elections, as US elections have in recent years become a test of whether methods and policies against disinformation have improved.

Free Press found the companies’ efforts severely lacking, sharing key findings.

All four companies have failed to provide sufficient data to show whether there are significant gaps in the application and enforcement of their policies. Exacerbating the problem, the group said, is the difficulty in keeping track, as the companies have created a “labyrinth of company commitments, announcements, and policies.”

Meta policies fully meet only two of the 15 demands: banning calls to arms, and applying third-party fact-checkers to political ads. It is important to note that TikTok and Twitter both ban political ads outright. TikTok meets one demand, also banning calls to arms.

Legend: Green face – the company meets the demand in a stated policy; orange face – the company insufficiently or incompletely references the demand in a stated policy; red face – the company fails to meet the demand; * – instances when it was impossible to assess a company’s performance given insufficient transparency

All four companies “fail to close” what they call “newsworthiness” or “public interest” exceptions, which are given to politicians and other prominent users and allow them to post something that may be false. Under the policy, the posts can be kept online, with the companies claiming that what the public figure says is newsworthy. The group calls the policy “arbitrary,” saying it can often be used as a get-out-of-jail-free card.

The video platforms TikTok and YouTube don’t report “denominators” for violative videos, which would give context on how many people were able to view the videos or how long the videos stayed up before they were eventually taken down. The group also said it is a problem when platforms report content takedowns but do not provide the complete picture.

For example, YouTube previously touted having removed more than 4 million violative videos from April to June 2022. But the platform “does not report what the ratio is to all videos that existed on the platform during that period.” Without such context, it is hard to assess what percentage of videos on YouTube are violative.

Complete data needed to back up companies’ claims

“Although tech companies had promised to fight disinformation and hate on their platforms this fall, there is a notable gap between what the companies say they want to do and what they actually do in practice. In sum, platforms do not have sufficient policies, practices, AI or human capital in place to materially mitigate harm ahead of and during the November midterms,” Free Press said. 

“We cannot take these companies at their word. We need transparent records of their implementation of safety mechanisms and application of their own policies.” 

Assessing each company, Free Press said that while Meta’s regular announcements seem promising, “they are just that: promises.”

The group found instances of posts spreading false claims about US electoral fraud, such as those targeting electoral workers, remaining on the platform and “slipping through the cracks.” The group also noted slower work on non-English disinformation.

Meta also eliminated a “Responsible Innovation” team that included civil rights experts, and combined multiple civic integrity teams, which internal sources have said was a cost-cutting move.

The group also found false claims about US electoral fraud on TikTok, with one user repeatedly able to rejoin even as TikTok took action. Twitter policies were found to be lacking in detail, and “there are discrepancies between Twitter election-related blog posts and Twitter policies in the Terms of Service.”

The group also said, “YouTube has the largest gaps in policy protections. The company lacks transparency on its approach to violative content. There are also few specifics on moderation and enforcement practices (such as the existence of civic-integrity teams, moderation across languages, etc.).”

“While they claim to have crafted and enforced new policies addressing the spread of such toxic content, these claims are difficult for independent auditors to verify. The companies’ websites are tangles of contradictory policies and standards that are difficult to unravel. Reporters covering the technology sector should take nothing from the platforms at face value. Every claim must be backed by empirical evidence and a full-field view of its impact.” – Rappler.com



Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.