Disinformation

They’re getting smarter: How disinformation peddlers avoid regulation

Lorenz Pasion



Disinformation peddlers, who usually make direct, clear, and easy-to-understand claims, are now getting creative at evading Meta’s regulations
At a glance:
  • Bad actors are now applying certain techniques to avoid detection by fact-checkers. One is through intentional misspellings and code words; another is by being deliberately vague about their claims.
  • They are also moving to platforms like YouTube and Telegram, which do not have the same regulations.
  • Following Rappler’s inquiry, a Meta spokesperson told Rappler that the company disabled Lynn Agno’s Facebook account on February 13, 2023, due to repeated violations of its policies on disinformation and harm.

MANILA, Philippines – As Facebook tightens its regulations to curb the spread of disinformation on its platform, bad actors are getting smarter, applying new tactics that let them get away with spreading false claims.

Facebook’s parent company, Meta, has policies to combat disinformation that are publicly available on its website. Users who violate these policies could face restrictions on content creation, monetization, and running ads on Facebook. Meta can also remove content that violates these policies and disable the accounts of users who repeatedly violate its rules.

However, Meta’s enforcement of its rules has not changed the behavior of repeat offenders in the long run.

Instead of changing their behavior, bad actors who repeatedly violate Meta’s rules move to other platforms like YouTube and Telegram, which do not have the same regulations and methods of enforcement.

Rappler noticed that to avoid these consequences, disinformation peddlers, who usually make direct, clear, and easy-to-understand claims, are now getting creative, using the following tactics:

  • Changing the spelling of words 
  • Using code words
  • Being vague about claims

These tactics make it harder for fact-checkers to detect and debunk false claims, allowing bad actors to escape the penalties that Meta applies to pages and accounts found to have violated its regulations.

To address this, Meta is improving its detection capabilities to better enforce policies. 

Steps

According to Meta’s website, its policies on disinformation involve a few steps.

First is recognizing whether content circulating on the internet is false. To do this, Meta partnered with third-party independent fact-checking organizations (like Rappler) to review viral content on its platforms.

When the reviewed content is rated false, altered, or partly false, Meta enforces its policies. It flags the content and warns the platform’s users that it is false, with links to the fact check explaining the rating.

Meta also significantly reduces the distribution of the content on Facebook news feeds and Instagram feeds so that fewer people see it. On Instagram, Meta also filters it from Explore and hashtag pages to make it harder to find.

If a user repeatedly posts content that is rated as false, Meta significantly reduces the reach of the content that the user posts. 

For pages and domains that repeatedly share false content, Meta reduces their distribution and removes their ability to monetize and advertise.

Meta also removes content that goes against Facebook Community Standards or Instagram Community Guidelines, then it applies a strike to the Facebook or Instagram account that posted the content.

Penalties for a strike range from just a warning for the first strike up to a 30-day restriction from creating content for the fifth strike.

After five strikes, Meta may either give a user an additional 30-day restriction from creating content or remove the user’s account, depending on the severity and frequency of the user’s violations.

Meta’s takedown of Facebook user Lynn Agno’s account is an example of an account removal. Agno had been spreading false information about vaccination and COVID-19 since 2020 and had been fact-checked by Rappler 22 times as of writing. Poynter, Vera Files, News5, FactRakers, and Voice of America (VOA) News had also fact-checked Agno several times in the past.

Following Rappler’s inquiry, Meta told Rappler that it disabled Lynn Agno’s Facebook account on February 13, 2023, due to repeated violations of its policies on disinformation and harm.

Upon verification, Rappler found that Agno’s Facebook account had become unavailable. Rappler last saw Agno’s account active on February 10, 2023.

Evasion techniques

Rappler observed several Facebook users applying certain techniques in an attempt to evade Facebook’s detection and censorship.

Numbers in place of letters. Before Meta’s takedown of her account, Rappler noticed Agno replacing letters with numbers in words she wanted to avoid mentioning outright.

In her recent vlog captions, Agno spelled Pfizer as “PF1Z3R” and COVID-19 as “C0v1d-19,” as in the caption of a video she uploaded on May 17, 2022.

Another Facebook user, JIL Review TV, spelled “drug lord” as “drag l0rd” in a post. Rappler has also fact-checked JIL Review TV numerous times for politics-related disinformation.
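These substitutions defeat an exact keyword search but follow predictable patterns. As a rough illustration, here is a minimal sketch in Python (hypothetical, not Meta’s or any fact-checker’s actual tooling) of how a naive keyword filter misses such captions, while a simple character-normalization step catches them:

```python
# Minimal sketch (illustration only): why digit-for-letter misspellings
# slip past naive keyword search, and how normalization can recover them.

# Substitutions seen in the posts above: 0->o, 1->i, 3->e, 4->a, 5->s, 7->t
LEET_MAP = str.maketrans("013457", "oieast")

FLAGGED_KEYWORDS = {"pfizer", "covid"}  # hypothetical watchlist for this demo

def normalize(text: str) -> str:
    """Lowercase the text and undo common digit-for-letter swaps."""
    return text.lower().translate(LEET_MAP)

def naive_match(text: str) -> bool:
    """Exact keyword search, like a simple search for similar claims."""
    return any(kw in text.lower() for kw in FLAGGED_KEYWORDS)

def normalized_match(text: str) -> bool:
    """The same search, run on the normalized text."""
    return any(kw in normalize(text) for kw in FLAGGED_KEYWORDS)

caption = "PF1Z3R and C0v1d-19 exposed"
print(naive_match(caption))       # False: "pf1z3r" never equals "pfizer"
print(normalized_match(caption))  # True: normalization restores both keywords
# Caveat: blanket digit swaps also mangle legitimate digits (the "19" in
# "C0v1d-19" becomes "i9"), which is why real detection is more contextual.
```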

Code words. Aside from misspellings, disinformation peddlers also use code words to avoid directly mentioning the words or objects they actually refer to, and which could be flagged by moderators.

Rappler analyzed the context of the content where these code words were used to determine their meaning.

A glance at internet personality Ron Samaniego’s posts shows his use of this tactic. Samaniego had been fact-checked by Rappler, Vera Files, and News5 several times in the past.

Samaniego used the term “vestsin” to refer to vaccines and also the term “bulutong-unggoy” to refer to monkeypox in a Facebook post published on August 4, 2022.

Disinformation peddlers also use code words to refer to key health officials and other well-known individuals whom they talk about in their posts.

An example of this is how Agno called top US infectious disease official Dr. Anthony Fauci “Fau-fau” in a video dated August 31, 2022. Fauci is the Chief Medical Advisor to US President Joe Biden and his top adviser on the pandemic.

Top officials in the Philippines weren’t spared from this tactic. Former vice president Leni Robredo has been called “Len-len” and “Nanay Lutong” by Facebook user JIL Review TV.

Agno, Samaniego, and JIL Review TV didn’t use the code words every time they referred to these objects and individuals in their videos. However, Rappler noticed that the codes appeared regularly in Agno’s vlogs: she usually said them aloud in her videos, but from time to time also used them in captions and in comments on her videos.

Vague product claims. Rappler noticed that Facebook users who post product claims are now vaguer about them.

Rappler observed this tactic in a newer Facebook post for the product Ha An Duong, for example. The product was previously fact-checked in August 2022 over its claim that it “can cure diabetes,” but more recent versions of the post no longer use the word “cure,” saying only “magpaalam sa diabetes” (say goodbye to diabetes).

Question format. A similar tactic is used by political fake news peddlers, who formulate their outlandish claims in the form of questions.

Facebook user BonzTv Blog 2.0 posted a video in November 2022 with the title, “PBBM ILALABAS NA ANG MGA GINTO SA SWITZERLAND?” (PBBM will release the gold in Switzerland?)

Are these tactics successful?

The tactics that Rappler has observed bad actors using actually give them wiggle room.

Misspellings and code words are difficult to spot and detect. They do not appear in a simple search for similar claims. Content that uses these techniques can be found only if one knows which bad actors used them, exactly how they misspelled the words, and which specific code words they used.
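Fuzzy string matching is one generic way detection tooling can still surface such near-miss spellings, including letter swaps like “drag” for “drug” that character normalization alone would not fix. Here is a minimal sketch (again hypothetical, using Python’s standard difflib and an arbitrary threshold) of how a similarity score could flag variants for human review:

```python
# Minimal sketch (illustration only): fuzzy matching can surface near-miss
# spellings, such as "drag l0rd" for "drug lord", that exact search misses.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

FLAGGED_PHRASE = "drug lord"
THRESHOLD = 0.75  # arbitrary cutoff chosen for this demo

for variant in ["drag l0rd", "drug lord", "landlord"]:
    score = similarity(variant, FLAGGED_PHRASE)
    verdict = "send to human review" if score >= THRESHOLD else "ignore"
    print(f"{variant!r}: {score:.2f} -> {verdict}")
# "drag l0rd" scores about 0.78, above the cutoff, so it gets reviewed;
# "landlord" scores about 0.59 and is ignored.
```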

Code words also make the claims vague and harder to rate, since disinformation peddlers don’t categorically state the real name of the person or object they are referring to. Bad actors who use code words have plausible deniability and can falsely claim they are not referring to a certain person when in fact they are.

This is why posts with code words take fact-checkers more time to check: the full context must be understood to confirm that a code word refers to a specific person or object.

Vague claims allow bad actors to indirectly spread false claims without actually saying them. In the process, they evade Meta’s rules.

False claims formulated to look like questions are also effective. These claims make bad actors look as if they were just asking innocent questions, thus allowing them to avoid accountability. 

Rappler observed that in videos that use this technique, bad actors repeat the question multiple times while presenting several pieces of false information to back it up, establishing a claim phrased as a question as alleged fact.

Plugging the gap

What does Meta do about people who use these techniques to evade its policies? According to Meta, it is improving its detection capabilities to better enforce them.

To Meta, bad actors will always change tactics to try to evade detection, which is why its efforts are concentrated on improving the ways it detects these changes.

“This is an adversarial space and bad actors often change their tactics to try to evade detection – our job is to continue our efforts to enforce our policies,” Meta said in an email to Rappler.  

To keep ahead of the tactics bad actors use to evade detection, Meta said that its teams are always monitoring trends in the way people talk and behave online.

Meta also said its team includes Filipinos who “speak local languages and are deeply familiar with the local context,” and that it also works with partners who can alert it to emerging issues and provide essential context.

On its website, Meta says that it uses enforcement technology and review teams to help detect and review “potentially violating content and accounts on Facebook and Instagram.”

Does Facebook’s method work? A study done by Science Feedback said yes, but only to a certain extent.

The Science Feedback study on Facebook’s interventions on accounts that repeatedly share disinformation and misinformation shows that Meta’s enforcement of its rules results in reduced engagement on posts that violate community standards.

However, the study found that this did not change the behavior of repeat offenders in the long run.

Must Read

Why possible loss of CrowdTangle worries fact-checkers and disinformation researchers

When the usual interventions don’t work, Meta permanently disables accounts it finds to be repeat violators of its disinformation and harm policies, as it did with Agno’s account following Rappler’s inquiry.

Whack-a-mole game

Unfortunately, when one platform stops working for them, serial disinformation peddlers just move to other platforms.

For instance, Agno herself prepared a contingency plan almost a year before Meta permanently disabled her account. Meta had temporarily suspended her account’s ability to post content several times in 2022, suspensions visible as periods of inactivity on her page.

In several videos uploaded in April and May 2022, Agno said that Meta had temporarily suspended her account again, preventing her from uploading content to Facebook. During that time, Agno said, she uploaded her videos and updated her viewers on the YouTube channel “Lynn Agno,” which now has over 11,100 subscribers.

Like Agno, Samaniego also has a YouTube channel, named “Dr. Ronald Samaniego,” with 13,300 subscribers. JIL Review TV also has a YouTube channel of the same name, with 163,000 subscribers.

Other Facebook users who had been fact-checked by Rappler in the past, like Solidong Kaalaman, Filipino Future, Robin Sweet Showbiz, and Showbiz Fanaticz, also have YouTube channels where they post their content.

YouTube has its own policies against disinformation, but the platform is much more lax in implementing them. This has prompted global fact-checkers to demand that YouTube take effective action against disinformation.

Must Read

Global fact-checkers demand from YouTube effective action against disinformation

Despite having policies on vaccine and COVID-19 disinformation, the video streaming platform has yet to apply them to Agno, whose content is mostly vaccine and COVID-19 disinformation.

Agno also said in some of her Facebook videos that she has a private Telegram group where she posts her videos separately to keep her viewers updated about her latest content. 

The Reuters Institute for the Study of Journalism (RISJ) published a paper in October 2021 that showed how users exploit Telegram and its anti-censorship policies. 

Independent non-profit research organization EU Disinfo Lab published a paper in December 2022 that looked at the moderation policies of Telegram and concluded that the app’s current terms of service “overlook any reference to not allowing disinformation on the platform.”

An article published by the Harvard Kennedy School said that the wide differences in disinformation policies among social media websites give disinformation a “networked nature”: posts or messages banned on one platform may grow on other mainstream platforms in the form of links, quotes, or screenshots. This makes the fight against disinformation difficult at the level of the social media ecosystem.

To address this, more research on content moderation at this level is needed to guide platform policy debate around implementing effective interventions to counteract disinformation, the article said.

As long as there is a wide difference in the policies on disinformation among social media websites, the fight against it will be a never-ending cycle.

Emmanuel Vincent, director of French fact-checking organization Science Feedback, told Rappler that platforms could stop serial disinformation actors by adjusting what they recommend users to watch.

“Platforms should not take the risk of driving undue attention to misinformation by amplifying the content from serial misinformation sharers via their recommendation algorithms,” Vincent said.

To better identify serial disinformation sharers, Vincent also recommended that platforms check “all available information” on whether an owner of an account has a history of sharing disinformation, including those shared on other platforms.

“We increasingly see that misinformation sharers try and avoid breaking the rules of a given platform by inviting their followers to visit an external link or their profile on another platform with no moderation policies,” Vincent said.

Disinformation peddlers will just move to other more lax social media platforms – a game of whack-a-mole until a standard policy to combat disinformation is implemented across all platforms. – Rappler.com


Lorenz Pasion

Lorenz Pasion is a researcher at Rappler and a member of its fact-check team that debunks false claims that spread on social media.