At a glance:
- Misinformation and disinformation often spread unchecked in messaging apps because these are private spaces, where trending content evades detection and fact checking.
- The real sources of content in messaging apps are difficult to track, but messages are often passed on as content from friends and family, forcing authorities, experts, and news organizations to compete for credibility and trust.
- Total messaging on WhatsApp and Facebook Messenger, the world’s two most used messaging apps, increased by more than 50% in March 2020, amid the COVID-19 pandemic.
- Despite these concerns, Facebook’s fact-checking efforts are focused on publicly available posts, allowing misinformation and disinformation to spread unchecked on Messenger.
Two weeks into 2020, a rumor spread through messaging services claiming that a case of Severe Acute Respiratory Syndrome (SARS) was confirmed in the Shangri-La Plaza mall in Mandaluyong City.
It was cause for concern: Weeks earlier, health authorities from Wuhan, China confirmed 44 cases of what was then a mystery disease, now known as COVID-19. The claim turned out to be false.
The claim, which was submitted to Rappler for verification, became the first of thousands of claims about COVID-19 fact checked by members of the Coronavirus Facts Alliance, a project of the International Fact-Checking Network (IFCN) at Poynter.
As of January 26, 2021, alliance members, which include fact-check organizations from more than 70 countries, have already fact checked 10,742 COVID-19 related claims. Of these, 1,352, or 13%, are from messaging apps or text messages. Many more rumors could have gone undetected because they are passed along privately from one chat room to another.
As it is, misinformation and disinformation on apps like WhatsApp, Messenger, and Viber – 3 of the most popular in the Philippines – have been a challenge for years now for both the companies that own them and independent fact checkers.
Misinformation is misleading content shared with no intent to harm, while disinformation is shared with a deliberate intent to deceive or harm.
The spread of wrong information on these apps is a concern because their use has been growing exponentially. Since 2014, the number of users of the world’s top 2 messaging apps – WhatsApp and Messenger – has quadrupled globally. This grew further during the pandemic. In March 2020, Facebook reported that total messaging in “countries hit hardest by the virus” increased by more than 50% that month.
WhatsApp and Messenger are both owned by Facebook. Unfortunately, while the social media giant has initiated numerous efforts to curb both misinformation and disinformation within its social media properties, much remains to be done in relation to its messaging apps.
‘You speak in a void’
What makes fact checking in messaging apps so difficult?
Unlike fact checking publicly available videos or posts, which can be found using social media monitoring tools, these messages need to be sent to fact checkers directly for verification. On platforms like Viber and WhatsApp, they’re also protected by end-to-end encryption, which allows users to share text and media without fear of surveillance.
For IFCN Associate Director Cristina Tardáguila, the lack of data is a challenge, both in terms of surfacing content to fact check and knowing whether stories are reaching an intended target. Tardáguila is also the founder of the Brazilian fact-checking platform Agência Lupa and was one of the authors of a study on misinformation on WhatsApp in relation to the 2018 Brazilian presidential elections.
“We have no idea what’s trending on WhatsApp. So it’s really, really hard to decide what you should fact check because you have no idea [about] the viralization of a certain topic,” she told Rappler during a call in October. Fact checkers avoid writing about claims that aren’t viral, lest they inadvertently amplify them.
On public channels, fact checkers can observe how many people like, share, or comment on their stories, which gives them an idea of the number of people they are reaching, Tardáguila said. “In WhatsApp, you speak in a void, right? You’re talking to, maybe just one person? You have no idea how many people will actually read, like, dislike, or comment [on it]. It’s really a weird feeling.”
At the moment, the only way fact checkers spot misinformation and disinformation in messaging apps is through tiplines. Claims that Rappler eventually fact checked were typically screenshots either emailed by concerned netizens for verification or reposted on the main Facebook platform.
Authority’s competition: ‘a friend of a friend’
One challenge that public health communicators, journalists, and fact checkers face when it comes to both misinformation and disinformation in forwarded messages is friends and family, whom, according to studies, people tend to trust more than public sources.
According to Kantar Media, participants in a 2018 study regarded WhatsApp as “safer,” and the friend sharing news, as an endorser. This makes information shared through the chat app more trustworthy than something on the Facebook newsfeed.
Studies presented at Global Fact 7, an online fact-checking conference held in July 2020, found that people tend to believe their friends when verifying information on social media and that some messages on WhatsApp are influential because they come from friends and family.
Who the message is from is also something people in marketing take into consideration. For instance, a fitness brand would choose an ambassador who’s already fit and leads a healthy lifestyle, because he or she would be more credible as someone supporting the product.
“I think the source [of the mis- or disinformation] also matters… because that’s how we do it with brands also; it’s why influencer marketing works. If the source sounds even remotely credible, it will spread like wildfire,” Andrea*, who works for an international tech and marketing firm, told Rappler in a phone interview in August.
The messages that Rappler has fact checked often cite friends of friends or family members. For example, the supposed SARS case in January 2020 was “from a friend of KT.” A rumor that spread in April 2020 about raiding hospitals for personal protective equipment was supposedly verified by “a friend who is the child of the CEO of [The Medical City],” a hospital in Pasig. Another chain message from January 2020 informed co-workers that the sender’s manager “confirmed cases of COVID-19” at PBCom Tower in Makati.
What this means is that newsgroups and fact checkers, and even public sources such as the Department of Health (DOH) and the World Health Organization (WHO) must compete with content shared by “friends,” whether they’re real or, in the case of forwarded messages, complete strangers. The WHO has COVID-19 information centers on Messenger, WhatsApp, and Viber. DOH has one on Viber.
“Establishing authority becomes a competition,” according to Jed Dalangin, senior manager for Product and Experience Development at Certified Digital Marketer (CDM), a company that gives digital marketing training to businesses and individuals. Dalangin spoke with Rappler on the phone in August 2020.
Original source: unknown
While messages are packaged to make it appear that the “source” of a message is somebody known to the sender, in reality, many messages have been forwarded so many times that tracing the original source is difficult.
As of early November 2020, as evidenced by the video below, users could still forward content on Messenger more than 5 times, despite pronouncements by Facebook to the contrary in September.
The message seen being forwarded in the video is a claim that was shared with Rappler through Messenger in April 2020.
Rappler had discussed this lack of limits with Facebook’s Communications team between September and October 2020.
Forward limits were implemented on Messenger in the Philippines in December 2020, 3 months after their initial announcement.
With the limit, users are able to forward a message to 5 people at once before a warning is flashed. After they acknowledge the limit, they can continue forwarding the same message. The label on a forwarded message indicates that a friend has “forwarded a video” or link, but not that it has been frequently forwarded, which is the case on WhatsApp.
Rappler debunked the claim, which was also debunked by other 3rd party fact checkers, including the Spanish Maldita.es and the UK-based Full Fact. The actual video clip was taken in Ecuador and posted by journalist Carlos Vera on Twitter.
The video also shows how forwarded messages appear on Facebook Messenger. Notice that even after passing through a chain of people, a message is never flagged as frequently forwarded, as it would be on WhatsApp.
Further, while the content itself has already been fact checked by several fact-check organizations, the app also does not have labels identifying it as potentially misleading content.
The video shows how a message can be sent to chats with one other person, but Messenger users can message up to 150 people at once. From October 2018 to August 2019, members of the same Facebook group could even chat with up to 250 people all at once.
A few months later, before the Brazilian presidential election, there was a truck drivers’ strike that was organized entirely through WhatsApp. “That was the first moment we saw disinformation on WhatsApp,” said Tardáguila, describing the chaos that ensued. “And that was the first time I saw WhatsApp as very dangerous too.”
These instances pushed the world’s most popular chat app to introduce initiatives meant to curb both misinformation and disinformation on its platform. However, policies for Messenger rolled out at a snail’s pace and only in select countries.
This is problematic because in countries like the Philippines, WhatsApp ranks a distant 4th in popularity, well behind Messenger. On Messenger, users have been deliberately spreading misleading content, using the app as a way of evading Facebook’s disinformation and misinformation policies. – with Gemma Bagayaua-Mendoza/Rappler.com
(To be continued)
Editor’s note: This story previously stated that users in the Philippines could forward content on Messenger more than 5 times at once as of February 2021. This has been corrected to reflect that Facebook Messenger started imposing limits in December 2020. The video in this story has also been corrected to show that there is a label that indicates a friend has “forwarded a video.”
*Name has been changed on the request of the interviewee to protect her clients and company.