Disinformation in 2023: Growing AI reliance, X’s reckoning, tech guardrails still absent

Gelo Gonzales

As we prepare to tackle disinformation anew in 2024, here are the big trends from this year

MANILA, Philippines – Disinformation remains rampant. One only has to look at the ongoing Israel-Hamas war, which has left observers struggling to tell what is real and what is not. 

Elon Musk’s X, which demolished its trust and safety team early in his ownership stint and adopted other questionable policies, has supercharged this online mess, and cheap AI-powered tools for making videos and photos will only make things worse. 

It’s a problem we will continue to tackle in 2024, but as the year draws to a close, we look back at this year’s disinformation trends in hopes of finding lessons to bring into the fight against falsehoods next year.

Still human-powered, but AI reliance to increase

Everyone has already said it: generative AI tools are making it easier to produce fake images and videos. 

Remember the viral images of the Pope in a fashionable coat, or of Donald Trump getting arrested, that turned out to be fake and AI-generated? As Gordon Crovitz, co-chief executive of NewsGuard, a company that tracks online misinformation, put it: “Crafting a new false narrative can now be done at dramatic scale, and much more frequently – it’s like having AI agents contributing to disinformation.”

DEEPFAKED POPE. Deepfaked images of Pope Francis in a puffy coat go viral online.

However, the AI-powered disinformation deluge didn’t happen this year. While the threat remains, this year’s Freedom on the Net report by human rights advocacy group Freedom House says that much of the work in disinformation this year “is still done by humans,” noting how 47 countries used pro-government commentators to control the narrative, mostly outsourcing the work to private firms to retain plausible deniability. 

The report warns, however, that these networks will “increase their reliance on AI-based tools that can create text, audio, images, and video en masse” as these tools are affordable and easy to use.

“It lowers the barrier of entry to the disinformation market,” the report says, with the spread of AI-generated disinformation aided by already existing online networks. 

“Those with the financial or political incentives will have the capacity to create false and misleading information with these tools, and then leverage existing networks to distribute it at scale.”

The report found 16 countries this year that used AI-based tools to “distort information on political or social issues” or gain political support and smear political opponents. 

It also noted, “Electoral periods and moments of political crisis served as flashpoints for AI-generated content.”

With major elections coming in 2024, most especially the US presidential elections, how AI will be used in disinformation operations will be in the spotlight. At least one US news outlet is calling the upcoming elections the “1st AI elections,” while a recent poll reported in November 2023 found that nearly 60% of US citizens believe that AI will amplify disinformation at a scale never before seen.

AI-generated news anchors spreading propaganda

In February 2023, state media outlets in Venezuela used fake AI-generated American newscasters, produced with an online tool called Synthesia for just $30 a month, to spread false claims that national issues such as hyperinflation and food shortages were overblown. 

In the same month, a report by Graphika found similar use of Synthesia’s tools, this time in a campaign built around a nonexistent news network called Wolf News, fronted by Caucasian-looking AI anchors spreading pro-Chinese Communist Party disinformation. 

The videos were noted to be of poor quality, but these incidents are early attempts at leveraging AI in disinformation campaigns, and such efforts will likely improve along with the technology. 

Must Read

AI-enabled disinformation: Waging an unviable war of scale

In related local news, GMA Network introduced AI-generated sportscasters, which drew some flak, not because they spread disinformation, but because they hit a nerve among people fearing that AI will take over jobs, most especially media workers and media students.

At risk of increasing: nonconsensual deepfakes targeting women

The Freedom on the Net report highlights how AI-generated disinformation campaigns “disproportionately victimize and vilify segments of society that are already under threat.” 

“The overwhelming majority of nonconsensual deepfakes featuring sexual imagery target women, often with the aim of damaging their reputations and driving them out of the public sphere.”

This has been happening since at least 2018, with victims including Indian journalist Rana Ayyub and US disinformation expert Nina Jankowicz. 

Again, the improvements in generative AI will make it easier for disinformation actors to target women with pornographic deepfake videos meant to sully reputations. 

Maggie Wilson vs. trolls

One woman who wouldn’t let herself be bullied this year, though, was Filipino celebrity Maggie Wilson.

Wilson exposed a network of content creators on TikTok that used the same script, flow, hashtags, and graphics, such as screenshots, in an attempt to smear her and her company, Acasa Manila. The creators, as revealed in screenshots shown by Wilson, were paid around P8,000 to post the content in what was a coordinated campaign.

Sharp-eyed observers also dug deeper and found that some of the content creators in the campaign against Wilson were the same ones amplifying content promoting President Ferdinand Marcos Jr.’s programs and activities, with similar scripts, narrative flow, and hashtags. 

Safeguards urgently needed 

Leading the world in AI regulation is the EU with its proposed AI Act, but as of November 2023, it is facing stiff challenges from lobbyists looking to water down proposed rules on powerful foundation AI models. 

AI’s progress is happening at a rapid pace, and its adoption is happening just as quickly. But regulation, as expected, is a slow and arduous journey. This is the dynamic that will shape the kind of generative AI tools that people will have access to not just in 2024, but in the coming years. 

It’s the same kind of dynamic that we saw with social media companies moving fast, and breaking things, which indeed led to broken things. Will AI break even more?

Cambridge Analytica whistleblower Christopher Wylie illustrated how preposterous the lack of regulation on such powerful technologies is:

“If I want to make any other kind of consumer product, like a toaster, there are more safety standards, there are more testing requirements for a toaster to put in your kitchen than AI – because there are no requirements to test your AI for safety or harmful effects. You can just release it. We do not allow any other consumer product to just be released without any kind of safety standards….”

“And now when you look at generative AI, we are on the precipice of something monumental. And we have no safety framework, no regulatory framework. We are so unprepared for this. And we are just allowing an industry to go and just do it, and experiment with society. We are now a massive petri dish for these companies,” Wylie said.

At the Nobel Prize Summit 2023, Rappler CEO and Nobel Peace Prize laureate Maria Ressa and journalist and co-laureate Dmitry Muratov presented a 10-point plan to rein in Big Tech. Ressa warned, “The tech has gone exponential, exponential. And we’re still moving at glacial speed…The window to act is closing.”

This year, among the top solutions being bandied about by Western governments is a labeling system that would immediately inform users when a piece of content is made with AI. 

Meta’s findings for 2023

Meta releases a quarterly report on disinformation networks on its platforms. In the Q3 2023 edition, it included a section on its insights and observations on how the information ecosystem will look in 2024. 

Here’s a summary:

  • Disinformation actors copy-paste “authentic partisan debate” from public or political figures on both sides of the aisle to either “exacerbate” already existing tensions or build an audience to be targeted with different content later.
  • Disinformation operations are decentralizing, dispersing their efforts across many different, often smaller and sometimes niche platforms, so that their networks aren’t instantly dismantled when they are shut down on the bigger platforms.
  • Meta keeps a focus on operations from China and Russia, noting that influence operations may pivot to influencing debates in countries relevant to their interests, especially during elections.
  • Actors may engage in “perception hacking” where they try to exaggerate an image of massive influence in order to “sow doubt in democratic processes.” The campaigns attempt to project power to make people feel powerless about the situation, causing them to lose trust in processes such as the elections. 
  • Meta has seen several operations since 2019 worldwide that saw fake hacktivists claiming to have hacked documents as a way of manipulating public debate, and sowing distrust in electoral institutions – a technique that Meta says is likely to continue. 
  • “Blended operations” have also been a trend, with actors combining traditional networked disinformation efforts with real hacking attempts against real users to co-opt them into their networks, and with the abuse of Facebook’s reporting tools to silence critical accounts or pages. 
Must Read

Meta takes down 5 Chinese networks in 2023, the most of any country this year

A Rappler story also found how disinformation peddlers this year are attempting to circumvent moderation with techniques such as intentional misspellings and code words, or by being vague about their claims. Echoing Meta’s report, the peddlers are also active on other platforms that do not have the same rules.

Weaker pro-China propaganda online, some efforts moving offline

A Rappler investigation found a network of accounts, Groups, and Pages – some presenting themselves as a think tank – which promoted pro-China propaganda and echoed the Chinese government’s statements on key issues and during flashpoint moments, such as the water cannon incident at Ayungin Shoal and China’s installation of missiles on its artificial islands in the South China Sea. 

The network peaked during the latter years of the Duterte administration but grew weaker after the turnover to Marcos Jr., especially as the current government pivoted back to the US. 

With its online influence currently weak, the network has turned to other means, such as guesting on news panels or acting as resource persons at physical events. 

The network’s weakening, as noted in the Rappler report, also mirrors the most recent findings from Meta, which took down two Chinese networks in Q3 2023 – one of which had about 4,800 accounts – that weren’t able to build a significant audience or reach. 

Disinformation in the Israel-Hamas war

Similar to Russia’s invasion of Ukraine before it, the Israel-Hamas war has seen battles on the disinformation front as well, in yet another example of how chaotic the information ecosystem becomes during hyper-volatile, fast-moving events such as this. 

Like the Russia-Ukraine conflict, the Israel-Hamas war has seen video game footage and footage taken from other incidents go viral, passed off as real events from the war.

On the government front, both the EU and the US have questioned the major platforms on how they’re moderating the deluge of inaccurate information, amid accusations that TikTok is pushing pro-Palestinian content. 

In the Philippines, religion has taken on a big role in discussions about the conflict. A significant part of the narratives centers on Scripture passages highlighting Israel’s biblical significance and portrayals of Islam as an inherently violent religion, along with antisemitic posts. There are also voices combatting the antisemitic and Islamophobic narratives, challenging the prevailing stereotypes.

Other countries such as Russia, along with entities in India, have also been reported to be leveraging the situation: the former is pushing the narrative that Ukraine has been selling NATO-provided weapons to Hamas, while right-wing Indian accounts are amplifying anti-Palestine and anti-Islam content. 

Must Read

In the Philippines, religion plays big role in Israel-Hamas discussion

Leveraging crises in other parts of the world for one’s own interests isn’t new. Earlier in February, global security nonprofit the Soufan Center warned that Beijing’s continued support for Russia includes information operations. 

“With Chinese support for Russia showing increased strength of late – rather than abating – it seems likely that Beijing will also increase its support for Moscow through disinformation and influence campaigns,” it said, noting that China sees in supporting Russia an opportunity to position itself as an alternative to a US-led world order. 

As global tensions continue, disinformation observers need to watch how certain countries are using these conflict situations to sway public opinion and influence perception. 

Among the platforms, X, formerly Twitter, is in the hot seat for disinformation related to the war, with researchers publishing several troubling reports, including findings that the platform failed to remove 98% of reported hate speech, and that 74% of all disinformation on the platform comes from “verified” blue-checked accounts. 

A reckoning for X 

Technology Transparency Project (TTP) director Katie Paul, as quoted by NBC News, said that while X had been the industry leader for combating false information in the past, the reverse is true now. “That leadership role has remained, but in the reverse direction,” Paul said.

Aside from gutting the trust and safety team early in his ownership tenure, Elon Musk has made several moves that contribute to the spread of disinformation on the platform. These include purchasable blue check marks that give paying accounts prioritized ranking in the feed, making it harder to find truly reputable information sources on the platform. 

Its “Community Notes” feature, which hands off fact-checking to users, has been ineffective, with Wired noting it “may be vulnerable to coordinated manipulation by outside groups, and lacks transparency about how notes are approved. Sources also claim that it is filled with in-fighting and disinformation, and there appears to be no real oversight from the company itself.”

As hate speech, particularly antisemitic sentiment, rises on the platform, major advertisers have begun to flee. Musk’s changes to X were financially motivated, as many observers believed he had overpaid for the platform. But these changes, also fueled by Musk’s professed “free speech absolutism,” have led to an exodus of big advertisers, with potential losses of up to $75 million by the end of the year.

Even prior to these pullouts, ad revenue at X has declined every month since Musk’s takeover.

Marcos lies persist despite administration’s launch of campaign against fake news

Ferdinand Marcos Jr., himself a beneficiary of disinformation in the 2022 elections, launched a campaign against fake news in June 2023. He said, “fake news should have no place in modern society” and launched a youth-focused “media and information literacy campaign.” 

But in his first year as president, Rappler found and fact-checked 130 claims related to the Marcoses, 48 of them still pertaining to the purported Marcos gold and the bank accounts in which it is supposedly deposited – one of the most persistent debunked lies that helped his presidential campaign. 

After the gold claims, the persistent themes were propaganda attempting to court public support, projects falsely attributed to the Marcoses, and the whitewashing of the legacy of his father, former president Ferdinand Marcos. – Rappler.com


Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.