press freedom

A brewing storm: Journalists being sidelined in the age of AI, and AI-enabled disinfo

Gelo Gonzales



What does the future hold for our information ecosystem as journalists face layoffs in 2023, just as AI takes a stronger hold?

On Wednesday, May 3, Maria Ressa sits down with Cambridge Analytica whistleblower Chris Wylie for a talk centered on what has become the most talked-about tech of 2023: artificial intelligence, a conversation spurred by the release of the AI chatbot ChatGPT on November 30, 2022.

Artificial intelligence comes in an almost limitless number of forms, but ChatGPT put the technology front and center, making us humans feel like we’re talking to another very, very knowledgeable person merely typing away at the other end of the screen.

We all know that’s not the case, but it’s become the most visceral, most readily appreciable example of how far AI technology has come. 

On World Press Freedom Day, we ask: what does this technology mean for journalists?

An existential threat?

Already, there are signs that ChatGPT and similar generative AI tools are replacing journalists.

On the heels of the closure of BuzzFeed News, Gizmodo staff writer Lucas Ropek rounds up how this has been happening so far, and calls on journalists to see the technology as an existential threat. Journalists have been marveling at the spectacle of AI, he says, but he warns of the “corporate shift towards media automation.”

BuzzFeed CEO Jonah Peretti was quoted as saying the company is pivoting to “bring AI enhancements to every aspect of our sales process.” Insider global editor-in-chief Nicholas Carlson encouraged his newsroom to experiment with AI, calling it a “bicycle of the mind.” A week later, the company laid off 10% of its staff, including staff writers.

CNET, which was earlier caught experimenting with AI without clearly declaring its use, also laid off 10% of its staff. Hours later, it announced it had moved its editor-in-chief to the new role of senior vice president of AI content strategy and editor-at-large.

Corporate media often denies that these layoffs are caused by the availability of these AI tools. One has to ask: who really benefits from the adoption of these AI tools, and from having fewer human heads to shoulder the work? And are things moving faster than we expected?

Perhaps there is indeed an economic downturn, but the presence of AI tools – unproven, prone to hallucination – may have emboldened the media suits and made writers dispensable, at least from a corporate perspective.

Forbes has a more complete list of media layoffs this year.

On the other end, we have the whole disinformation industry – uninhibited by frivolous things such as ethics – ready to pounce.

‘Unviable war of scale’

Rappler desk editor Victor Barreiro Jr. characterized AI-enabled disinformation as an “unviable war of scale.”

Barreiro explains, “Active disinformation can now occur on a grander scale thanks to AI-assisted tools such as AI image manipulation and voice cloners to create convincing fakes of people’s faces and voices. 

“These are used to not only spread fake things on the internet – such as made-up images of the Pope in a coat or of Donald Trump getting arrested – but are also being used in convincing financial scams.

“In a New York Times report on AI-enabled disinformation using chatbots, Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation, said that ‘Crafting a new false narrative can now be done at dramatic scale, and much more frequently – it’s like having AI agents contributing to disinformation.’”

There’s also another NewsGuard study, this one on the rise of a new generation of ChatGPT-powered content farms – clickbait sites that will publish anything to get clicks or eyes on their online ads.

Perhaps unsurprisingly, among the sites found were some in Tagalog, as well as in six other languages. The Philippines has one of the most active online populations (4th worldwide in time spent on social media), which is but one factor in why Cambridge Analytica had earlier used the country as a “petri dish,” according to Chris Wylie, for its psychological behavioral models for ad targeting.

With the mention of CA, we come to our final point. AI is fancy but it’s also nothing new. 

Machine learning was already in use to detect patterns in the massive amount of Facebook user data obtained by CA. To jog your memory, CA obtained that data via a personality test app built by a third-party researcher it partnered with.

AI, ChatGPT included, will continue to require more data to function well.

Optimistically, maybe we are now more aware of our rights to our personal data, and of whether we give it to giant tech companies.

Look further upstream

But more than the end product – the AI-generated fake meme you see on your feed – we have to train ourselves to look further upstream, as Ressa has said before: at the giant machine siphoning our data and pumping pollution into our information ecosystem, just so it can become an even bigger machine in the future.

There’s a storm brewing. Journalists are losing jobs, probably to machines. The disinformation industry will use the power of AI, free from ethics. Fake news on social media was already like an invisible atom bomb exploding in our information ecosystem, Ressa said in her Nobel speech.

What now, with AI?

We borrow from her speech again:

“It’s an arms race in the information ecosystem. To stop that requires a multilateral approach that all of us must be part of. It begins by restoring facts. We need information ecosystems that live and die by facts.”

With AI and algorithms looming larger, we need journalists more than ever – perhaps using AI to fight fire with fire, and scrutinizing these very tools in all their aspects. Certainly, news of journalists being sidelined by AI and algorithms can’t be a good start. – Rappler.com


Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.