
AI can exacerbate not only disinformation, but also surveillance – expert

Christa Escudero


‘Surveillance is the economic engine of the tech industry…. I think we have to read AI as almost a surveillance derivative,’ says Meredith Whittaker, president of messaging app nonprofit Signal

Much has been said about artificial intelligence (AI) having the power to spread disinformation at scale. Can AI also scale the surveillance business model through which internet tools are built?

Tech experts discussed this possibility in The AI Series, a special edition of media outlet Al Jazeera’s Studio B: Unscripted with Rappler CEO and Nobel Peace Prize laureate Maria Ressa.

“Surveillance is the economic engine of the tech industry…. I think we have to read AI as almost a surveillance derivative,” said Meredith Whittaker, president of secure messaging app nonprofit Signal, as she cited a 2012 study that recognized how user data collected by social media platforms can also be used to train AI. 

“It pulls from this surveillance business model. It requires this surveillance data and the infrastructures that are constructed to process and store this data. And it produces more data as it heightens the surveillance ecosystem that we all live in.” 

At the Nobel Prize Summit in May 2023, Ressa similarly slammed the business model of social media platforms, in which users are tracked and targeted to maximize engagement online and influence behavior offline. This model, which she dubbed “surveillance for profit,” has caused the spread of disinformation, hate, and violence online. 

Camille Francois, co-leader of the Innovation Lab on AI and Democracy at Columbia University’s Institute of Global Politics, also flagged the speed at which AI technologies are being deployed without safety checks.


“Back when it was very much still research projects or conversations among practitioners, we kind of had the luxury to ask ourselves: what does it mean – for instance, for disinformation – that now everybody can produce synthetic text? Or what does it mean when we know that there are biases and stereotypes that are embedded in these machines? How would we go about thinking about [their] impact on society?” she elaborated.

“And I think that changes scale in terms of the urgency of those questions when, suddenly, everybody has access to these technologies and they’re suddenly being deployed really quickly in society.” 

In an earlier episode of The AI Series, experts Mike Wooldridge and Urvashi Aneja, together with Ressa, underscored how AI tools like ChatGPT can spread disinformation and hate speech at scale because of how they’re trained on internet data, which is also filled with disinformation and hate speech.


“Those decisions were not made because they were reviewing the scholarship…or recognizing other social consequences,” Whittaker asserted.

“Those decisions were made because every quarter, they need to report to their board positive predictions or results around profit and growth. So we have these powerful technologies being ultimately controlled by a handful of companies that will always put [profit and growth] first.”

On government regulation

Should the use of AI be regulated by the government? Not quite, said Whittaker.

“The solution space, in my view, seems to not go far enough,” she commented, adding that recently passed laws on tech regulation have not attacked the surveillance business model that runs digital platforms.

“The solutions often look a lot like extending surveillance and control to government, expanding the surveillance apparatus of large tech companies to…government actors who will then have a hand in determining what is acceptable speech, what is acceptable content.”

Whittaker pointed out the rise of authoritarian rule all over the world. In the United States, she cited the criminalization of reproductive health care and the banning of books. Meanwhile, in the Philippines, advocates are attacked online and offline for criticizing the government.

Francois suggested a diversity of platforms that are not tied to surveillance capitalism business models and instead operate based on public interest, security, and safety – like Whittaker’s Signal, for instance.


A November 2023 article on the Signal website explained that, by 2025, the platform will need at least $50 million (around P2.8 billion) for it to run, adding that the amount “is very lean compared to other popular messaging apps that don’t respect your privacy.”

In 2023, Meta earned $134.90 billion in revenue while incurring $88.15 billion in costs and expenses.

Whittaker also suggested the “classic answer” of “workers banding together to use their collective leverage to push their employers.”

As an example, she cited the Writers Guild of America (WGA) strike in 2023, during which film and television writers stopped working for several months as they sought, among other demands, regulation of AI use in scriptwriting.

“[W]e should and we are able to invent alternative futures, alternative models, and then to say, indeed, this is not how we want to live with technology,” said Francois.

“It’s not being a Luddite to say these are not models that should continue. Let’s invent alternative futures that are more rights-preserving, that are better for society.”

The AI Series takes a deep dive into the promises and the dangers of AI, and what the public can do about them. Watch it on Al Jazeera’s Studio B: Unscripted.


Christa Escudero

Christa Escudero is a digital communications specialist for Rappler.