
What happens when AI reaps what it sows?

Christa Escudero



Screenshots from Al Jazeera

Artificial intelligence tools are trained on internet data, which has become a toxic pit of disinformation and hate speech. How can people make sure AI tools do not put out the same? Experts weigh in.

Artificial intelligence (AI) tools, like the groundbreaking large language model ChatGPT, are trained on data that comes from the digital world – content published on websites, the things people post on social media, and the like.

But at a time when digital data is flooded with disinformation, hate speech, and other harmful content, what can people expect these AI tools to put out?

This is what AI experts grappled with in The AI Series with Maria Ressa, a special edition of Al Jazeera’s Studio B: Unscripted featuring the Rappler CEO and Nobel Peace Prize laureate.

“The way that this technology is configured is that you download the whole of the World Wide Web, but you don’t have to look very hard on the World Wide Web to find all sorts of unpleasantness,” said Mike Wooldridge, author of A Brief History of AI and director of Foundational AI Research at the Alan Turing Institute.


“If you go on some social media platforms they have types of unpleasantness that we could scarcely imagine. And if all of that is absorbed by a large language model, then it’s a seething cauldron of unpleasantness.”

Social media platforms have become a breeding ground for disinformation and hate speech, leading to violence online and offline, the election of authoritarians, and the erosion of democracy. Websites have not been spared either – a Rappler investigative report found that spammy domains have hounded news websites with toxic backlinks, making them look untrustworthy to search engines.

A result of this muddled information ecosystem is AI systems that “[reproduce] the future based on the past,” as pointed out by Urvashi Aneja, founder and director of Digital Futures Lab. 

“What that means is even the data that does exist already reflects historical patterns of injustice, of discrimination against women, against certain religions, against castes.”

Studies have warned that AI may perpetuate existing human biases and exclusions, such as in healthcare. In the 2020 documentary Coded Bias, Black researcher Joy Buolamwini, who investigated why facial recognition systems failed to detect her face, found that they worked only when she wore a white mask.

Aneja also cited the millions of people who lack access to the internet, a gap that further skews the data AI is trained on and, as a result, what it puts out.

As of 2020, the countries with the lowest rates of internet access were mostly in Asia and Africa. India alone had more than 685 million people offline, about half its population, while 90% to 100% of the populations of North Korea, South Sudan, Eritrea, Burundi, and Somalia were disconnected.

How to sow and reap better

For Wooldridge, AI companies must be transparent about the data on which their tools are trained.

While he acknowledged the measures AI companies use to mitigate the risks of training on existing data, such as prompt engineering and content moderation, he called these “the technological equivalent of gaffer tape.”

“If this technology is owned by a small group of actors who develop this technology behind closed doors, we don’t get to see the training data. So you have no idea what this [technology] has been trained on.”

For Aneja, the regulation of the data economy is crucial.


“We made a really bad bargain a decade and a half ago when we said that we are okay with giving up our data to get personalized services. Now we are paying the price for it, where we have a whole global economy that is based on the collection and monetization of our personal data,” she emphasized.

“So unless we don’t address that, we don’t address the misinformation, the disinformation, the information warfare problem.”

Both Wooldridge and Aneja also questioned if some AI systems should exist at all, especially in processes that need human judgment.

“For example, facial recognition technology. Yes, we can make it more inclusive, but do we want facial recognition technology in the first place? Or do we want to be using AI for credit scoring? Do we want to be using AI for job applications? Do we want to be using AI to decide whether someone gets parole or not? No. We don’t want to be using AI in those kinds of very critical decision-making,” Aneja said.

“I do not think it is acceptable that a machine decides autonomously to take a human life,” Wooldridge said about the use of AI in war. “Somebody who takes that decision on a battlefield has to be capable of empathy and understand the consequences of what it means for a human being to be deprived of their life.”

The AI Series with Maria Ressa takes a deep dive into the promises and the dangers of AI, and what the public can do about them. Watch it on Al Jazeera’s Studio B: Unscripted here. – Rappler.com


Christa Escudero

Christa Escudero is a digital communications specialist for Rappler.