
AI-enabled disinformation: Waging an unviable war of scale

Victor Barreiro Jr.


From simple misunderstandings of AI outputs to the active spread of disinformation created with AI's help, there's quite a large battlefield that needs to be covered, likely not just by human fact-checkers

One of the primary problems brought about by artificial intelligence in the era of the internet is the spread of disinformation at scale.

From simple misunderstandings or misinterpretations of AI outputs, to the active spread of disinformation created with the help of AI, there’s quite a large information battlefield that needs to be covered, likely not just by human fact-checkers.

As such, it’s a problem that might only be fought at scale with the help of AI as well.

Misreading AI outputs

One of the simplest, yet likely most pervasive, forms of AI-enabled disinformation comes from people not understanding how the outputs of generative AI tools actually work.

Artificial intelligence isn’t really anything more than a set of tools we ascribe the title of “intelligence” to. A chatbot, for instance, assigns probabilities to words and stitches together patterns from its training data to produce something cohesive enough to pass for a legitimate answer to a given query.

The common caveat for AI chatbot use, however, is to always double-check the work since the answers an AI chatbot can give might not actually be correct at all.

These errors can range from Google’s Bard chatbot producing potentially useless code when asked to help with debugging, to “right-seeming” answers to academic questions, to outright conspiracy theories.

Also worth noting as an example is the limitation of an AI travel itinerary maker whose data stops at 2021 – you will still ultimately have to do the work of finalizing everything against currently available information. You’ve just added an extra step to get a rough idea of the lay of the land.

This “misreading” of AI outputs is why users of these tools need a warning: humans seeking help from chatbots and other generative AI have to be savvy about finding information that corroborates the claims AI makes.

Active disinformation

Aside from the unintentional blunders caused by attributing truth to AI-generated outputs, there’s also a shadier side of things that needs to be discussed.

Active disinformation can now occur on a grander scale thanks to AI-assisted tools, such as image generators and voice cloners that create convincing fakes of people’s faces and voices.

These are used not only to spread fakes on the internet – such as made-up images of the Pope in a puffer coat or of Donald Trump getting arrested – but also to run convincing financial scams.

In a New York Times report on AI-enabled disinformation using chatbots, Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation, said that “Crafting a new false narrative can now be done at dramatic scale, and much more frequently – it’s like having AI agents contributing to disinformation.”

The report mentioned that researchers were able to create convincing pieces of disinformation, such as conspiracy theories, augmented with improved writing and style changes to seem more legitimate and believable.

Fighting back?

This raises the question: how does one fight back against such rampant misuse of AI for disinformation?

For some, the answer is to use – and the term is applied very lightly here – “good” AI to clean up the disinformation actively created by “bad” AI and unscrupulous people.

But as the Chicago Booth Review pointed out in a January thought piece, there’s no easy way to build an AI-enabled fake news detector, because such a system can only go so far as to identify what humans perceive to be fake news.

While we could train a model to try and spot fake news, doing so would require us to determine what is real news and what is fake news. But we already fail at that on a human level, so how can we train the AI properly?

Said the writers of the thought piece, “Ideally, our training data set would include input data matched to output labels of real or fake. But the problem is we don’t actually know which items are real or fake. We don’t know the ground truth; we only know what humans judge to be real or fake. Our label is a label of human judgment.”
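To make that limitation concrete, here is a minimal, purely illustrative sketch in Python – not anything from the Chicago Booth piece or NewsGuard, and with entirely made-up headlines and labels. It trains a simple text classifier, but the labels it learns from are human judgments of “real” or “fake,” which is exactly the point: the model can at best reproduce those judgments at scale, blind spots included.

```python
# Illustrative sketch only: a "fake news detector" trained on human judgments,
# not on ground truth. Headlines and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Local hospital opens new wing after record donations",
    "Miracle pill reverses aging overnight, doctors stunned",
    "Senate passes budget bill after weeks of negotiation",
    "Secret world government confirmed by leaked memo",
]
# These labels record what human reviewers *judged* to be real or fake --
# they are a label of human judgment, not a label of truth.
human_judgments = ["real", "fake", "real", "fake"]

# Train a simple text classifier on those judgments.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, human_judgments)

# The model can now score new headlines, but it only echoes the
# human judgments it was trained on, errors and all.
print(model.predict(["Leaked memo reveals shadowy global cabal"]))
```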

Ultimately, fighting disinformation at scale means relying on human ingenuity to fight a war it may not be able to win – staying constantly alert for falsehoods on the internet and in the real world, because we might not know when AI-enabled disinformation will strike next. That holds, at least, until we get proper guardrails in place and rules enforced on the development of AI and its acceptable uses.

Maybe then we’ll have a fighting chance against the monster that’s been created. – Rappler.com


Victor Barreiro Jr.

Victor Barreiro Jr is part of Rappler's Central Desk. An avid patron of role-playing games and science fiction and fantasy shows, he also yearns to do good in the world, and hopes his work with Rappler helps to increase the good that's out there.