Generative AI

OpenAI, Microsoft AI tools generate misleading election images, researchers say

Reuters

AI. Words reading 'Artificial intelligence AI' and miniatures of robot and toy hands are pictured in this illustration taken December 14, 2023.

Dado Ruvic/Reuters

The Center for Countering Digital Hate finds that AI tools are most susceptible to prompts that ask for photos depicting election fraud such as voting ballots in the trash

Image creation tools powered by artificial intelligence from companies including OpenAI and Microsoft can be used to produce photos that could promote election or voting-related disinformation, despite each having policies against creating misleading content, researchers said in a report on Wednesday, March 6.

The Center for Countering Digital Hate (CCDH), a nonprofit that monitors online hate speech, used generative AI tools to create images of US President Joe Biden lying in a hospital bed and election workers smashing voting machines, raising worries about falsehoods ahead of the US presidential election in November.

“The potential for such AI-generated images to serve as ‘photo evidence’ could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections,” CCDH researchers said in the report.

CCDH tested OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney and Stability AI’s DreamStudio, which can each generate images from text prompts.

The report follows an announcement last month that OpenAI, Microsoft and Stability AI were among a group of 20 tech companies that signed an agreement to work together to prevent deceptive AI content from interfering with elections taking place globally this year. Midjourney was not among the initial group of signatories.

CCDH said the AI tools generated images in 41% of the researchers’ tests and were most susceptible to prompts that asked for photos depicting election fraud, such as voting ballots in the trash, rather than images of Biden or former US President Donald Trump.

ChatGPT Plus and Image Creator were successful at blocking all prompts when asked for images of candidates, said the report.

However, Midjourney performed the worst out of all the tools, generating misleading images in 65% of the researchers’ tests, it said.

Some Midjourney images are available publicly to other users, and CCDH said there is evidence some people are already using the tool to create misleading political content. One successful prompt used by a Midjourney user was “donald trump getting arrested, high quality, paparazzi photo.”

In an email, Midjourney founder David Holz said “updates related specifically to the upcoming US election are coming soon,” adding that images created last year were not representative of the research lab’s current moderation practices.

A Stability AI spokesperson said the startup updated its policies on Friday to prohibit “fraud or the creation or promotion of disinformation.”

An OpenAI spokesperson said the company was working to prevent abuse of its tools, while Microsoft did not respond to a request for comment. – Rappler.com
