OpenAI seeks to allay election meddling fears in blog post

Reuters

OPENAI. A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023.

Dado Ruvic/Reuters

OpenAI says that in the US, which will hold presidential elections this year, it is working with the National Association of Secretaries of State, an organization that focuses on promoting effective democratic processes such as elections.

SAN FRANCISCO, USA – Artificial intelligence lab OpenAI published a blog post Monday, January 15, seeking to address fears that its technology will be used to meddle with elections, as more than a third of the globe prepares to head to the polls this year.

The use of AI to interfere with election integrity has been a concern since the Microsoft-backed company released two products: ChatGPT, which can mimic human writing convincingly, and DALL-E, whose technology can be used to create “deepfakes,” or realistic-looking images that are fabricated.

Those worried include OpenAI’s own CEO Sam Altman, who testified before Congress in May last year that he was “nervous” about generative AI’s ability to compromise election integrity through “one-on-one interactive disinformation.”

The San Francisco-based company said that in the United States, which will hold presidential elections this year, it is working with the National Association of Secretaries of State, an organization that focuses on promoting effective democratic processes such as elections.

ChatGPT will direct users to CanIVote.org when asked certain election-related questions, it added.

The company also said it is working to make it more obvious when images are generated with DALL-E, and plans to place a “cr” icon on such images to indicate they are AI-generated, following a protocol created by the Coalition for Content Provenance and Authenticity.

It is also working on ways to identify DALL-E-generated content even after images have been modified.

In its blog post, OpenAI emphasized that its policies prohibit its technology from being used in ways it has identified as potentially abusive, such as creating chatbots that pretend to be real people or discouraging voting.

It also prohibits DALL-E from creating images of real people, including political candidates, it said.

The company faces challenges policing what is actually happening on its platform.

When Reuters last year tried to create images of Donald Trump and Joe Biden, the request was blocked and a message appeared saying it “may not follow our content policy.”

Reuters, however, was able to create images of at least a dozen other U.S. politicians, including former Vice President Mike Pence. – Rappler.com
