YouTube to require disclosures for AI video uploads

Victor Barreiro Jr.

YouTube says the policies are part of an evolving approach towards AI, which it acknowledged would 'introduce new risks and will require new approaches'

MANILA, Philippines – YouTube will be instituting new policies related to generative artificial intelligence-powered videos and content on its service.

In a blog post on Tuesday, November 14, YouTube said the policies were part of an evolving approach towards AI, which it acknowledged would “introduce new risks and will require new approaches.”

Among these are disclosure requirements for AI-powered video content, the ability to request removal of content in which a person’s face or voice is digitally generated or altered without permission or used to misrepresent them, and enhancements to its moderation processes that use AI to stamp out bad actors.

Disclosures for AI-powered content

In an effort to inform viewers on YouTube of synthetic or AI-powered content, the company said it would require disclosures for “altered or synthetic content that is realistic, including using AI tools.”

“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

YouTube sees such disclosures as especially important for content discussing sensitive topics, including political or religious matters, ongoing conflicts, or public health crises.

Any repeated failures to disclose this information will make such content subject to removal. Infractions may result in suspension from the YouTube Partner Program, among other penalties, though YouTube says it will work with creators before such a rollout “to make sure they understand these new requirements.”

Viewers, meanwhile, will see labels on videos that creators have disclosed as containing altered or synthetic content. Videos on sensitive topics will carry a more prominent label when such disclosures apply.

Some types of content, however, even if labeled correctly, may still be removed by YouTube at its discretion. These include “synthetically created video that shows realistic violence… if its goal is to shock or disgust viewers.”

Fighting misrepresentation

Aside from this, YouTube will also allow users to report AI-generated or other synthetic or altered content made to look or sound like an identifiable individual using the service’s privacy request process.

According to the announcement, YouTube will weigh a number of factors in determining whether labeling or removal is appropriate, such as whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether the content features a public official or a well-known individual.

YouTube’s music partners can also request the removal of AI-powered content that “mimics an artist’s unique singing or rapping voice.” The announcement does not say, however, whether music partners can take down parody content made using AI renditions of artists’ voices.

Granting removal requests, however, will be subject to various considerations, such as whether the content is the subject of news reporting, analysis, or critique of the synthetic vocals. YouTube also said these removal requests “will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments. We’ll continue to expand access to additional labels and distributors over the coming months.”

Acknowledging AI, developing nuance

YouTube, in its announcement, acknowledged AI as both a means of creating violative content and a tool for detecting and reporting it. It said generative AI is being used to help understand context and identify policy violations more quickly and accurately, while also giving human reviewers breathing room by reducing how much harmful content they are exposed to.

It also acknowledged that bad actors may try to take advantage of the situation, so YouTube’s teams will incorporate feedback from users and continuously work to improve its protections. – Rappler.com


Victor Barreiro Jr.

Victor Barreiro Jr is part of Rappler's Central Desk. An avid patron of role-playing games and science fiction and fantasy shows, he also yearns to do good in the world, and hopes his work with Rappler helps to increase the good that's out there.