US standards body says ByteDance researcher wrongly added to AI safety groupchat

Reuters

BYTEDANCE'S TIKTOK. A person arrives at the offices of TikTok after the US House of Representatives overwhelmingly passed a bill that would give TikTok's Chinese owner ByteDance about six months to divest the US assets of the short-video app or face a ban, in Culver City, California, USA, on March 13, 2024.

Mike Blake/Reuters

'Once NIST became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium's code of conduct on misrepresentation,' the US National Institute of Standards and Technology says

WASHINGTON, DC, USA – A researcher from TikTok’s Chinese owner ByteDance was wrongly added to a group chat for American artificial intelligence safety experts last week, the US National Institute of Standards and Technology (NIST) said Monday, March 18.

The researcher was added to a Slack instance for discussions between members of NIST’s US Artificial Intelligence Safety Institute Consortium, according to a person familiar with the matter.

In an email, NIST said the researcher was added by a member of the consortium as a volunteer.

“Once NIST became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium’s code of conduct on misrepresentation,” the email said.

The researcher, whose LinkedIn profile says she is based in California, did not return messages; ByteDance did not respond to emails seeking comment.

The person familiar with the matter said the appearance of a ByteDance researcher raised eyebrows in the consortium because the company is not a member, and because TikTok is at the center of a national debate over whether the popular app has opened a backdoor for the Chinese government to spy on or manipulate Americans at scale. Last week, the US House of Representatives passed a bill that would force ByteDance to divest itself of TikTok or face a nationwide ban; the ultimatum faces an uncertain path in the Senate.

The AI Safety Institute is intended to evaluate the risks of cutting-edge artificial intelligence programs. Announced last year, the institute was set up under NIST and the founding members of its consortium include hundreds of major American tech companies, universities, AI startups, nongovernmental organizations and others, including Reuters’ parent company Thomson Reuters.

Among other things, the consortium works to develop guidelines for the safe deployment of AI programs and to help AI researchers find and fix security vulnerabilities in their models. NIST said the Slack instance for the consortium includes about 850 users. – Rappler.com
