US to launch its own AI safety institute

RAIMONDO. US Commerce Secretary Gina Raimondo speaks on Day 1 of the AI Safety Summit at Bletchley Park in Bletchley, Britain on November 1, 2023. The UK Government are hosting the AI Safety Summit bringing together international governments, leading AI companies, civil society groups and experts in research to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action.

Leon Neal/Pool via Reuters

The institute will help develop standards for the safety, security, and testing of AI models, set standards for authenticating AI-generated content, and provide testing environments for researchers

The United States will launch an AI safety institute to evaluate known and emerging risks of so-called “frontier” artificial intelligence models, Secretary of Commerce Gina Raimondo said on Wednesday, November 1.

“I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium,” she said in a speech to the AI Safety Summit in Britain.

“We can’t do it alone, the private sector must step up.”

Raimondo added that she would also commit to the US institute establishing a formal partnership with the United Kingdom Safety Institute.

The new effort will be under the National Institute of Standards and Technology (NIST) and lead the US government’s efforts on AI safety, especially for reviewing advanced AI models.

The institute “will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts,” the department said.

President Joe Biden on Monday signed an artificial intelligence executive order, requiring developers of AI systems that pose risks to US national security, the economy, public health or safety to share the results of safety tests with the US government, in line with the Defense Production Act, before they are released to the public.

The order also directs agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.
