
UK focuses on transparency and access with new AI principles

Reuters


ARTIFICIAL INTELLIGENCE. AI letters are placed on a computer motherboard in this illustration taken June 23, 2023

Dado Ruvic/Reuters

The UK seeks to make AI foundation model developers accountable, and to ensure that such technologies aren't just controlled by a few large companies

Britain set out principles on Monday, September 18, designed to prevent artificial intelligence (AI) models from being dominated by a handful of tech companies to the detriment of consumers and businesses, by emphasizing the need for accountability and transparency.

Britain’s anti-trust regulator, the Competition and Markets Authority (CMA), is, like other authorities around the world, trying to control some of the potential negative consequences of AI without stifling innovation.

The seven principles it listed aim to regulate foundation models such as ChatGPT by making developers accountable, preventing Big Tech from tying up the technology within their walled platforms, and stopping anti-competitive conduct such as bundling.

CMA chief executive Sarah Cardell said on Monday there was real potential for the technology to turbocharge productivity and make millions of everyday tasks easier – but a positive future could not be taken for granted.

She said there was a risk that the use of AI could be dominated by a few players who exert market power that prevents the full benefits being felt across the economy.

“That’s why we have today proposed these new principles and launched a broad program of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers,” she said.

The CMA’s proposed principles, which come six weeks before Britain hosts a global AI safety summit, will underpin its approach to AI when it assumes new powers in the coming months to oversee digital markets.

It said it would now seek views from leading AI developers such as Google, Meta, OpenAI, Microsoft, NVIDIA and Anthropic, as well as governments, academics and other regulators.

The proposed principles are listed below, as published on the Gov.UK website:

  • Accountability – Foundation model (FM) developers and deployers are accountable for outputs provided to consumers.
  • Access – ongoing ready access to key inputs, without unnecessary restrictions.
  • Diversity – sustained diversity of business models, including both open and closed.
  • Choice – sufficient choice for businesses so they can decide how to use FMs.
  • Flexibility – having the flexibility to switch and/or use multiple FMs according to need.
  • Fair dealing – no anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.
  • Transparency – consumers and businesses are given information about the risks and limitations of FM-generated content so they can make informed choices.

Britain in March opted to split regulatory responsibility for AI between the CMA and other bodies that oversee human rights and health and safety rather than creating a new regulator.

The United States is looking at possible rules to regulate AI, and digital ministers from the Group of Seven leading economies agreed in April to adopt “risk-based” regulation that would also preserve an open environment. – with reports from Gelo Gonzales/Rappler.com
