
AI seeking to ‘reduce human control’ among risks tackled in Britain’s AI summit

Gelo Gonzales



SAFETY SUMMIT. Michelle Donelan, Secretary of State for Science, Innovation and Technology of the UK, arrives onstage ahead of the welcome photocall and family photo on Day 1 of the AI Safety Summit at Bletchley Park in Bletchley, Britain, November 1, 2023

Leon Neal/Pool via Reuters

While the current threat model for such a scenario is 'controversial,' some experts believe the current trajectory of AI development is leading in that direction

MANILA, Philippines – AI becoming sentient is a common sci-fi movie trope. But with the rapid growth of real-world AI in recent years, such a scenario has become an important point in discussions surrounding AI and its regulation.

At Britain’s AI Safety Summit – which achieved a rare point of cooperation among superpowers at odds, the US, the EU, and China – humans losing control over AI, and future AI systems increasing their influence of their own accord, were identified as risks, albeit still “hypothetical” ones, alongside more immediate concerns about disinformation, job losses, discrimination, copyright issues, bias reinforcement, and the concentration of power in a few tech companies.

A discussion paper from the summit described the ways by which humans are already ceding control to AI systems, and how future AI systems may “actively seek to increase their own influence and reduce human control.” 

Present examples of humans ceding control include recommendation algorithms online that increase the consumption of extremist content, medical algorithms that have, in some instances, misdiagnosed patients, a growing reliance on AI in economic production, and an overestimation of how reliable generative AI systems are.

“As AI systems become increasingly capable and autonomous, the economic and competitive incentives to deploy them will grow accordingly,” the paper said. 

The risk is that, “As a result, AI systems may increasingly steer society in a direction that is at odds with its long-term interests, even without any intention by any AI developer for this to happen.”  

“Some researchers are skeptical of our ability to assess the plausibility of hypothetical future scenarios like this, while others believe that this scenario is the default consequence of the current trajectory of AI development.”

Early signs of manipulation capability, and the black-box nature of AI systems

Current AI systems have also been noted to show some “early signs” of a capability for manipulation, including a chatbot forming trust and intimacy with a user, large language models (LLMs) being agreeable with a user’s views, the ability to predict a user’s views, and the ability to “maintain coherent lies,” with larger LLMs being more persuasive liars.

The paper also noted the possibility of AI systems becoming capable of exploiting vulnerabilities in computer systems, as well as of autonomous replication and adaptation.

The paper also described current AI systems as “black boxes” even to their developers: developers can observe what the systems do, but have “little understanding of the internal mechanisms that produce them,” making it “challenging to know how to change, much less how to predict, the behavior of an AI system.”

Does this mean movie-style AI sentience will happen? The paper pointed out that while some experts say AI systems could take actions to increase their control, the current threat model for such a scenario is “controversial,” with “hotly disputed” findings – one that other experts have criticized as a distraction from present harms.

Still, the paper cited several recent studies pointing toward unpredictable AI systems – a consequence of their largely black-box nature, as noted above – especially more advanced ones that may “pursue unintended goals” which would be “advanced by reducing human control.”

“Ensuring that AI systems do not pursue unintended goals, i.e., are not misaligned, is an unsolved technical research problem and one that is particularly challenging for highly advanced AI systems,” the paper said. – Rappler.com


Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.