Tech leaders say killer robots would be ‘dangerously destabilizing’ force in the world

The Washington Post

As nations such as the US, China, Israel, South Korea, Russia, and the UK develop autonomous weapons systems, tech leaders band together to pledge not to help develop or use these 'slaughterbots'

The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia.

Among them are billionaire inventor and OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, and artificial intelligence researcher Stuart Russell, as well as the three co-founders of Google DeepMind – the company’s premier machine learning research group.

In total, more than 160 organizations and 2,460 individuals from 90 countries promised this week not to participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons in an effort “to create a future with strong international norms.”

“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” the pledge says.

“Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage,” the pledge adds. (READ: 23 principles to ‘best manage AI in coming decades’)

Lethal autonomous weapons systems can identify, target, and kill without human input, according to the Future of Life Institute, a Boston-based charity that organized the pledge and seeks to reduce risks posed by AI. The organization claims autonomous weapons systems do not include drones, which rely on human pilots and decision-makers to operate.

According to Human Rights Watch, autonomous weapons systems are being developed in many nations around the world – “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” FLI claims autonomous weapons systems would be vulnerable to hacking and likely to end up on the black market. The organization argues the systems should be subject to the same sort of international bans as biological and chemical weapons.

FLI has even coined a name for these weapons systems – “slaughterbots.”

The lack of human control also raises troubling ethical questions, according to Toby Walsh, a Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, who helped to organize the pledge.

“We cannot hand over the decision as to who lives and who dies to machines,” Walsh said, according to a statement from FLI. “They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Musk – arguably the pledge’s most recognizable name – has become an outspoken critic of autonomous weapons and the rise of autonomous machines. The Tesla chief executive has said that artificial intelligence is more of a risk to the world than North Korea.

Last year, he joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban autonomous weapons.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” Musk and 115 other experts, including Alphabet’s artificial intelligence expert, Mustafa Suleyman, warned in an open letter in August.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.”

According to the letter, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Fighting killer robots with public declarations might seem ineffective, but Yoshua Bengio – an AI expert at the Montreal Institute for Learning Algorithms – told the Guardian that the pledge could rally public opinion against autonomous weapons.

“This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning land mines,” he said. “American companies have stopped building land mines.” – © 2018. Washington Post
