Rise of the machines: AI beats humans in multiplayer shooter

Agence France-Presse

The AI successfully learns the mechanics of a modified version of the seminal 1999 shooter 'Quake III Arena' and defeats human players

WASHINGTON, USA – Watch out, professional gamers: machines may soon be coming for your jobs.

A team of programmers at a British artificial intelligence company has designed automated “agents” that taught themselves how to play a competitive multiplayer first-person shooter, and became so good they consistently beat human beings.

The work of the researchers from DeepMind, which is owned by Google’s parent company Alphabet, was described in a paper published in Science on Thursday, May 30, and marks the first time the feat has been accomplished.

To be sure, computers have been flexing their dominance over humans in one-on-one turn-based games such as chess ever since IBM’s Deep Blue beat Garry Kasparov in 1997. More recently, a Google AI agent beat the world’s number one Go player in 2017.

But the ability to play multiplayer games involving teamwork and interaction in complex environments had remained an insurmountable task.

For the study, the team led by Max Jaderberg worked on a modified version of Quake III Arena, a seminal shooter that was first released in 1999 but continues to thrive in the esports world.

The game mode they chose was “Capture the Flag,” which involves working with teammates to grab the opponent team’s flag while safeguarding your own, forcing players to devise complex strategies mixing aggression and defense.

After the agents had been given time to train themselves, they were matched up against professional game testers.

“Even after 12 hours of practice, the human game testers were only able to win 25% of games against the agent team,” the team wrote, while the agents’ performance remained superior even when their reaction times were artificially slowed down to human levels.

New steps for AI

The programmers relied on so-called “Reinforcement Learning” (RL) to imbue the agents with their smarts.

“Initially, they knew nothing about the world and instead were doing completely random stuff and bouncing about the place,” Jaderberg told AFP.

The agents were taught to reward themselves for capturing the flag, but the team also devised a series of new and innovative methods to push the boundaries of what is possible with RL.
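
In broad strokes, reinforcement learning means an agent tries actions, receives rewards, and gradually shifts toward the actions that earned the most reward. The snippet below is a deliberately tiny sketch of that loop using tabular Q-learning; the action names and numbers are invented for illustration, and DeepMind’s agents actually rely on deep recurrent neural networks rather than a lookup table.

```python
import random
from collections import defaultdict

# A minimal tabular Q-learning sketch, purely illustrative of the RL idea.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1                  # learning rate, discount, exploration
ACTIONS = ["forward", "back", "left", "right", "fire"]  # hypothetical action set

q_values = defaultdict(float)                            # (state, action) -> estimated return

def choose_action(state):
    # Early in training the agent effectively does "completely random stuff";
    # greedy choices only dominate once the value estimates mean something.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(state, a)])

def learn(state, action, reward, next_state):
    # One-step Q-learning update toward the reward plus discounted future value.
    best_next = max(q_values[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_values[(state, action)] += ALPHA * (target - q_values[(state, action)])
```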

“One of the contributions of the paper is each agent learns its own internal reward signal,” said Jaderberg, meaning that the AI players gave themselves a pat on the back of varying magnitude for accomplishing tasks such as picking up the flag or successfully shooting an opponent.
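
As a rough illustration of that idea, the sketch below gives each agent its own table of weights over in-game events. The event names and weight values are hypothetical, and in the actual work these internal rewards are learned during training rather than set by hand.

```python
# Hypothetical event names; the real agents read points events from the match.
GAME_EVENTS = ["flag_pickup", "flag_capture", "tag_opponent", "got_tagged"]

class InternalReward:
    """An agent's own valuation of in-game events (weights tuned during training)."""

    def __init__(self, weights):
        self.weights = dict(weights)              # event name -> reward magnitude

    def __call__(self, events):
        # The "pat on the back" for this step: sum the agent's own weights
        # for everything that just happened.
        return sum(self.weights.get(event, 0.0) for event in events)

# Example weights one agent might settle on (values are made up).
my_reward = InternalReward({"flag_capture": 1.0, "flag_pickup": 0.5,
                            "tag_opponent": 0.3, "got_tagged": -0.2})
print(my_reward(["flag_pickup", "tag_opponent"]))  # 0.8
```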

Next, they found that training a population of agents together, rather than one at a time, made the population as a whole learn much faster.
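
A rough sketch of how such population-based training can work is shown below: after a round of matches, the weakest agents copy, and slightly perturb, the settings of the strongest ones. The agent objects, the 20% cutoff, and the scoring here are assumptions made for the example, not DeepMind’s exact procedure.

```python
import copy
import random

def evolve(population, scores, mutate):
    """One evolution step over a population of agents.

    `population` is a list of agent objects exposing `.params` (hypothetical),
    `scores[i]` is agent i's recent win rate, and `mutate` perturbs a copy of
    the parameters. Weak agents inherit from strong ones, so good settings
    spread through the whole population.
    """
    order = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    cutoff = max(1, len(population) // 5)          # top / bottom 20%
    for loser_idx in order[-cutoff:]:
        winner_idx = random.choice(order[:cutoff])
        new_params = copy.deepcopy(population[winner_idx].params)
        population[loser_idx].params = mutate(new_params)
    return population
```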

They also devised a new architecture of so-called “two timescale” learning, which Jaderberg likened to the thesis of the book “Thinking, Fast and Slow.”

“You have one part of the agent which ticks very quickly, it updates its beliefs very quickly, and you have another part of the agent which updates its beliefs at a slower rate, and these two beliefs influence each other and help shape the way the agent learns about the world,” he said.
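
The toy class below sketches that intuition only: a fast state that updates every step and a slow state that updates once in a while, each nudging the other. The update rules and the ten-step period are invented for illustration; the real agents use hierarchical recurrent neural networks operating at different timescales.

```python
class TwoTimescaleState:
    """Toy 'fast and slow' state: the fast part reacts to every observation,
    the slow part updates only occasionally, and each influences the other."""

    SLOW_PERIOD = 10           # slow part updates once every 10 steps (made-up value)

    def __init__(self):
        self.fast = 0.0        # stand-in for the fast beliefs
        self.slow = 0.0        # stand-in for the slow beliefs
        self.step_count = 0

    def step(self, observation):
        # Fast beliefs update every step, influenced by the slow beliefs.
        self.fast = 0.5 * self.fast + observation + 0.1 * self.slow
        # Slow beliefs absorb the fast ones only now and then.
        if self.step_count % self.SLOW_PERIOD == 0:
            self.slow = 0.9 * self.slow + 0.1 * self.fast
        self.step_count += 1
        return self.fast, self.slow
```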

Finally, randomizing the map for each new match was key. “That meant the solutions that the agents find have to be general – they cannot just memorize a sequence of actions,” said co-author Wojciech Czarnecki.
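
The sketch below shows the general idea of per-match randomization: every game draws a fresh wall layout, so a memorized route through one arena is useless in the next. The map format and parameters are made up for the example and are not the map generation used in the study.

```python
import random

def random_map(width=20, height=15, wall_fraction=0.2, seed=None):
    """Draw a fresh arena layout ('#' = wall, '.' = floor) for each new match."""
    rng = random.Random(seed)
    return [
        ["#" if rng.random() < wall_fraction else "." for _ in range(width)]
        for _ in range(height)
    ]

# Two matches, two different arenas: a memorized sequence of actions from the
# first layout will not transfer to the second.
map_one = random_map(seed=1)
map_two = random_map(seed=2)
```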

Ethics questions

The team did not comment, however, on the AI’s potential for future use in military settings.

DeepMind has publicly stated in the past that it is committed to never working on any military or surveillance projects, and the word “shoot” does not appear even once in the paper (the process is described instead as tagging opponents by pointing a laser gadget at them).

Moving forward, Jaderberg said his team would like to explore having the agents play in the full version of Quake III Arena and find ways his AI could work on problems outside of games.

“We use games, like Capture the Flag, as challenging environments to explore general concepts such as planning, strategy and memory, which we believe are essential to the development of algorithms that can be used to help solve real-world problems.” – Rappler.com
