Go and the end of humanity as we know it

Luis Buenaventura


The implications of creating a super-intelligent entity are weird, awe-inspiring, and admittedly quite frightening


The news of Google’s DeepMind project briefly swept the Internet last week as the search giant revealed that their AI platform AlphaGo had successfully defeated the Go champion Fan Hui. Social networks erupted with posts about Skynet, the Matrix, the Cylons, and a whole host of other obvious references to eschatological pop culture.

A couple of facts first, to take down the robot-overlord fever pitch a notch, followed by a few other facts, to really scare the living daylights out of you.

Some folks may wonder why this victory was considered a bigger achievement than IBM Deep Blue's chess victory over Garry Kasparov in 1997. The main reason is the game itself: chess offers 20 possible opening moves, but Go offers 361, roughly 18 times as many.

The average chess game is over after about 40 moves, but the average Go game runs for about 200. From a purely mathematical standpoint, then, Go is far deeper in its decision-making than chess. (A program that tried to enumerate every possible Go game would face more lines of play than there are atoms in the observable universe.)
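The arithmetic behind that comparison can be sketched in a few lines of Python. The branching factors and game lengths are the rough figures quoted above (20 opening moves over a 40-move chess game, 361 opening points over a 200-move Go game), so these are order-of-magnitude estimates, not exact counts of legal games.

```python
# Back-of-the-envelope comparison of the chess and Go search spaces,
# using the rough branching factors and game lengths cited above.
import math

chess_branching, chess_moves = 20, 40   # ~20 opening moves, ~40-move game
go_branching, go_moves = 361, 200       # 361 opening points, ~200-move game

chess_tree = chess_branching ** chess_moves   # roughly 10^52 lines of play
go_tree = go_branching ** go_moves            # roughly 10^511 lines of play

atoms_in_universe = 10 ** 80                  # a common order-of-magnitude estimate

print(f"chess game tree ~ 10^{int(math.log10(chess_tree))}")
print(f"go game tree    ~ 10^{int(math.log10(go_tree))}")
print(f"go tree exceeds atoms-in-universe estimate: {go_tree > atoms_in_universe}")
```

Even with these crude numbers, the Go estimate dwarfs the usual 10^80 figure for atoms in the observable universe, which is why brute-force search was never an option and AlphaGo had to rely on learned evaluation instead.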

As impressive as Google’s achievement was, however, it is necessary to qualify their victory somewhat.

At the time of his chess defeat, Kasparov had held the world #1 rank for about twelve years and would hold it for another eight. He was, quite literally, the finest chess player the human race had yet produced. Fan Hui, the European Go champion that AlphaGo beat, was ranked 633rd in the world. (Go is apparently not a very popular game in Europe.)

It stands to reason that there are quite a few humans AlphaGo is not yet capable of beating — in fact, by most estimates it is probably barely good enough to break into the list of the top 250 players.

Accelerating change

Computers have been trying to beat humans at chess and Go since the 1950s, but the first major breakthrough didn't come until 1977, when Northwestern University's "Chess" program won a tournament against human players. By 1980, the program Belle was regularly beating Master-level chess players, but it would take another 17 years for software to achieve parity with Kasparov's wetware, and another 19 before programs would begin to give Go professionals a run for their money.

It's inevitable, though, that AI will beat us at all "perfect information" strategy games like chess and Go. Technology continues to accelerate at a pace that increasingly feels like science fiction in the news — consumer-accessible DNA testing, driverless cars, bionic limbs, $20 smartphones, the Internet of Things, an AI capable of defeating Jeopardy champions — all of these breakthroughs arrived in just the last 10 years. To borrow a term from the startup world, humankind finds itself at the foot of a "hockey-stick graph" of epic proportions.

Scientists believe that all of these advances will eventually come to a head: a moment when they converge in one final creation that changes the course of our species.

Mathematician John von Neumann (1903-1957) anticipated what is now called "the technological singularity," a turning point in our civilization where our inventions – self-improving computer networks, say – become super-intelligent and render our own abilities obsolete.

Visionary scientist Ray Kurzweil predicts that this singularity could occur by 2045, within the lifetimes of the vast majority of people reading this. He specifically predicts that we will have reverse-engineered the human brain within the next 10-15 years, and that we will start down the path of no return soon thereafter.

A Homo sapiens superior?

The implications of creating a super-intelligent entity are weird, awe-inspiring, and admittedly quite frightening. As author Lev Grossman puts it, “introducing a superior life-form into your own biosphere is a basic Darwinian error.”

To state it another way: it didn't work out so well for the Neanderthals when Homo sapiens arrived on the scene some 60,000 years ago.

So why aren’t we putting the brakes on AI research? Because “innovators gonna innovate,” to misappropriate a popular phrase. A digital super-intelligence would be humanity’s single greatest, and likely final, invention, and the ramifications of such a creation cannot be overstated.

Once online, this super-intelligence would be a veritable god. It would be able to improve on all of our current technology at a timescale we would find hard to fathom – hours and minutes between innovations, instead of months and years – and because of the exponential nature of learning, each new improvement would arrive faster than the last.

Using a perfected form of nanotechnology, for example, it could have the power to end hunger, eradicate disease, eliminate pollution … and, if we design it wrong, potentially wipe out all of humanity.

After all, it's arguably mankind's own actions that lie at the heart of all these global problems, so wouldn't an indifferent super-intelligent being simply conclude that the universe would be better off without us? Why expend energy cleaning up humanity's mistakes over and over when one could simply nip the problem in the proverbial bud once and for all?

The extinction agenda

It's the kind of thought that keeps you up at night. The idea that a creation could usurp its progenitors and proceed to take over the planet sounds like the plot of every dystopian sci-fi movie you've ever seen, but there's enough truth in it to worry some of our smartest thinkers. Indeed, Elon Musk calls AI our "biggest existential threat," and Bill Gates says he doesn't understand "why some people are not concerned" about its consequences.

AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute reduces the concern to its material essence: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

In response, other scientists have proposed ways to build AI whose core goal is to be friendly to humans, with the reasoning that a “friend” would never harm us once that definition was properly internalized.

Others have theorized that we could teach AI to mimic the ethics of a few admirable people (Mahatma Gandhi, Martin Luther King Jr., Abraham Lincoln, etc.), essentially transforming it into our very own benevolent digital dictator. (The political minefield that you would have to maneuver in order to compile such a list of “admirable” personalities hasn’t yet been figured out.)

Again, we must ask, why are we walking down such a dangerous path in the first place? Because the flipside of the debate is simply too good to ignore.

If we build the AI properly, it could represent our salvation – rather than our extinction – as a species. The optimist's view of an omniscient AI is utopian: with it, we could potentially solve every problem that has ever vexed humanity, including death itself. The allure of immortality, of a multi-planetary civilization, and of exploring our universe directly instead of through telescopes and charts is intoxicating. For many scientists, it's the entire reason they became scientists in the first place. Pursuing super-intelligent AI is a way to make all of these dreams possible within our lifetimes rather than much, much later: instead of trying to design the various machines that will allow us to reach our goals, let's design the singular machine that will design all the others.

And therein lies the true turning point for our civilization. In just under three decades, we will have achieved the technology necessary either to take us to the stars or to end our reign on this planet. It will be the most important decision our species collectively makes, and thankfully there's still time to make sure it isn't the wrong one.

Erring is, after all, largely a human activity. – Rappler.com


Artificial intelligence concept with chess image from Shutterstock
