This Is Why A Computer Winning At Go Is Such A Big Deal
People didn’t think this would happen for at least 10 years; it’s a sign of how far
artificial intelligence has come.
posted on Mar. 14, 2016, at 12:46 p.m.
Tom Chivers
BuzzFeed Science Writer
For the first time in history, a computer has beaten the
human world champion at Go.
AlphaGo / YouTube / Via youtube.com
Go is an ancient Chinese game in which you place stones on a 19 by 19 board, and
capture your opponent’s stones by surrounding them. The rules are very simple, but
they give rise to a complex, subtle game.
This morning, AlphaGo, a computer designed by the Google-owned, London-based
company DeepMind, defeated Lee Sedol, the reigning Go world champion, in the fifth
game of a five-game series. AlphaGo beat Lee 4-1 overall, with Lee taking the fourth
game, when the series was already lost.
Here’s why that’s a big deal. First, Go is incredibly
complicated – millions upon millions of times more
complex than chess.
“It’s sheer mathematics,” Professor Murray Shanahan, an AI researcher at Imperial
College London, told BuzzFeed News. “The number of possible board configurations
in chess, of course, is huge. But with Go, it’s enormously larger.”
In chess, there are on average about 35 to 38 moves you can make at any point. That’s called the “branching factor”. In Go, the branching factor is about 250. So after two moves there are 250 times 250 possible sequences of play, or 62,500; after three moves, 250 times 250 times 250, or 15,625,000. Games of Go often last for hundreds of moves.
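As a rough, back-of-the-envelope illustration of that arithmetic, here is a few-line sketch in Python of how quickly the numbers grow; the branching factors of 35 and 250 are just the approximate averages quoted above, not exact figures.

```python
# Rough sketch: with branching factor b, looking n moves ahead means
# considering roughly b**n possible sequences of play.
for b, game in ((35, "chess"), (250, "Go")):
    for depth in (2, 3, 10):
        print(f"{game}: {depth} moves ahead -> roughly {b ** depth:.1e} sequences")
```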
It’s sometimes said that in chess there are more possible games than there are atoms
in the observable universe. In Go, by one estimate, there are something like a trillion
trillion trillion trillion trillion trillion trillion more than that. To write the total number out,
you’d need to put a 1 followed by 170 zeroes. That’s why, nearly 20 years after
computers became better at chess than humans, they’ve only just caught up at Go.
That means that a computer can’t just look at every
single possible move and pick the best one.
That’s called “brute force” processing. “You simply can’t use brute force for Go,” says
Shanahan. “You can’t with chess either, but you can tackle it that way a bit, use brute
force to search ahead through many, many possibilities. But with Go the number of
possible board combinations is enormously larger.”
The branching factor means that even a few turns ahead, the number of possibilities
becomes too huge for even the fastest computer to search through.
That means that AlphaGo’s victory isn’t simply a product of computers getting faster and more powerful. No realistic amount of raw computing power will ever be enough to brute-force Go, so the breakthrough has to come from the software.
“The general rule of thumb in these areas is that hardware counts for an enormous
amount, but software counts for more,” Eliezer Yudkowsky, an AI researcher and co-founder of the Machine Intelligence Research Institute (MIRI) in California, tells
BuzzFeed News. “If you have a choice between using software from 2016 and
hardware from 1996, or vice versa, and you want to play computer chess or Go,
choose the software every time.”
And that means that AlphaGo has had to use learning
techniques that are more like human intuition.
Human players don’t follow every possible branch the game could go down. They look at the board and see patterns. “The way that human players play chess or Go or any game like that,” says Shanahan, “is that we get to recognise what a good board pattern looks like. There’s an intuitive feel for what’s a good strong position versus what’s a weak one.
“Human players build that up through experience. What DeepMind have managed to
do is capture that process using so-called deep learning, so it can learn what
constitutes a good board configuration.”
Its creators fed it hundreds of thousands of top-level
Go games, and then, after it had learned from them,
let it play against itself, millions of times.
Lee Sedol. Pic by Reuters
And from that huge dataset, it was able to pick out the deep rules of Go that the top
players know intuitively (but often can’t explain). It started out as a not-very-good
player, and learned from its own mistakes.
“The core of it is having one system play itself, and improve itself to a superhuman
level, without specific tweaking from its designers,” says Yudkowsky.
Unlike humans, because it learns from a vast number of games, it can’t actually learn
very much from any single one. “Each game is only a drop in the ocean of data,” says
George van den Driessche, one of the AlphaGo researchers. “It contributes only a
tiny amount to the eventual model, so attempting to incorporate our games against
Lee Sedol into our model would make no noticeable difference.”
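To make that two-stage process a little more concrete, here is a deliberately tiny sketch in Python. It first nudges a move-choosing policy towards “expert” moves, then sharpens it by having it play against itself and reinforcing whichever side wins. The game is Nim (take one to three stones; whoever takes the last stone wins) rather than Go, the “policy” is a simple lookup table rather than a deep neural network, and everything in it is invented for illustration – it shows only the shape of the training idea, not DeepMind’s actual system.

```python
import random
from collections import defaultdict

STONES = 10  # toy game: take 1-3 stones, whoever takes the last stone wins

def expert_move(stones):
    # Optimal Nim strategy: leave your opponent a multiple of 4 if you can.
    return stones % 4 or random.randint(1, min(3, stones))

# policy[stones][move] is a weight; moves are sampled in proportion to it
policy = defaultdict(lambda: {m: 1.0 for m in (1, 2, 3)})

def sample_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return random.choices(moves, [policy[stones][m] for m in moves])[0]

# Stage 1: imitate "expert" games (standing in for the human game records).
for _ in range(2000):
    stones = STONES
    while stones:
        move = expert_move(stones)
        policy[stones][move] += 1.0
        stones -= move

# Stage 2: self-play -- reward the winner's moves, penalise the loser's.
for _ in range(20000):
    stones, history, player = STONES, {0: [], 1: []}, 0
    while stones:
        move = sample_move(stones)
        history[player].append((stones, move))
        stones -= move
        winner, player = player, 1 - player  # last mover wins
    for s, m in history[winner]:
        policy[s][m] += 0.5
    for s, m in history[1 - winner]:
        policy[s][m] = max(0.1, policy[s][m] - 0.5)

# Best move the learned policy now prefers from each position.
print({s: max(policy[s], key=policy[s].get) for s in range(1, STONES + 1)})
```

The self-play stage itself knows nothing about Nim strategy; all it ever sees is which side won each game.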
So its own designers probably don’t know, really, how
it works.
Kasparov v Deep Blue in 1997. Stan Honda / AFP / Getty Images
They understand the principles behind its learning, and its overall structure, but not
the methods it’s used to defeat Lee. That makes it entirely different to IBM’s Deep
Blue, the chess-playing computer that beat the world No 1 Garry Kasparov in 1997.
“Deep Blue was special purpose,” says Yudkowsky. “Its designers tweaked it as it
went along; people kind of understood how it worked. From the outside it looks like
the people who did AlphaGo don’t know how it works.”
Shanahan agrees: “I don’t suppose anyone in DeepMind understands quite how
AlphaGo beat Lee Sedol.”
AlphaGo’s victory has come as a major shock to the
artificial intelligence community.
“People weren’t expecting computer Go to be solved for 10 years,” says Yudkowsky.
Even the AlphaGo team were shocked. Van den Driessche says: “We certainly were.
We went very quickly from ‘Let’s see how well this works’ to ‘We seem to have a very
strong player on our hands’ to ‘This player has become so strong that probably only a
world champion can find its limits’.”
While the way it learns is somewhat similar to how
humans do, there are subtle but important differences.
AlphaGo / Nature / Via nature.com
“It’s called a ‘neural network’, so that sounds very brain-like,” says Shanahan. “And
each of the little ‘neurons’ in these networks sort of resembles a neuron [nerve cell] in
the human brain.
“But it’s an approximation. They’re loosely inspired by what happens in the human brain, and the way the networks are connected together is loosely inspired, but that’s all.”
The main difference, says Shanahan, is that when your brain performs some action –
orders your hand to swing a tennis racket, or retrieves a memory – a certain pattern
of neurons will fire. When the same pattern fires repeatedly, the connections between
those neurons get stronger, so the pattern gets fixed and the action gets easier.
AlphaGo also has patterns of connections between its neurons. But instead of its
patterns getting stronger or weaker as they fire, it looks at the outcomes it wants to
achieve, then uses an algorithm to adjust the strengths of the various connections to
best achieve them. It’s a technical-sounding difference, but it means that at a deep
level, AlphaGo thinks in a very different way to human players.
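As a very small numerical sketch of that difference: rather than strengthening whichever connections fire together, the weights are repeatedly nudged in whatever direction shrinks the gap between the network’s output and the outcome it is being trained towards. This is generic gradient descent on a toy one-layer model with made-up data, written in Python; it is not AlphaGo’s architecture, just the general idea of adjusting connection strengths towards a target.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))                   # made-up "board features"
y = (x @ np.array([0.5, -1.0, 2.0]) > 0) * 1.0  # made-up "is this a good position?" labels
w = np.zeros(3)                                 # the connection strengths, initially zero

for _ in range(1000):
    pred = 1 / (1 + np.exp(-(x @ w)))  # the network's current judgement of each position
    grad = x.T @ (pred - y) / len(y)   # how the error changes as each weight changes
    w -= 0.5 * grad                    # nudge the strengths to reduce the error

print("learned connection strengths:", w.round(2))
```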
The way AlphaGo learns means that it has applications
outside playing games.
By Goban1 - Own work, Public Domain / Via commons.wikimedia.org
The methods the team has used – the deep learning and the self-improvement – can
be used in areas other than Go. “It’s a more significant milestone than chess in 1997,”
says Shanahan, “because the techniques they’ve applied to this have quite general
applications.”
Demis Hassabis, the co-founder of DeepMind, has talked about medical applications –
using the deep learning techniques to create an AI that can help doctors make
diagnoses, for instance. The AlphaGo team has published its research in the journal
Nature.
Artificial intelligence researchers say that this is a
“sign of how far AI has come”.
Computers like AlphaGo and Deep Blue, the machine that beat Garry Kasparov in 1997, are artificial intelligences, but they are intelligent in a highly specific way. The goal of some researchers is to develop an all-purpose intelligence, capable of solving all kinds of problems, as human brains are. That goal is known as “artificial general intelligence” (AGI).
AlphaGo’s victory is a step along that road, says Shanahan, because of the
generalisable way that it learns. He thinks that the techniques the AlphaGo team
have used are the most promising route to AGI.
It’s also a demonstration of just how powerful AI is now, and how quickly the field is
moving, says Yudkowsky. “I’m not saying that AlphaGo in and of itself is going to lead
to robots in 10 years,” he says. “We just don’t know about that. But AlphaGo is a sign
of how far AI has come.”
Although they warn there’s a long way from here to
true, human-level intelligence.
“Go is a tremendously complex game,” says Shanahan. “But the everyday world is
very, very much more complex.” After all, he says, the real world contains Go, and
chess, and driving cars. “The space of possible moves in the real world is truly huge.
For example, the space of possible moves includes becoming a champion Go player.”
He thinks that AGI is extremely unlikely in the next 10 years, but possible by 2050
and pretty likely by the end of the century. “This is not just a fantasy,” he says. “We’re
talking about something that might actually affect our children, if not ourselves.” Van
den Driessche agrees, saying this is a “major milestone”, but warning that human-level AGI is “still decades away”.
“Nobody knows how long the road is [to AGI],” says Yudkowsky. “But we’re pretty
sure there’s a long way left.”
Still, they say, AlphaGo has shown that surprises
happen. And AGI has the potential to be a big enough
problem that it’s worth paying attention now.
Handout / Getty Images
“People think that because it’s not right around the corner, that means there’s nothing
to worry about,” says Yudkowsky. “People cannot pry these ideas apart.”
He thinks that humanity has already somewhat dropped the ball on thinking about what will happen when we build a machine that’s as smart as us. The risk, he says, is that an intelligent machine that can rewrite its own code could improve itself very rapidly, and become far more intelligent than us. If it’s not built with humanity’s best interests at its core, it could end badly for us.
“We should have been thinking about this 30 years ago.” He and the philosopher
Nick Bostrom have been thinking for a while about how to minimise the risks AI
poses to humanity, but, he says, we should be much further along that road already.
“We should be on the technical stuff, the nitty gritty.”
“The scenarios that are discussed by Bostrom and Yudkowsky are legitimate and
need to be taken seriously,” says Shanahan. “When we get to the point of building
AGI I think we will quite quickly get to a superintelligence. We need to be absolutely
sure it’s safe.”
And AlphaGo has shown, too, that there’s no reason to
think that any future artificial intelligences need to be
anything like us.
“In 1997, Garry Kasparov said that he sensed a kind of alien intelligence on the other
side of the board,” says Shanahan. “And I’ve noticed in the commentary on AlphaGo
that some of the commentators thought that it had made some weak moves earlier
on, but now they’re not sure if it wasn’t some clever plan for the end game.
Sometimes an AI might solve things in a way that’s quite different from how we might
tackle things.”
A future AGI, in a much more dramatic way, might not be “human” either. There’s no
reason to think it would have our desires, or even things that we’d call desires at all,
says Shanahan. Bostrom has pointed out that there’s no reason to think it would even
be conscious.
What’s not clear, yet, is just how good AlphaGo is.
By Katpatuka. Wikimedia, CC BY-SA 3.0 / Via en.wikipedia.org
Obviously it’s capable of beating the best human in the world, but how much better is
it? Would it win every series?
“The games looked even,” says Yudkowsky, “but is that because AlphaGo is an alien
intelligence, or because they’re actually close?” He says that in the fourth game,
when it lost to Lee, some of its decisions looked more like mistakes, but there were
some genuinely weird, superhuman moves in the second and third games – moves
that looked wrong at the time, but which set up winning positions later in the game.
And it’s worth remembering that there were two players in the series, and that Lee
Sedol played extraordinarily well. “We’re all honoured to have had the privilege to pit
our creation against such a distinguished and capable opponent,” says van den
Driessche.
Tom Chivers is a science writer for BuzzFeed and is based in London.
Contact Tom Chivers at [email protected].