HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Match 5 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo

DeepMind · Youtube · 21 HN points · 1 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention DeepMind's video "Match 5 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo".
Youtube Summary
Watch DeepMind's program AlphaGo take on the legendary Lee Sedol (9-dan pro), the top Go player of the past decade, in a $1M 5-game challenge match in Seoul. This is the livestream for Match 5 to be played on: 15th March 13:00 KST (local), 04:00 GMT; note for US viewers this is the day before on: 14th March 21:00 PT, 00:00 ET.

In October 2015, AlphaGo became the first computer program ever to beat a professional Go player by winning 5-0 against the reigning 3-times European Champion Fan Hui (2-dan pro). That work was featured in a front cover article in the science journal Nature in January 2016.

Match commentary by Michael Redmond (9-dan pro) and Chris Garlock.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This was probably the closest game in the series. Livestream: https://www.youtube.com/watch?v=mzpW10DPHeQ

A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level. Now it seems that we've already surpassed that point. What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

In game four, we saw Lee Sedol make a brilliant play, and AlphaGo make a critical mistake (typical of Monte Carlo-trained algorithms) following it. There's no doubt that with further refinement, we'll soon see AI play Go at a level well beyond human: games one through three already featured extraordinarily strong (and innovative) play on the part of AlphaGo.

Previous Discussions:

Game 4: https://news.ycombinator.com/item?id=11276798

Game 3: https://news.ycombinator.com/item?id=11271816

Game 2: https://news.ycombinator.com/item?id=11257928

Game 1: https://news.ycombinator.com/item?id=11250871

wheresmypasswd
> What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

While growth may be accelerating, this is simply the result of one big paradigm shift in deep learning/NNs. Once we've learned to milk it for all it's worth, we'll have to wait for the next epiphany.

HaikuEU
Would you care to explain what that big paradigm shift was?
bla2
This four-part series explains it well: http://www.andreykurenkov.com/writing/a-brief-history-of-neu...
hmate9
Milking neural networks completely is pretty much AI as depicted in the movies. If we can milk them completely, there probably isn't a need for the next epiphany.
swombat
Or, at the very least, the next epiphany need not be human-designed. Just train a neural network in the art of creating AI paradigms and implementations that can do general purpose AI. Once that's "milked", the era of human technological evolution is finished.
foota
That sounds like a bad idea.
swombat
It's the core idea of AI, the primary reason why it is suspected that developing strong AI will inevitably lead to the end of the human era of evolution.
YeGoblynQueenne
I don't want to be mean, but that's like saying you'll train a magic neural net with the mystical flavour of unicorn tears and then the era of making rainbows out of them will be finished. Or something.

I mean, come on- "the art of creating AI paradigms"? What is that even? You're going to find data on this, where, and train on it, how, exactly?

Sorry to take this out on you but the level of hand-waving and magical thinking is reaching critical mass lately, and it's starting to obscure the significance of the AlphaGo achievement.

Edit: not to mention, the crazy hype surrounding ANNs in the popular press (not least because it's the subject of SF stories, like someone notes above) risks killing nascent ideas and technologies that may well have the potential to be the next big breakthrough. If we end up at the point where everyone thinks all our AI problems are solved if we just throw a few more neural layers at them, then we're in trouble. Hint: because they're not.

swombat
I totally see your point, and my purpose is definitely not to sound the alarm that Skynet is about to come out of AlphaGo or some equivalent neural net. But I think the opposite attitude is also false.

As others have pointed out, we don't really know how the brain works. Neural nets represent one of our best attempts to model brains. Whether or not it's good enough to create real intelligence is completely unknown. Maybe it is, maybe it's not.

Intelligence appears to be an emergent property and we don't know the circumstances under which it emerges. It could come out of a neural network. Or maybe it could not. The only way we'll find out is by trying to make it happen.

Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

This is Hacker News, not a mass newspaper, so I think we can take the more nuanced and complex view here.

TheOtherHobbes
We don't really know how AI works either. NNs (for example) do stuff, and sometimes it's hard to see why.

>Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

Not really. Right now it's taking the position that there is no practical path that anyone can imagine from a go-bot, which is working in a very restricted problem space, to a magical self-improving AI-squared god-bot, which would be working in a problem space with completely unknown shape, boundaries, and inner properties.

Meta-AI isn't even a thing yet. There are some obvious things that could be tried - like trying to evolve a god-bot out of a gigantic pre-Cambrian soup of micro-bots where each bot is a variation on one of the many possible AI implementations - but at the moment basic AI is too resource intensive to make those kinds of experiments a possibility.

And there's no guarantee anything we can think of today will work.

YeGoblynQueenne
>> Neural nets represent one of our best attempts to model brains.

See now that's one of the misconceptions. ANNs are not modelled on the brain; not any more, and not since the poor single-layer Perceptron, which itself was modelled after an early model of neuronal activation. What ANNs really are is algorithms for optimising systems of functions. And that includes things like Support Vector Machines and Radial Basis Function networks that don't even fit in the usual multi-layer network diagram particularly well.

It's unfortunate that this sort of language and imagery is still used abundantly, by people who should know better no less, but I guess "it's an artificial brain" sounds more magical than "it's function optimisation". You shouldn't let it mislead you though.

>> Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

I don't agree. It's a subject that's informed by a solid understanding of the fundamental concepts - function optimisation, again. There's uncertainty because there are theoretical limits that are hard to test, for example the fact that multi-layer perceptrons with a single hidden layer can approximate any continuous function given a sufficient number of hidden units, or, on the opposite side, that non-finite languages are _not_ learnable in the limit (not ANN-specific, but limiting what any algorithm can learn), etc. But the arguments on either side are, well, arguments. Nobody is being "blind". People defend their ideas, is all.
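
To make the "function optimisation" point concrete, here is a minimal sketch (illustrative only, and not how any production system is built): a tiny two-layer network fit to XOR by plain gradient descent on a squared-error loss. Nothing about it is brain-like; it just adjusts the parameters of a composite function to reduce an error.

    # Minimal sketch: a "neural net" as plain function optimisation.
    # A two-layer network fit to XOR by gradient descent on squared error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer parameters
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer parameters
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # forward pass: just function composition
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradients of the squared-error loss
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient descent update
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]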

dontreact
Convolutional neural nets are the most accurate model of the ventral stream, numerically speaking. See work by Yamins, DiCarlo etc.
_yosefk
You're basically saying that there's no task (including passing the Turing test, programming web apps, etc.) which requires intelligence and is best tackled with either something other than a neural network, or with an NN combined with something else. I think it's a pretty bold statement which is really hard to back up by anything but a hunch.
swombat
Our current assertion is that neural networks basically replicate the brain's function, so our current understanding of this paradigm is that "milking neural networks" is going to match or exceed human general purpose intelligence.

I believe hmate9 is correct. If this paradigm is exploited to the full, unless we've missed something fundamental about how the brain works, we don't need to bother ourselves with inventing the next paradigm (of which there will no doubt be many), because one of the results of the current paradigm will be either an AGI (Artificial General Intelligence) that runs faster and better than human intelligence, or, more likely, an ASI (Artificial Super Intelligence). Either of those is more capable than we are for the purpose of inventing the next paradigm.

fsloth
"unless we've missed something fundamental about how the brain works"

But we don't know how the brain works. I think you extrapolate too far. Just because a machine learning technique is inspired by our squishy connectome does not mean it's anything like it.

I'm willing to bet there are isomorphisms of dynamics between an organic brain and a neural net programmed on silicon but as far as I know, there are still none found - or at least none are named specifically (please correct me).

ska

   Our current assertion is that neural networks basically replicate the brain's function
No. Just, no. This was never really a claim made by people who understood neural nets (there was a little perceptron confusion in the 60s iirc).
SixSigma
> Our current assertion is that neural networks basically replicate the brain's function

come on, that's hyperbole

argonaut
No deep learning researcher believes neural networks "basically replicate" the brain's function. Neural nets do a ton of things brains don't do (nobody believes the brain is doing stochastic gradient descent on a million data points in mini-batches). Brains also do a billion things that neural nets don't do. I've never even taken a neuroscience class, and I can think of the following: synaptic gaps, neurotransmitters, the concept of time, theta oscillations, all or nothing action potentials, Schwann cells.

You have missed something fundamental about how the brain works. Namely, neuroscientists don't really know how it works. Neuroscientists do not fully understand how neurons in our brain learn.

According to Andrew Ng (https://www.quora.com/What-does-Andrew-Ng-think-about-Deep-L...):

"Because we fundamentally don't know how the brain works, attempts to blindly replicate what little we know in a computer also has not resulted in particularly useful AI systems. Instead, the most effective deep learning work today has made its progress by drawing from CS and engineering principles and at most a touch of biological inspiration, rather than try to blindly copy biology.

Concretely, if you hear someone say "The brain does X. My system also does X. Thus we're on a path to building the brain," my advice is to run away!"

hmate9
You are right, we do not know everything about the brain. Not even close. But neural networks are modelled on what we do know of the brain. And "milking" neural networks completely means we have created an artificial brain.
eru
Did you just ignore the first few lines of argonaut's comment?

Recently, we also introduced activation functions in our neural nets, like rectified linear and maxout, just for their nice mathematical properties and without any regard to biological plausibility. And they do work better than what we had before.
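
For reference, the two activations mentioned above are easy to state. A small illustrative sketch of their definitions follows (not any particular library's implementation):

    # The two activations mentioned above; chosen for mathematical convenience, no biology implied.
    import numpy as np

    def relu(z):
        # rectified linear unit: identity for positive inputs, zero otherwise
        return np.maximum(0.0, z)

    def maxout(z, k=2):
        # maxout: take the max over groups of k linear pieces
        # z has shape (..., units * k); output has shape (..., units)
        return z.reshape(*z.shape[:-1], -1, k).max(axis=-1)

    z = np.array([-1.5, 0.3, 2.0, -0.2])
    print(relu(z))         # [0.  0.3 2.  0. ]
    print(maxout(z, k=2))  # max over pairs: [0.3 2. ]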

JulianMorrison
But that's what technological growth is. A series of epiphanies, building on what came before.
msbarnett
Yes, but an epiphany is not evidence of an accelerating rate of epiphanies, nor evidence that such epiphanies will continue apace into the future.
JulianMorrison
You can look at the past for that, although obviously it doesn't predict the future. But it ought to be a priori obvious, at least, that the more you know (as a species), the more surface area of knowledge you have from which to synthesize the next step beyond the known.
ska
You could look at the past, but that isn't what the claim did.

In fact looking at the rate of change in applications over an "epiphany" period is probably the least useful estimate of progress & rate of change in progress.

seanwilson
> In game four, we saw Lee Sedol make a brilliant play, and AlphaGo make a critical mistake (typical of Monte Carlo-trained algorithms) following it.

Can you explain why this is typical? What can be done against this to strengthen the algorithm?

rymate1234
I can't remember where I read this, but one theory was that the move Lee Sedol made was thought to be unlikely by AlphaGo, which therefore didn't explore down that path.

When Lee Sedol made the move, the AI was in unknown territory as it hadn't explored down that avenue.

seanwilson
> When Lee Sedol made the move, the AI was in unknown territory as it hadn't explored down that avenue.

Sounds similar to what a human would do then: you wouldn't spend much time simulating in your head what would happen if your opponent made a very atypical move or a move that would seem very bad at first thought.

kqr
That's exactly it. The difference, as far as I have understood it, is that there was a similar move that is typical, but in that particular situation, pretty simple reasoning (of the highly abstract "if this then that so this must lead to that" sense) leads a human to conclude that this version of the move is superior.

So while atypical in the sense of "occurring infrequently", it was not a difficult move to find for a player of that level – all the pro commentators saw it pretty much right away.

This might be the one weakness of AlphaGo, which is interesting.

dfan
David Silver said at the beginning of the broadcast of game 5 that AlphaGo's policy network had given Lee Sedol's move 78 only a 1 in 10,000 chance of occurring.
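
A rough sketch of why such a low prior starves a branch of search: in PUCT-style selection (the general approach described in the AlphaGo paper), the exploration bonus is scaled by the policy prior, so a move with a ~1/10,000 prior almost never gets visited. The numbers and move names below are invented for illustration; this is not DeepMind's code.

    # Toy PUCT-style move selection with a policy prior.
    import math

    def puct_score(q, prior, parent_visits, child_visits, c_puct=1.0):
        # exploitation (mean value q) plus exploration scaled by the prior
        return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

    # Hypothetical root with three candidate moves (illustrative numbers only).
    moves = {
        "normal_move_a": {"prior": 0.35,   "q": 0.52, "visits": 0},
        "normal_move_b": {"prior": 0.20,   "q": 0.50, "visits": 0},
        "move_78_like":  {"prior": 0.0001, "q": 0.50, "visits": 0},
    }

    parent_visits = 1
    for _ in range(10000):          # toy selection loop
        best = max(moves, key=lambda m: puct_score(
            moves[m]["q"], moves[m]["prior"], parent_visits, moves[m]["visits"]))
        moves[best]["visits"] += 1  # (a real search would also back up a value)
        parent_visits += 1

    print({m: d["visits"] for m, d in moves.items()})
    # the low-prior move ends up with essentially none of the visits
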
dannysu
It seems that AlphaGo needs better time management skills. Not sure how that can be added. Michael Redmond mentioned that if a human player sees an unexpected move, he/she would just take all the time needed to read out the moves. AlphaGo seems to make speedy decisions even after unexpected moves.
thomasahle
Yes, that's how modern chess engines manage time. If the score suddenly, drastically changes during search, they give themselves much more time.

In all of these games, AlphaGo used close to a constant amount of time per move, while Lee's varied a lot.

Apparently they only recently added a neural net for time management. Seems it is either not the best approach, or just not yet well trained.
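
A toy sketch of the kind of time-management rule described for chess engines: if the evaluation swings sharply between search iterations, extend the thinking budget. The thresholds and names are invented for illustration.

    # Toy time-management heuristic: think longer when the evaluation swings.
    def allotted_time(base_seconds, prev_eval, curr_eval,
                      swing_threshold=0.15, panic_multiplier=4.0):
        """prev_eval / curr_eval: win-probability estimates (0..1) from
        the previous and current search iterations."""
        if abs(curr_eval - prev_eval) > swing_threshold:
            # something unexpected happened; spend extra time reading it out
            return base_seconds * panic_multiplier
        return base_seconds

    print(allotted_time(30, prev_eval=0.62, curr_eval=0.60))  # stable eval -> 30
    print(allotted_time(30, prev_eval=0.62, curr_eval=0.38))  # sharp swing -> 120.0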

ChuckMcM
In an odd way, it makes me more optimistic about fusion power plants in my lifetime. The reality is that we work on these advances but are terrible at predicting when we will achieve them, and then one day we find we have arrived.

That AlphaGo can play at this level suggests that similar techniques could help other parts of the infrastructure (like air traffic control), and that would also positively impact the quality of life for many air passengers every year.

eru
Fusion power is neat, but not really necessary.

Fusion would have similar political problems to fission; and the economics aren't much improved either.

Perhaps if we ever ran out of fissionable material, fusion would become economic.

lyle_nel
What political problems might those be?
eru
All the anti-nuclear protestors? Or, e.g., whatever made Angela Merkel turn off the German reactors after a reactor of a very different design, in a very different set of circumstances, broke in Japan.

Fusion is just yet another nuclear reactor design as far as politics might be concerned.

lyle_nel
Ah, I see. Although I would not put it past people to protest nuclear fusion, it would be strange indeed, since nuclear fusion does not produce the same kind of radioactive waste (shorter half-life) as alternative nuclear technologies.
gambler
> There's no doubt that with further refinement, we'll soon see AI play Go at a level well beyond human

No doubt? Seriously? What kind of knowledge do you have to make such statements? There are plenty of examples where technology has rapidly advanced to some remarkable level, but then almost completely plateaued. For example, space travel or Tesla's work on applications of electromagnetism. Heck, even other areas of AI research.

I really don't see why people here readily assume that this particular approach to computers playing Go is easily improvable. Neither do I see why everyone assumes there will be no discoveries of anti-AI strategies that will work well against it.

With neural networks involved, it's hard to say. And all we have so far is information about, what, 15 games? Some of which were won by people. Mind you, those people never played AlphaGo before, while the bot benefited from a myriad of training samples, as well as from the Go expertise of some of its creators.

I'm also tired of all the statements about "accelerating progress". It's not like all the AI research of the past was useless until DNNs came along. That's the narrative I often get from the media, but it misrepresents the history of the field. There was no shortage of working ML/AI algorithms in the past decades. The main problem was always at applying them to real-world things in useful ways. And in that sense, AlphaGo isn't much different from Deep Blue.

One big shift in the field is that these days a lot of AI research is done by corporations rather than universities. Corporations are much better at selling whatever they do as "useful", which isn't such a good thing in the long run. We're redefining progress as we go and moving goalposts for every new development.

mikeash
It would be a bizarre coincidence for the technology to advance so quickly and then stop right at the level of the best human players. That's especially so when there are so many big, lucrative applications for the underlying technology.
carleverett
I don't know if it would be that bizarre. Once AlphaGo can beat the best humans on Earth, what motivation is there to keep improving it? Wasn't that the goal of the project?
jacquesm
AlphaGo is still a monstrosity in terms of the hardware it requires. Improvements in AlphaGo will be reflected in the fact that it or something like it will soon sit on a tiny little computer near you. See also: what happened after the chess world champion lost to a computer.
mikeash
Advances in deep learning in general should apply here, and there's a big motivation to keep improving that. Also, Go is popular enough that it should experience the same sort of commoditization drive that advanced Chess engines did, where Deep Blue level play went from being on a supercomputer to being on a smartphone. Then, since this approach scales up with more computing power, running a hypothetical future smartphone-Go engine on a big cluster like AlphaGo has here should put it way beyond the human level.
bdamm
A critical component of AlphaGo's success is the massive training database comprising the entire history of documented professional Go games. So while AlphaGo may play the game with an inhuman clarity of reading, it is less clear that it can strategically out-match professionals in the long term, who may have an opportunity to find and exploit weaknesses in AlphaGo's process. Lee Sedol had that opportunity, of course, and he was not able to defeat AlphaGo. And how will AlphaGo improve, now that there are no stronger players from whom to train?

Will AlphaGo show us better strategies that have never been done before? In other words, can AlphaGo exhibit creative genius? It may have, but that's rather hard for us to observe.

In any case, I am looking forward to future AI vs AI games. It is still fundamentally a human endeavor.

kllrnohj
Most of AlphaGo's learning came from self-play. That is how it was able to vastly exceed the skill level of its initial training data, which were amateur, not professional, games.
PeterisP
Can't find the reference now, but in recent interviews the AlphaGo team claimed that one of their next steps would involve training a system without that training database, from scratch (simply by playing lots of games against different versions of itself), and that they estimate that it would be just a bit weaker.
YeGoblynQueenne
>> Corporations are much better at selling whatever they do as "useful", which isn't such a good thing in the long run.

Yep. There's a grave risk that funding to AI research ends up being slashed just as badly as in the last AI winter, if people start thinking that Google has eaten AI researchers' lunch with its networks and there's no point in trying anything else.

Incidentally, Google would be the first to pay the price of that, since they rely on a steady stream of PhDs to do the real research for them, but now I'm just being mean. The point is, if we overhype the goose that lays the golden eggs, we run out of eggs.

Ensorceled
I like that analogy; we have a perfectly good goose, laying nice, valuable eggs and people keep shouting "they're gold!".
dwaltrip
The deepmind team has mentioned that the technique they used to improve AlphaGo's play from October 2015 (when it beat the European champion, who was ranked #600 at that time) until now has not reached the point of diminishing returns yet.

Many Go professionals, after reviewing the two sets of games, have stated that it is quite clear how much AlphaGo has improved in those four months.

tim333
Well, little doubt. When did any technology suddenly stop improving when it reached human levels?
davnn
> There are plenty of examples where technology has rapidly advanced to some remarkable level, but then almost completely plateaued.

And that's why you assume that it does not skyrocket in the future? Predicting the future is hard either way, ask a turkey before he gets his head chopped off.

> I'm also tired of all the statements about "accelerating progress". It's not like all the AI research of the past was useless until DNNs came along.

It's not that it was useless, but AI is improving as any other field is, some say faster than most other fields, and it's becoming more useful from day to day.

My guess would also be that "with further refinement, we'll soon see AI play Go at a level well beyond human", but it's just a guess.

mda
I have almost no doubt. A few months ago they beat a weaker pro, and judging from the improvement in such a short time, I am fairly certain it will be unbeatable in a few months if they continue working on it.
kllrnohj
> No doubt? Seriously? What kind of knowledge do you have to make such statements?

Uh, click the link in the OP and find out? AI just beat a top 5 human professional 4-1. Go rankings put that AI at #2 in the world.

If AlphaGo improves at all at this point it will have achieved a level well beyond any human.

It is incredibly, ludicrously unlikely that AlphaGo has achieved the absolute peak of its design, given that it went from an Elo rating of ~2900 to ~3600 in just a few months.
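
For a sense of what that gap means, the standard Elo expected-score formula can be applied to the approximate figures quoted above (the ratings are the comment's rough numbers, not official ones):

    # Standard Elo expected-score formula, applied to the ratings quoted above.
    def expected_score(rating_a, rating_b):
        # probability-like expected score for player A against player B
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # ~2900 (around the Fan Hui match) vs ~3600 (around the Lee Sedol match)
    print(expected_score(2900, 3600))  # ~0.017: the older version would be expected
                                       # to score under 2% against the newer one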

z0r
To be fair, I think a larger sample size of human vs. computer games is needed. Let the top pros train with the computers and we can measure what level is beyond any human.
Retric
Being the best ranked player != playing well beyond humans. When the AI can play 1,000 games and never lose that's well beyond people.

Granted, chess AI is basically at that point right now. But, go AI has a ways to go.

run4yourlives2
Given the leaps of progress made between this series of games and the previous series in only a few months, I'd expect "never lose" will become a recognized reality in about a year.
Retric
Possibly, it's not clear if AlphaGo is playing better or simply approaching the game differently. Game five was close and AlphaGo seemed to mostly win due to time considerations.

PS: Honestly, it might be a year or a decade, but I suspect there is plenty of headroom to drastically surpass human play.

CamperBob2
When AlphaGo does lose, it seems to happen when outright bugs cause it to make moves that are readily recognizable as mistakes. It doesn't seem to happen because it's not quite "smart" enough, or because its underlying algorithms are fundamentally flawed.

That's a big difference. Bugs can be identified and fixed. By the time AlphaGo faces another top professional (Ke Jie?) we can safely assume that whatever went wrong in Game 4 won't happen again.

Consider how much stronger the system has become in the few months since the match against Fan Hui. Another advance like that will place it far beyond the reach of anything humans will ever be able to compete with.

ESRogs
> When AlphaGo does lose, it seems to happen when outright bugs cause it to make moves that are readily recognizable as mistakes

I'm not sure this is true. It made the wrong move at move 79 in game 4, but I'm not sure that should be considered an obvious mistake.

My understanding is that the moves that people said were most obviously mistakes later in the game were a result of it being behind (and desperately trying to swing the lead back in its favor), rather than a cause.

colllectorof
> Go rankings put that AI at #2 in the world.

Go rankings weren't designed for ML algorithms, which can have high-level deficiencies and behave erratically under certain conditions.

hosh
There is actually a lot of room for improvement. Just some of the possibilities:

(1) Better time control. Maybe when the probability of winning drops below, say, 50% but has not hit the losing threshold, spend extra time.

(2) Introducing "anti-fragility". Maybe even train the net asymmetrically to play from losing positions to gain more experience with that.

(3) Debug and find out why it plays what look like nonsensical forcing moves when it thinks it is behind (assuming that is what is actually happening).

There's another interesting thing. Among the Go community, there might initially have been some misplaced pride. But the pros and the community very quickly changed their attitude about AlphaGo (as they have in the past when something that seemed not to work proved itself in games). They are seeing an opportunity for the advancement of Go as a game. I think a lot of the pros are very curious, even excited, and might be knocking on Google's doors to try to get access to AlphaGo.

NotUsingLinux
This reads as though the professional Go world couldn't wait for this (AlphaGo) to arrive to find new ways of play and new moves.
aquadrop
> There's no doubt that with further refinement, we'll soon see AI play Go at a level well beyond human

Will we though? AlphaGo trains on human games, so can it go well beyond that level? Will it train on its own games?

krig
It is already mainly training by playing against itself:

https://googleblog.blogspot.se/2016/01/alphago-machine-learn...

> To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.

aquadrop
It's still based on human games. It plays itself, but the way it plays was inherited from humans. I wonder if there is some fundamental barrier to what you can reach with reinforcement learning depending on your base.
relic
It is based on human games until it can explore well enough to sufficiently break away from local optima.
aflinik
Having it learn on human games was just a way of speeding up the initialization process before running reinforcement learning, it didn't limit the state tree that was being searched later on.
StreamBright
It already went beyond human level; look for Go players commenting on the game, saying that they would never have thought of moves that the AI made. In a sense it brought new strategies to the table that humans can learn and apply in human vs. human games.
aquadrop
Yes, but how far can it go beyond human level? Will it be a slight margin, so it can win 4-1, or will it soon become able to beat top players with a 1, 2, or 10 stone handicap?
Cookingboy
Some high-level pros have stated that they would need a 4-stone handicap to beat the "perfect player", i.e. the "God of Go", so that would probably put a skill ceiling on this.
johnloeber
AlphaGo was actually only trained on publicly available amateur (that is, strong amateur) games. After that, AlphaGo was trained by running a huge number of games against itself (reinforcement learning).

A priori, this makes sense: you don't need to train on humans to get a better understanding of the game tree. (See any number of other AIs that have learned to play games from scratch, given nothing but an optimization function.)

aquadrop
Yes, but is it known if there's some limit to what you can reach doing this? I mean, if they trained it on games of bad amateur players instead of good, and then played itself, will it keep improving continuously to the current level or hit some barrier?
relic
There is always a risk of getting stuck in a local maximum, thinking you've found an optimal way of playing, so you'd need more data that presents different strategies, I'd think.
johnloeber
That's why they only initially trained it on human players, and afterwards, they trained it on itself. I would guess (strongly emphasize: guess) that they trained it on humans just to set initial parameters and to give it an overview of the structure and common techniques. It would've probably been possible to train AlphaGo on itself from scratch, but it would've taken much longer -- amateur play provides a useful shortcut.

I don't think there is a theoretical upper limit on this kind of learning. If you do it sufficiently broadly, you will continuously improve your model over time. I suppose it depends to what extent you're willing to explicitly explore the game tree itself.
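
A minimal sketch of self-play reinforcement learning on a toy game, in the spirit of the loop described above: one policy plays both sides and the winner's moves are reinforced. The game (a tiny Nim variant: take 1-3 stones, taking the last stone wins), the tabular policy, and the hyperparameters are stand-ins chosen for illustration; this is not AlphaGo's actual training procedure.

    # Toy self-play REINFORCE on a Nim-like game (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10                            # starting pile size
    logits = np.zeros((N + 1, 3))     # tabular policy: pile size -> 3 action logits

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def play_one_game():
        """Both sides sample from the same policy; return per-player move logs and the winner."""
        pile, player, moves = N, 0, {0: [], 1: []}
        while True:
            probs = softmax(logits[pile])
            action = rng.choice(3, p=probs)      # take 1, 2 or 3 stones
            moves[player].append((pile, action))
            pile -= min(action + 1, pile)
            if pile == 0:
                return moves, player             # this player took the last stone and wins
            player = 1 - player

    for game in range(20000):
        moves, winner = play_one_game()
        for player, played in moves.items():
            reward = 1.0 if player == winner else -1.0
            for pile, action in played:
                probs = softmax(logits[pile])
                grad = -probs
                grad[action] += 1.0                    # d log pi(action | pile) / d logits
                logits[pile] += 0.05 * reward * grad   # REINFORCE update

    # After training, the policy should (roughly) prefer leaving a multiple of 4 stones.
    for pile in range(1, N + 1):
        print(pile, np.round(softmax(logits[pile]), 2))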

colllectorof
> A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level.

Any sources for this statement? I've seen it repeated over and over again, but without any specific examples of who those experts were or what they said.

mcv
> There's no doubt that with further refinement, we'll soon see AI play Go at a level well beyond human

Why is there no doubt? I strongly doubt there even exists a go level that's well beyond human. There is hypothetical perfect play of course, but there is absolutely no way to guarantee perfect play. And while I have no way to judge, I've heard that 9p players may not be all that far removed from perfect play. One legendary player once boasted that if he had black (no komi, I assume), he would beat God (who of course plays perfect go).

There is of course no way to know if that's true or gross overconfidence, but it's certainly possible that there's not all that much room left beyond the level of 9p players.

AlphaGo will no doubt improve, and reduce the number of slips like its move 79 in the 4th game, but it's never going to be perfect, and there's always the chance that it will miss an unexpected threat.

TwoBit
Given the crushing that AlphaGo did, I don't believe your statement about humans having near perfect play.
goldbrick
Can you quantify "crushing"?
eru
4 - 1.
mcv
Not all humans, obviously, but 9p players really are far, far better than other players. And there's another 9p who has won 8 out of 10 matches against Lee Sedol, so there's nothing superhuman about a 4-1 result at that level.

I'm really just objecting to the description of this as "beyond human". Yes, it's good, and it's many orders of magnitude beyond my level, but so are Lee Sedol and other 9p players.

baddox
You could always argue what "a level well beyond humans" means, but I'd say if a computer can consistently dominate the best human players that would count.
cgearhart
>A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level.

These kinds of predictions are almost always useless. You can always find people who say it'll take n years before x happens, but no one can predict which approaches will work, and how much improvement they'll confer.

> What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

Appreciate it for what it is - an historic achievement for AI & ML - and stop trying to attach broader significance to it.

jmathes
> Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

Advancement faster than predicted, coupled with the (true) fact that people's predictions tend to assume a constant rate of advancement [citation needed], does mean accelerating advancement. Actually, all you'd need to show accelerating advancement is a trend of conservative predictions and the fact that these predictions assume a non-decreasing rate of advancement; if we're predicting accelerating advancement and still underestimating its rate, advancement must still be accelerating.

It even seems like this latter case is where we're at, since people who assume an accelerating rate of advancement seem to assume that the rate is (loosely) quadratic. However, given that the rate of advancement tends to be based on the current level of advancement (a fair approximation, since so many advancements themselves help with research and development), we should expect it to be exponential. That's what exponential means.

However, the reality seems like it might be even faster than exponential. This is what the singularitarians think. When you plot humanity's advancements using whatever definition you like, look at the length of time between them to approximate rate, and then try to fit this rate to a regression, it tends to fit regressions with vertical asymptotes.
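
The distinction being drawn (rate proportional to the current level gives an exponential; faster-than-proportional growth gives a curve with a vertical asymptote) can be made concrete with a toy numerical integration. The constants here are arbitrary.

    # dx/dt = k*x    -> exponential growth, finite at every time t
    # dx/dt = k*x**2 -> "hyperbolic" growth, diverging at a finite time (vertical asymptote)
    def integrate(rate, x0=1.0, dt=1e-4, t_max=5.0, blowup=1e12):
        x, t = x0, 0.0
        while t < t_max:
            x += rate(x) * dt
            t += dt
            if x > blowup:
                return t, float("inf")   # effectively diverged at time t
        return t, x

    print(integrate(lambda x: x))      # exponential: x(5) is about e^5, roughly 148
    print(integrate(lambda x: x * x))  # blows up around t = 1 (exact solution is 1/(1-t))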

ergothus
> These kinds of predictions are almost always useless. You can always find people who say it'll take n years before x happens, but no one can predict which approaches will work, and how much improvement they'll confer.

True, but it's pretty refreshing to have a prediction about AI being N years from something that is wrong in the OTHER direction.

Contrary to your point about 'appreciate it for what it is', there is ONE lesson I hope people take from it: You can't assume AI progression always remains in the future.

A general cycle I've seen repeated over and over:

* sci-fi/futurists make a bunch of predictions
* some subset of those predictions are shown to be plausible
* general society ignores those possibilities
* an advancement happens with general societal implications
* society freaks out

Whether it's cloning (à la Dolly the Sheep, where people demonstrated zero understanding of what genetic replication was, e.g. a genetic clone isn't "you"), or self-driving cars (after decades of laughing at the idea because "who would you sue?", suddenly society is scrambling to adjust because it never wanted to think past treating that question as academic), or everyone having an internet-connected phone in their pocket (see encryption wars...again), or the existence of a bunch of connected computers with a wealth of knowledge available, society has always done little to avoid knee-jerk reactions.

Now we have AI (still a long way off from AGI, granted) demonstrating not only can it do things we thought weren't going to happen soon (see: Siri/Echo/Cortana/etc), but breaking a major milestone sooner than most anyone thought. We've been told for a long time that because of typical technology patterns, we should expect that the jump from "wow" to "WOW!" will happen pretty quickly. We've got big thinkers warning of the complications/dangers of AI for a long time.

And to date, AI has only been a big joke to society, or the villain of B-grade movies. It'd be nice, if just once, society at least gave SOME thought to the implications a little in advance.

I don't know when an AGI will occur - years, decades, centuries - but I'm willing to bet it takes general society by surprise and causes a lot of people to freak out.

sergiosgc
> > What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

> What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

It's not a non-sequitur, but there is an implicit assumption you perhaps missed. The assumption is that the human failure to predict this AI advance is caused by an evolution curve with order higher than linear. You see, humans are amazingly good at predicting linear change. We are actually quite good at predicting x² changes (frisbee catching). Higher than that, we are useless. Even at x², we fail in some scenarios (braking distance at unusual speeds, like 250km/h on the autobahn for example).

That it will maintain its pace is an unfounded assumption. However, assuming that the pace will slow is just as unfounded. All in all, I'd guess it is safest to assume tech will evolve as it has over the last 5000 years.

That would be an exponential evolution curve.

angstrom
These kinds of statements are only valuable to me if they are followed by "And these are the challenges that need to be overcome, which are being worked on".

Otherwise it's a blanket retort. It's like saying "There are lots of X".

Ok, name 7. If you get stuck after 2 or 3 you're full of it.

kamaal
>>You can always find people who say it'll take n years before x happens

Interesting, people seem to be saying the same about self driving cars.

lefnire
You sound like the kinda person who says "AI will never drive," "AI will never play Go." True, there's a lot of hype, which ML experts are concerned may lead to another bubble burst & winter. On the flip side, there's a lot of curmudgeonly naysayers such as yourself, at whom ML experts roll their eyes as they forge ahead. What I find is that both extremes don't understand ML; they're just repeating their peers. ML is big, and it's gonna do big things. Not "only Go", not "take over the world"; somewhere in between.
cgearhart
I'm actually very optimistic about the state of AI and ML lately. The difference is that I don't anthropomorphize the machines or ascribe human values to their behavior. I absolutely believe AI will drive (and save lives); I have always believed that AI will play Go; I believe that AI will grow to match and surpass humans in many things we assume that only humans can do. Humans aren't perfect, but that doesn't mean that machines who outperform us are perfect either.

AlphaGo plays Go. It probably doesn't play Go like a human (because a human probably can't do what it does), but that's OK because it also appears to be better than humans. AlphaGo is interesting not because it has done something impossible, but because it has proven possible a few novel ideas that could find other interesting applications, and adds another notch to the belt of a few other tried and tested techniques.

johnloeber
> These kinds of predictions are almost always useless.

Let's rephrase. For a long time, the expert consensus regarding Go was that it was extremely difficult to write strongly-performing AI for. From the AlphaGo Paper: Go presents "difficult decision-making tasks; an intractable search space; and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function."

For many years, the state-of-the-art Go AI stagnated or grew very slowly, reaching at most the amateur dan level. AlphaGo presents a huge and surprising leap.

> Continued advancement doesn't mean that it is accelerating

Over constant time increases, AI is tackling problems that appear exponentially more difficult. In particular, see Checkers (early '90s) vs Chess ('97) vs Go ('16). The human advantage has generally been understood to be the breadth of the game tree, nearly equivalent to the complexity of the game.

If we let x be the maximum complexity of a task at which AI performs as well as a human, then I would argue that x has been growing at an accelerating pace over the past few decades.

xigency
> AI is tackling problems that appear exponentially more difficult.

The hardest AI problems are the ones that involve multiple disciplines in deep ways. Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

There might be some cases where this is possible, and some cases are bound to fail.

Those are the kind of difficult problems in AI, which combine knowledge, understanding, thought, intuition, inspiration, and perspiration - or demand invention. We would be lucky to make linear progress in this area let alone exponential growth.

I think there's certainly an impression of exponential progress in AI in popular culture, but the search space is greater than factorial in size, and I think hackers should know that.

jrock08
While I understand what you are getting at here (basically, this is still just a complete-information game, and it didn't solve AI), you are drastically understating the complexity of Go. It isn't actually possible to evaluate a significant fraction of the state tree in the early midgame because the branching factor is roughly 300. The major advance of AlphaGo is a reasonable state-scoring function using deep nets.

Unless you have been or are a PhD student in AI who has kept up with the current deep net literature, I assure you that the whole of AlphaGo will be unintuitive to you. However, if you were an AI PhD student, you likely wouldn't be so dismissive of this achievement.

eru
> The major advance of AlphaGo is a reasonable state scoring function using deep nets.

That and the policy network to prune the branching factor.

v64
> Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

I would consider it a breakthrough if we could get human beings to do this at a decent rate :)

kamaal
Here's a top tier human intelligence problem: given a requirement, provide an accurate English description of a program.
Tloewald
An even harder and more common problem: given code, give a plain English description of what it is intended to do, and describe any shortcomings of the implementation.
codeulike
Yeah e.g. you could get it to check whether it could go into an infinite loop.

Oh wait .... https://en.wikipedia.org/wiki/Halting_problem
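
For readers who haven't seen it, the standard diagonal argument is short enough to sketch in code; the names are illustrative:

    # Sketch of the halting-problem diagonal argument (illustrative only).
    def paradox_against(halts):
        """Given any candidate halts(program, arg) predicate, build a program it gets wrong."""
        def g(_):
            if halts(g, g):
                while True:       # halts claims g(g) halts, so loop forever
                    pass
            return "halted"       # halts claims g(g) loops, so halt immediately
        return g

    # Example: an oracle that claims everything halts is defeated on its own diagonal:
    g = paradox_against(lambda program, arg: True)
    # g(g) would now loop forever, contradicting the oracle; no total, correct halts() can exist.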

Retra
You could for all practical purposes. The Halting problem only applies in full generality when you're considering all possible programs, but you really only need to consider the well-written ones, because then you can filter out the poorly written ones.
dmoy
Wait what is the plan to brute force go? The search space is beyond immense...
the_af
> To be fair, in terms of the complexity of rules, checkers is easier to understand than go which is easier to understand than chess. And honestly, go seems like the kind of brute-force simple, parallel problem that we can solve now without too much programming effort

Your intuition is mistaken. Go is indeed "easier to understand" than Chess in terms of its rules, but it is arguably harder to play well and has a far larger search space, which makes it less amenable to brute force. This is precisely why people thought it would be impossible for a computer to play it consistently at champion level.

I don't think the achievement of AlphaGo is solely due to increased processing power, otherwise why did people think Go was such a hard problem?

xigency
Sure.
Retric
The problem with Go was evaluating leaf nodes. Sure, you could quickly enumerate every possible position 6 moves out, but accurately deciding whether position 1 is better than positions 2 through 2 billion is a really hard problem.

In that respect chess is a much simpler problem, as you remove material from the board, prefer some locations over others, etc., whereas Go is generally going to have the same number of pieces on each board and it's all about balancing local and board-wide gains.
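
To illustrate the contrast: a crude chess leaf evaluation can be sketched from material counts alone, whereas Go has no comparable shortcut. This is a toy example, not any real engine's evaluation function.

    # Crude chess leaf evaluation from material only (toy example).
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def material_eval(pieces):
        """pieces: iterable of piece letters, uppercase = white, lowercase = black."""
        score = 0
        for piece in pieces:
            value = PIECE_VALUES.get(piece.upper(), 0)
            score += value if piece.isupper() else -value
        return score

    # White is up a rook for a knight in this toy position:
    print(material_eval(["K", "Q", "R", "P", "P", "k", "q", "n", "p", "p"]))  # +2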

nilkn
> it is arguably harder to play well and has a way larger search space, which makes it less amenable to brute force, and this was precisely why people thought it'd be impossible for a computer to play it consistently at champion level.

Are human champions not subject to those same difficulties of the game, though? When you're pitting the AI against another player who's also held back by the large branching factor of the search tree, then how relevant really is that branching factor anyway in the grand scheme of things? A lot of people talk about Go's search space as if human players magically aren't affected by it too. And the goal here was merely to outplay a human, not to find the perfect solution to the game in general.

(These are honest questions -- I am not an AI researcher of any kind.)

xom
quoting https://news.ycombinator.com/item?id=10954918 :

> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.

> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because humans brains are innately wired to be better at Go than at Chess. The vision-area of the human's brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.

the_af
This is a great question!

Sadly, I'm neither an AI researcher nor a Go player; I think I've played less than 10 games. I don't know if we truly understand how great Go players play. About 10 years ago, when I was interested in Go computer players, I read a paper (I can't remember the title, unfortunately) claiming that the greatest Go players cannot explain why they play the way they do, and frequently mention their use of intuition. If this is true, then we don't know how a human plays. Maybe there is a different thought process which doesn't involve backtracking a tree.

jdietrich
Go players rely heavily on pattern recognition and heuristics, something we know humans to be exceptionally good at.

For example, go players habitually think in terms of "shape"[1]. Good shape is neither too dense (inefficiently surrounding territory) nor too loose (making the stones vulnerable to capture). Strong players intuitively see good shape without conscious effort.

Go players will often talk about "counting" a position[2] - consciously counting stones and spaces to estimate the score or the general strength of a position. This is in contrast to their usual mode of thinking, which is much less quantitative.

Go is often taught using proverbs[3], which are essentially heuristics. Phrases like "An eye of six points in a rectangle is alive" or "On the second line eight stones live but six stones die" are commonplace. They are very useful in developing the intuition of a player.

As I understand it, the search space is largely irrelevant to human players because they rarely perform anything that approximates a tree search. Playing out imaginary moves ("reading", in the go vernacular) is generally used sparingly in difficult positions or to confirm a decision arrived at by intuition.

Go is the board game that most closely maps to the human side of Moravec's paradox[4], because calculation has such low value. AlphaGo uses some very clever algorithms to minimise the search space, but it also relies on 4-5 orders of magnitude more computer power than Deep Blue.

  [1] https://en.wikipedia.org/wiki/Shape_(Go)
  [2] http://senseis.xmp.net/?Counting
  [3] https://en.wikipedia.org/wiki/Go_proverb
  [4] https://en.wikipedia.org/wiki/Moravec%27s_paradox
ekianjo
> If we let x be the maximum complexity of a task at which AI performs as well as a human, then I would argue that x has been growing at an accelerating pace over the past few decades.

At ONE task, yes. But humans, while average at many things, excel at being able to adapt to many different tasks, all the time. Typical AIs (as we know them now) cannot ever hope to replicate that.

celticninja
This seems to have been linked to a lot recently but I feel it is relevant to the discussion on technology advances pertaining to AI.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

skj
"and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function."

To be clear, the above refers to specific concepts in Reinforcement Learning.

A policy is a function from state (in Go, where all the stones are) to action (where to place the next stone). I agree that it is unlikely to have an effective policy function, at least one that is calculated efficiently (no tree search)... otherwise it's not what a Reinforcement Learning researcher typically calls a policy function.

A value function is a function from state to numerical "goodness", and is more or less one step removed from a policy function: you can choose the action that takes you to the state with the highest value. It has the same representational problems found there.
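
A minimal sketch of those two concepts on a toy one-dimensional "board", including deriving a greedy policy from a value function exactly as described above. The example is purely illustrative and unrelated to how AlphaGo represents Go.

    # Toy policy and value functions on a 1-D board: states 0..4, the goal is state 4.
    N_STATES = 5
    ACTIONS = [-1, +1]                    # step left / step right

    def value(state):
        # value function: state -> numerical "goodness" (here, closeness to the goal)
        return -(N_STATES - 1 - state)    # 0 at the goal, more negative further away

    def policy(state):
        # a policy derived from the value function: pick the action whose
        # successor state has the highest value
        def successor(a):
            return min(max(state + a, 0), N_STATES - 1)
        return max(ACTIONS, key=lambda a: value(successor(a)))

    for s in range(N_STATES):
        print(s, "->", policy(s), "value:", value(s))
    # every state picks +1 (move toward the goal): the greedy policy w.r.t. the value function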

Mar 15, 2016 · 7 points, 0 comments · submitted by kelseydh
Mar 15, 2016 · 3 points, 0 comments · submitted by pkrumins
Mar 15, 2016 · 5 points, 0 comments · submitted by kbwt
Mar 14, 2016 · 6 points, 0 comments · submitted by dsr12
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.