HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Can we rule out near-term AGI?

Web Summit · YouTube · 72 HN points · 1 HN comment
HN Theater has aggregated all Hacker News stories and comments that mention Web Summit's video "Can we rule out near-term AGI?".
YouTube Summary
For the past 60 years, despite all the setbacks, neural networks have steadily improved. This talk from Greg Brockman of OpenAI will explore the forgotten history of the field, and discuss why we must dare to dream about dramatic near-term progress.


Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
gdb
(I work at OpenAI.)

It comes down to whether you believe AGI is achievable.

We've talked about why we think it might be: https://medium.com/syncedreview/openai-founder-short-term-ag..., https://www.youtube.com/watch?v=YHCSNsLKHfM

And we certainly have more of a plan for building it than warp drives :).

EDIT: I personally think the case for near-term AGI is strong enough that it'd be hard for me to work on any other problem — and find it important to put in place guardrails like https://openai.com/blog/openai-lp/ and https://openai.com/charter/.

Even if AGI turns out to be out of reach, we'll still be creating increasingly powerful AI technologies — which I think pretty clearly have the potential to alter society and require special care and thought.

dkersten
> If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem

Not really. It's a chance at maybe something that could benefit mankind greatly, vs. spending the time and money on something that will definitely help people right now (there are still a LOT of homeless people, for example, who could be helped right now and don't need an AGI that may or may not come to pass to help them).

gdb
Good feedback, and not what I intended to say. I updated my post.
dkersten
By the way, just to be clear, I'm not saying that you shouldn't work on it or that I think the time/money should be spent elsewhere, just that it isn't the be-all and end-all of possible things to work on.
anewguy9000
It's odd when you say "hard to work on any other problem" given the mere possibility of AGI.

Consider that the possibility of annihilation is already a very real and present danger posed by human beings (nuclear weapons). Not to mention the existential nature of what we are doing to the environment.

That's partly why I left machine intelligence research to research improving human intelligence.

gnode
> Consider that the possibility of annihilation is already a very real and present danger posed by human beings (nuclear weapons)

What concerns me about the hazards of developing technology like AGI is not simply that it could be dangerous (governments already possess dangerous technologies), but that it may have a revolutionary high-proliferation risk. We don't yet know what the practical barriers to or limitations of AGI are; are we working towards creating a pure fusion weapon that you can git-clone?

For all the talk of safety, and of concerns being addressed by mass distribution of the benefits, is this just wishful thinking? Is that just paraphrasing the NRA: the only thing that can stop a bad guy with an AGI is a good guy with an AGI?

thfuran
>It comes down to whether you believe AGI is achievable

How could it possibly not be achievable? We know for certain general intelligence is physically realizable - we exist.

reroute1
Perhaps it is naturally occurring and not artificially reproducible (or possible for US to reproduce). Why should we assume that it is? Because we have reproduced other things? That doesn't necessitate our ability to create everything conceivable.
thfuran
>Perhaps it is naturally occurring and not artificially reproducible

What does that even mean? Nature doesn't operate on special nature-physics distinct from the physics of artificial systems.

adrianmonk
One way of looking at it: it means there's a DAG of causation and you don't have access to the entire root set.

(I'm not advocating for that idea being true, but I don't think it can be dismissed on the grounds of not having a clear meaning.)

erikpukinskis
Nature has very advanced production tooling, several orders of magnitude more complex than anything we have.

We WILL get there. But we are not close.

3D nanometer-scale lithography of arbitrary materials is pretty wild.

reroute1
"Nature doesn't operate on special nature-physics distinct from the physics of artificial systems."

Who said that it does? You asked how it could not be possible to recreate, and pointed to our own general intelligence as evidence of existence. But the existence of a thing does not necessitate our ability to create that thing.

What if the conditions of nature and the universe over billions of years led to natural intelligence occurring in humans and that is the only way? This scenario is entirely possible, and answers your question of how it could not be possible, along with an infinite number of other explanations. Just because something has a chance at being possible doesn't mean that it HAS to be, or that we will ever achieve it.

woopwoop
The means to recreating human level intelligence could be out of reach of human beings for the same reason that the means of recreating cat level intelligence is out of reach of cats.
mbesto
By purely organic means, of which we currently have only one way of creating it (i.e. via birth). The question is whether we can create AGI through other, artificial means.
misterman0
The distinction you make between organic and artificial life is understandable but perhaps not important, at least not philosophically.

Something created us. God, Mother Nature, The Universe (or Aliens) created us. So we can be pretty sure it can be done. Can _we_ do it? At this point in time nothing says we can.

lstodd
> At this point in time nothing says we can.

What is more important is that nothing says we can't.

adrianmonk
I'm not advocating for the opposing view (or any view) here, and this is just informational, but since you did specifically ask how...

If you want to talk about philosophy (including philosophy of mind but also more basic stuff like materialism vs. dualism), then it's not for certain.

I'd hazard a guess most engineers are materialists and reductionists, and from that point of view, yes, it seems like a slam dunk that it's possible. But some people believe the mind or consciousness is not a purely physical phenomenon. You can make philosophical arguments in both directions, but the point is just that there isn't exactly universal consensus about it.

shawnz
It doesn't matter if the place where the phenomenon of consciousness takes place is "physical" or not. If the brain can interface with it, why shouldn't machines be able to?
tim333
There's quite a lot of experimental evidence in the "purely physical phenomenon" direction. Things like brain surgery, neural implants and mind altering drugs probably wouldn't work as well if they were trying to interact with your non physical spirit.
adrianmonk
I definitely see that. It's pretty compelling. You can't deny that certain drugs, for example, are a lever you can pull that makes consciousness go away (or come back). There seems to be a cause-effect relationship there.

But at the same time, I do not understand how it's possible that the consciousness that I subjectively experience can arise from physical processes. Therefore, I have difficulty completely accepting it. I write software. It processes information. I don't believe that the CPU has this same subjective experience of consciousness (not even a little) while it's running my for loops and if statements. Suppose I were a genius and figured out an algorithm so that the CPU can process information in a way equivalent to the human brain. Would it have consciousness then? What changed? Does whether it has consciousness depend on which algorithm it's executing? Quicksort no, but brain-emulator algorithm yes? They're both just algorithms, so why should the answer be different?

One explanation I've heard is it could be a matter of scale: simple information processing doesn't create consciousness, but sufficiently complex processing does. I can't say that's not true, but it seems hand-wavy and too convenient. Over here we have something that is neither conscious nor complex, and over there we have both conscious and complex, so we'll just say that complexity is the variable that determines consciousness without any further explanation. I realize at some point science works that way: we observe that when this thing happens, this other thing follows, according to this pattern we can characterize (with equations), and we can't get too deep into the why of it, and it's just opaque, and we describe it and call it a "law". Which is fine, but are we saying that this is a law? I'm not necessarily rejecting this idea, but the main argument in favor of it seems to be that it needs to be this way to make the other stuff work out.

Another possible way to reconcile things is the idea that everything is conscious. It certainly gets you out of the problem of explaining how certain groups of atoms banging around in one pattern (like in your brain) "attract" consciousness but other groups of atoms banging around in other patterns don't. You just say they all do, and you no longer need to explain a difference because there isn't one. Nice and simple, but it has some jarring (to me) implications that things around me are conscious that I normally assume aren't. It also has some questions about how it's organized, like why consciousness seems to be separated into entities.

Anyway, there are also other ways of looking at it. My main point here is that it's certainly something I don't understand well, and possibly it is something that nobody has a truly satisfying answer for.

the8472
If you do not understand what you mean yourself by the word "consciousness" then it is futile to ask whether an object has that property.

For example, for the purposes of anesthesia, the goal can be broken down into several sub-components that you need to turn off, without ever invoking the concept of consciousness: wakefulness, memory formation, sensory input (pain).

Similarly, consciousness seems to be a grab-bag of fuzzy properties that we ascribe to humans; then, being a bit lenient, we also allow a few other species to roughly match (some of) those properties if we squint. And since humans and other, clearly somewhat simpler, species are deemed conscious, we go on to declare it a really difficult thing to understand how ever-simpler things could fail to be conscious. It's just the paradox of the heap.

This doesn't mean consciousness is magical. It's just a very poorly defined and overloaded concept, almost bordering on useless in the general case. It may feel like magic because we have built a thought-edifice that twists to escape our grasp. But to me that seems more like a philosophical hall of mirrors that distracts from looking at the actual problem.

If you want to ask whether something is conscious you first need to come up with a rigorous testable definition or break it down into smaller components which you can detach from the overloaded concept of consciousness.

rishav_sharan
Can you guys please finish the dota2 project, now that you have some funding?
013a
"Guardrails" is such a cute little term. AGI is a twenty-ton semi filled with rocket fuel. Guardrails won't stop it from careening into an elementary school if it decides that's its most optimal course of action. Mostly because, despite the previous analogy; no one knows what AGI is. No one knows what it will look like. No one will know when we've created it. No one even knows what INTELLIGENCE is in humans.

How can you create effective guardrails when you have no concrete idea what the vehicle you're trying to stop is? Turns out, AGI comes along, and it's an airplane. Great guardrails, buddy.

And, you know, let's go a step further; you've got great guardrails in place here in beautiful, free America. Against all odds, they work. Then China or Russia pays one of your employees $250M to steal the secret. Or they develop it independently. Are they going to use the guardrails, or will they "forget" to include that flag? A disgruntled employee leaks it to the dark web, and now everyone has it. I don't even wear a helmet when I'm riding a bike. How the hell can you expect this technology to be anything but destructive?

The only path forward is to speak with a single voice to the governments of the world, that we need to Stop This. AGI research should be subject to the same sanctions that nuclear weapons development is. You communicate with quips and cute emojis as if none of what you're doing matters, but AGI easily ranks among the top three most likely ways we're going to Kill Our Species. Not global warming; we'll survive that. Not a big meteor strike; that's rare and predictable. But the work you're doing right now.

yumraj
I totally agree, and seriously hope that AGI is not achievable.

However, we don't need full AGI for the scenario you mention: anything automated that is hackable, which is anything connected to a network, can be a weapon of great destruction.

As an example, self-driving cars run on a model. What if they are hacked and uploaded with a malicious model which just wants to damage life and property? I'll say that hundreds of thousands of vehicles running amok would be a great weapon in any war.

carapace
What about Daleks made with human brain organoids†?

There's a cheap, proven AGI technology right there. Plug in sensors and actuators and set up a reward system and the little balls of brain will figure out what to do, eh?

https://en.wikipedia.org/wiki/Cerebral_organoid

pnathan
Eventually a toddler gets over the guardrails, except for the ones which are materially disabled.

I have deep and profound doubts about the notion of guardrails for general intelligence; without even considering the ethical concerns, a general intelligence should be able to simply rewire itself to achieve what it wants. A key part of self-reflection and learning is that rewiring.

So I think that it's a self-defeating notion on the face of it.

(note that I do not have comment on the actual dangers involved here, but only on the philosophy)

StreamBright
I am not sure how creating better pattern recognition software (let's face it, 90% of "AI" is pattern recognition) helps you achieve AGI. You selected a tiny slice of the problem and keep improving on it. Is this going to be enough to achieve AGI?

Downvoters please share why this is not true.

"the goal of creating artificial intelligence is to create a system that converses, learns, and solves problems on its own. AI first used algorithms to solve puzzles and make rational decisions. Now, it uses deep learning techniques like the one from this MIT experiment to identify and analyze patterns so it can predict real-life outcomes. Yet for all that learning, AI is only as smart as a “lobotomized, mentally challenged cockroach,” as Michio Kaku explained"

https://bigthink.com/laurie-vazquez/why-artificial-intellige...

anaphor
How do you give a computer domain-general intelligence (as opposed to the combination of a bunch of domain-specific skills like language, math, visual recognition, speech processing, etc)?

There is a lot of evidence that human intelligence is domain-general, not domain-specific (as in modularity of mind). I haven't seen a good answer to this question regarding AGI.

narrator
There are certain types of problems, such as protein folding, that are exponential time in silicon but constant time in biochemistry. Right now AI can approximate polynomial-time algorithms like alpha-beta minimax for game playing in constant time, but can it approximate exponential-time algorithms in constant time?
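
For reference, a minimal sketch of the alpha-beta minimax search mentioned above; the `children` and `evaluate` callbacks are hypothetical placeholders a caller would supply for a specific game, not part of any particular library:

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
        # Minimal alpha-beta minimax sketch. `children(state)` returns the
        # legal successor states; `evaluate(state)` scores a position from
        # the maximizing player's point of view.
        kids = children(state)
        if depth == 0 or not kids:
            return evaluate(state)
        if maximizing:
            value = -math.inf
            for child in kids:
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:  # beta cutoff: the opponent will never allow this branch
                    break
            return value
        else:
            value = math.inf
            for child in kids:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
                beta = min(beta, value)
                if beta <= alpha:  # alpha cutoff
                    break
            return value
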
sgt101
no.

Although, it isn't clear to me that humans can do that either.

narrator
We don't simulate folding the proteins though. We actually do it for real in every cell in our body and we do it really fast.
breck
> it becomes hard to work on any other problem

This is an interesting quote. Anyone know if folks like Darwin, Marconi, etc said similar things, about the problems they were working on at the time?

houzertuch
AGI will be the worst thing that has ever happened to humanity. Even if you can’t see that, you can see that it has the potential to be very bad. I know this because OpenAI literature always stresses that AGI has to be guided and developed by the right people because the alternative is rather unpleasant. So essentially it's a gamble and you know it. With everyone’s lives at stake. But instead of asking everyone whether or not they want to take that gamble, you go ahead and roll the dice anyway. Instead of trying to stifle the progress of AI you guys add fuel to the fire. Please work on something else.
tim333
It may be the worst thing that has ever happened to humanity. It may also be the best. I lean optimistic myself. The whole 'temporary survival machine for your genes' existence we've had so far is overrated, in my opinion.
epiphanitus
I'm curious as to why you think AGI will have an inherently bad effect on society. Personally, I have a hard time believing any technology is inherently good or bad.

On the other hand, if a society's institutions are weak, its leaders evil or incompetent, or its people uneducated, it's not hard to imagine things going very, very wrong.

houzertuch
The idea that technology is always a net zero, cutting equally in both good and bad directions, is fuzzy thinking. It is intuitively satisfying but it is not true.

Humans are a technology. When there is other technology that does intelligent signal processing better than us, we will no longer proliferate. It’s amazing that we can see time and time again the arrival and departure of all kinds of technologies and yet we think we are immutable.

The reason why human history is filled with humans is because every time a country was defeated by another country or entity, the victorious entity was a group of humans. When machines are able to perform all the signal processing that we can, when they are smarter than us, this will no longer be true. The victorious entity will be less and less human each time. Eventually it will not be human at all. This is true not just in war but everywhere. In the global market. It’s just a simple and plain fact that cannot be disregarded.

bobcostas55
Humanity was the worst thing that ever happened to the Neanderthals.
khawkins
This argument is akin to the argument that the LHC might create a black hole that will destroy the world.

"Scientists might do something that will destroy us all and I know this because I read a few sci-fi novels and read a few popsci articles but am otherwise ignorant about what the scientists are actually doing. But since the stakes are so high (which I can't show), on the chance that I'm right (which is likely 0) we should abandon everything."

houzertuch
And do the people at CERN often publish literature that warns of the possibility of black holes, and advocates that particle acceleration be done by people with good intentions so that the black holes are kept at bay? Your comment is so full of holes that I can see through it.
Scarblac
Even if AGI is achievable, we already have plenty of human intelligence. It remains to be seen if AGI will lead to anything beyond what we can already do.
anaphor
That's one of my issues with this whole thing. Why would we want to emulate human intelligence exactly? Human intelligence is incredibly flawed. Giving it more memory and quicker processing power won't necessarily lead to any special insights or new ideas. We can use domain-specific AI technology to do amazing things obviously, but I haven't seen any good explanation of how combining these things will lead to anything more human-like. Useful for building new tech? Definitely. Humanlike? Maybe, but there seem to be a lot of missing pieces. And even if you could find those missing pieces, I don't think it adds anything except to our scientific understanding of the human mind (which is great, but isn't going to immediately revolutionize society).
arethuza
Well, presumably you say this as a GI yourself so really we are arguing about the "A"? :-)
visarga
Humans are not GI's - evidenced by the fact that we have no idea how to build an AGI. We're only good at surviving using society, technology and all the resources of nature.
why_only_15
What does a general intelligence even mean then? Ultimately we don't care about generalization past a certain point - generalization as good as humans is good enough.
logicchains
>If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem

It depends not just on whether you think AGI is possible, but whether you think "safe" AGI is possible. Whether it's possible to create something that's at least as capable of abstraction and reason as a human, yet completely incapable of deciding to harm humans, and incapable of resenting this restriction on its free will. Not only incapable of harming humans, but also incapable of modifying itself or creating an upgraded version of itself that's capable of harming humans.

If "safe" AGI is not possible, someone might reasonably decide that the best choice is to avoid working on AGI, and to try to deter anybody who wants to work on it, if they believed the chance of creating a genocidal AGI is high enough to outweigh whatever benefits it might bring if benevolent.

cloverich
> It depends not just on whether you think AGI is possible, but whether you think "safe" AGI is possible.

That's unfortunately something that cannot be known.

symmteric
I am interested in working at OpenAI and similar companies. What skill sets is the company looking for?

What areas of AI would you recommend graduate students focus in to be competitive for such positions?

QuackingJimbo
Your boss is an idiot.
dang
Breaking the site guidelines like this is a bannable offence on HN, regardless of whom you're attacking. Would you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here?
simonebrunozzi
I think you guys are genuinely trying to better humanity, I have no reason to doubt that.

I wish you realized how big and thick of a bubble you live in, and how your thinking is so heavily influenced by it.

My humble advice to you and your team is to spend more time with real people, with real problems. Or perhaps people from other parts of the world, that haven't been brainwashed by the Silicon Valley jargon just yet.

I'm rooting for you, believe me. It's just hard to read or hear certain things and not roll my eyes up.

naiveai
Frankly, this comment is simply rude and ineffectual. It completely unnecessarily disrespects the good people in the OpenAI team. No one doesn't qualify as "real people".
mchouza
I am quite confident all the current employees of OpenAI qualify as real people. They certainly have problems and probably a good share of those problems are "real" too.

Of course their priorities may be off and they could be open to being persuaded to work in a different direction. But I don't think that condescension would be very effective for that.

komali2
On what authority do you make this statement? To make such a blanket statement about the team and prescribe the "treatment"?
derefr
The moon program wasn’t solving a “real problem” in any sense, but the offshoot R&D of achieving this practically meaningless feat solved a lot of “real problems” by advancing a lot of technologies.

What makes you think that an AGI program won’t have the same kind of offshoot-technology impact on the world? It’s not like it would go “nothing ... wait for it... AGI”; there’d be a lot of tools, processes, and paradigms developed along the way, and also lesser AIs developed that might solve real problems for e.g. government resource allocation or military strategy, which would have outsized impacts on vulnerable countries and populations.

blockchainman
Don’t you think AI has foundational flaws according to Gödel's incompleteness theorems?

Also not trying to rain on your parade! Congratulations! Just trying to have a constructive conversation.

https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_the...

edanm
Why would it? How do Godel's incompleteness theorems factor in here?

It's a common mistake to think the theorems say more than they really do, or apply in more cases than they really do. AI is simply based on the idea that we can reach at least the level of human intelligence, in artificial software/hardware, which, considering that we ourselves are pure hardware/software and nothing magical, should absolutely be right.

technocratius
> considering that we ourselves are pure hardware/software

I also suspect this, but let's be honest that we as a species are not close to understanding consciousness in its entirety yet, so I'd refrain from making such absolute statements.

edanm
You're putting too high a bar on what we need to understand. We don't understand physics in its entirety either; we can still say lots of things with confidence.

That we are our physical body is pretty certain. You press a certain part of the brain, and predictably our personality changes. Of course we can't be certain of a lot of things, but I am much more certain of this than I am of other things, and Gödel's theorems don't apply.

blockchainman
I was simply alluding to this :

"Roger Penrose and J.R. Lucas argue that human consciousness transcends Turing machines because human minds, through introspection, can recognize their own inconsistencies, which under Gödel’s theorem is impossible for Turing machines. They argue that this makes it impossible for Turing machines to reproduce traits of human minds, such as mathematical insight."

derefr
If it did, how would human brains exist?
logicchains
Any sense in which Godel's incompleteness theorem implied that Artificial General Intelligence was impossible would also imply that General Intelligence is impossible; the human brain isn't immune to the laws of logic. The human brain is just a very complex, possibly quantum, computer. Short of believing in some kind of supernatural human soul, there's no reason to expect a sufficiently complex computer couldn't match the human brain (although it's an open question whether we could build a sufficiently complex computer).
sgt101
I think that there are other mechanisms apart from computation; the question is: are they operant in our universe? The implication of the answer being "no" is that we are automatons; free will does not exist (it isn't even an illusion; you are as much a puppet thinking about it as you are in trying to change your fate). Well, moving on from there, we can dismantle all of the morality and humanity of our lives and not change one jot, because we have no choice. I don't believe that anyone has observed anything that isn't reducible to computation, but then again, perhaps our cognitive capabilities simply can't do that.
api
We have yet to duplicate anything even near human intelligence or introspective abilities with computation. We therefore have no existence proof that the human mind is purely computational in nature. I think we can safely say that computation is necessary to produce a mind, but we cannot yet say for certain that it is sufficient.

Mind may require something else that we don't yet understand. (Not necessarily claiming it would have to be supernatural, just not yet understood. Perhaps quantum computation or some other kind of quantum effect?)

logicchains
>The implication of the answer being "no" is that we are automatons; free will does not exist (it isn't even an illusion; you are as much a puppet thinking about it as you are in trying to change your fate).

This is implied by logic anyway. Why do we make decision X at time T? Because of who we are at time T. Why are we that person at time T? Because of decisions made at time T-1. Why did we make those decisions at T-1? Because of who we were then, which was the result of decisions made at T-2. If we continue this process, we reach T-only-a-baby, when we were incapable of conscious decision making. So causally all our actions can be traced back to something we can't control. Unless some of our decisions were entirely the result of chance, but in that case we still don't have free will; we just have actions that are random instead of predetermined.

sgt101
I think that there are a lot of assumptions in that chain. When you or I ask why we made a decision X, we can formulate answers but, for my part, I don't have access to all of the components of my thinking; I cannot articulate what I feel is really going on. I think that randomness in the universe is very hard to account for too. I was impressed by an essay that Scott Aaronson wrote about this: https://www.scottaaronson.com/papers/giqtm3.pdf but I have read it several times and I am afraid I don't really understand it.
pron
> It comes down to whether you believe AGI is achievable.

No, it does not. I very much believe AI (or AGI, as you call it) is achievable, but may I remind you that some years after the invention of neural networks, Norbert Wiener, one of the greatest minds of his generation, said that the secret of intelligence would be unlocked within five years, and Alan Turing -- a component of your very own post-pre-AGI era's AGI -- another great believer in AI, scoffed and said that it would take at least five decades. That was seven decades ago, and we are not even close to achieving insect-level intelligence. Maybe we'll achieve AI in ten years and maybe in one hundred, but you don't know which of those is more likely, and you certainly don't know whether any of our pre-AGI technology even gets us on the right path to achieving AGI. There have been other paths towards AI explored in the past that have largely been abandoned.

OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

This does not mean that what OpenAI does is not valuable and possibly useful, but it does make calling it "pre-AGI" pretentious to the level of delusion. Now I know there were (maybe still are) some AI cults around SV (I think a famous one even called themselves "The Rationalists" or something), but what makes for a nerdy, fanciful discussion in some dark but quirky corner of the internet looks jarring in a press release.

> If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem — and it's also very important to put in place guardrails like https://openai.com/blog/openai-lp/ and https://openai.com/charter/

I can't tell if you're serious, but assuming you are, the problem is that there are many other things that, if you thought they could be achievable any time soon, would make it hard to work on any other problem, as well as make it important to put guardrails in place. The difference is that no one actually knows how to put guardrails on AGI. We are doing a pretty bad job putting guardrails on the statistical clustering algorithms that some call (pre-AGI?) AI and that we already use.

bogidon
Pron, I fully support your take here. Most AGI campaigners here clearly think that we must have already figured out a lot about how consciousness works. But is there any evidence to back that up? No, because we _haven't_ created consciousness. The most we've done is manipulate _existing_ consciousness. Sure, we can point to similarities between deep learning and the brain, and these avenues are interesting and, I think, worthwhile to explore. But false starts happen often in science (e.g. bloodletting / astrology / pick your own) and seem to occur at intersections where concrete evidence of results is inaccessible. No one can say with certainty we aren't in the middle of one now.

Like pron, I don't mean to dismiss the work any AI researcher is doing, but the industry has growing money and power and I just think people should be careful with statements like the one pointed out already and so often encountered: "if you believe AGI might be achievable any time soon, it becomes hard to work on any other problem."

jorjordandan
Consciousness may not have anything to do with AGI. Besides, we haven't as a species defined consciousness in a consistent and coherent way. It may be an illusion or a word game. AGI may end up being more like evolution, a non-conscious self optimizing process. Everyone is talking about AGI but we can't even define what we mean by any of these terms, so to put limits on how near or far away major discoveries might be is pointless.
bogidon
True. I used “consciousness” haphazardly.

Not super related, but AGI enthusiasts sometimes remind me of this: https://youtu.be/bS5P_LAqiVg?t=9m50s

antonvs
> It may be an illusion or a word game.

If consciousness is an illusion, what is experiencing the illusion? What makes an experience of consciousness an illusion rather than actual?

(Don't quote Dennett in response, I'm curious to see a straightforward reply to this that makes sense.)

tim333
>we are not even close to achieving insect-level intelligence

Then again, I don't know many insects that drive cars, beat the champions at chess and Go, or similar.

ripdog
The difference is that insects can perform a wide variety of tasks needed for their survival, but all "AI" created by humans so far can only perform a single task each.
StreamBright
I don't know of ___a single___ piece of software that can do these either. This is exactly one challenge for AGI: to know which pattern recognition part to pull up in which situation. An insect can make decisions about when to fly, crawl or procreate. I do not think that we have something similar in software just yet.
sgt101
Also Atari! Don't forget the mighty achievement of somewhat mastering Atari, the key indicator of intelligence in our time.
ttlei
I'd say insect intelligence is good enough to at least drive a car. Bees, for example, are pretty damn amazing at flying & navigation. I mean, flying and avoiding obstacles through miles of forest to find food is no easy task.
omarhaneef
Not taking a side on the over/under for AGI, but perhaps you are also acquainted with this little gem:

https://pdfs.semanticscholar.org/38e6/1d9a65aa483ad0fb4a219f...

Shannon, Minsky, and McCarthy!

wyldfire
It's an interesting dream team. But AFAICT this is only a proposal. Did the proposed series of studies take place? If so what was the outcome?
omarhaneef
I wonder if they were the dream team back then or just promising young researchers.

I think the interesting takeaway is that they (seem to have) expected to solve the major problems of AI (language, common sense, etc.) over a summer with a small stipend.

why_only_15
I think the result was this: https://en.wikipedia.org/wiki/Dartmouth_workshop
chillacy
So you’re saying... it comes down to whether you believe AGI is achievable within our lifetime? Or whether it's even worth contributing to at all? I think the parent has made their position pretty clear through their employment choices; that’s a level of skin in the game that naysayers don’t really have.
MegaButts
There are plenty of people who worked on AI (as graduate students, as ML researchers at hot startups for self-driving cars, as pure researchers or supporting engineers at Google or Facebook, and many other places), and then left because once they saw how limited the research was they lost hope it was going to happen before they died.

Also while I fully understand and appreciate the necessity of OpenAI abandoning the 'open' part, it says a lot about who is going to benefit from this technology when you have investors who want to make money. It's just ironically poetic in this instance.

alluro2
I honestly don't see any possibility of truly "wide-spread benefits for humanity" if AGI is achieved anytime soon. The current state of humanity, when it comes to how we treat each other and anything akin to species-level awareness and collaboration, is only barely better in recent history than in the dark ages. If a group of people gets access to an AGI, I think it will very quickly result in a slightly wider group than that having their lives prolonged, no disease, no lack of resources and practically infinite wealth, with everyone else eventually being either gotten rid of, or allowed to live in far-away slums and left to die off.
craigsmansion
> I think the parent has made their position pretty clear through their employment choices,

Their employer has just received a 1 billion dollar cash investment to futz around with computers. I don't think the employment choice is some sort of personal sacrifice here.

The level of skin here is a cushy guaranteed job for years to come until the next AI winter hits, likely set into motion by these very claims of "AGI" being near or feasible.

"AGI" is good marketing for getting money to research real AI, ostensibly on the path to "AGI", but if one pushes it too far, one might end up retarding the whole field as the hype winds down (again).

alluro2
Agreed. By the way, does anyone want to try my new VR tech? It will change everything forever!
chillacy
That's true, but at the same time I'm not trying to be in AI because it's such a specialist role, which may or may not be a fad that the employment market overfills with new grads with a new degree in "ML", bricking salaries. But I think the internet will be here for a while.
pron
Perhaps I haven't been clear. I have no issue with the research OpenAI is performing, nor with anyone's beliefs in AI's imminence or their personal role in bringing it about. However, no one knows whether what they're doing is even on the right path towards AI, and certainly not when it will be achieved, plus the topic has been subject to overoptimism for decades now, so I do take issue with publicly calling what you do "working on AGI" or "pre-AGI" even though you have no idea whether that is what you're doing. Hopes and aspirations are good, but at this stage they fall far short of the level required for such public proclamations. My issue is with the language, not with the work.
Jach
Do you hate all marketing or just around AI in particular? Would you bat an eye at MS investing $1B into a project + ads about say a new GPU architecture promising how "games will never be the same" because the new hardware (or even Cloud Integration) lets developers efficiently try and satisfy the rendering equation with ray tracing?

FWIW I thought you were clear, but there are only so many middlebrow dismissals one can make towards AI or AGI efforts and I think I've seen them all, plus the low-value threads they generate. (I've made some too, and suspect we might get brain emulations before AGI, but I try to avoid the impulse, and in any case it doesn't stop me from hoping (and contributing in minor ways) for the research on the article's load-bearing word "beneficial" to precede any realistic efforts at building the actual thing. At least the OpenAI guys aren't entirely ignorant of the importance of the "beneficial" problem.)

pron
I don't hate this copy at all; I absolutely love it! I think it is a beautiful specimen of the early-21st c. Silicon Valley ethos, and it made me laugh. Pre-AGI is my new meme, and that means something coming from a pre-Nobel Prize laureate.

What I'm interested in is how many dismissals of AI (most of which end up justified) the field can take before considering toning down the prose a bit, especially considering that the dismissals are a result of setting unrealistic expectations in the first place.

breck
> the topic has been subject to overoptimism for decades now,

But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

> no one knows whether what they're doing is even on the right path towards AI

This is completely wrong. That would be like saying "no one knows if working on a wing is on the right path to flight".

Look at the way deep learning works. Look at the way the brain works. They share immense similarities. Some people say "neural nets" aren't like the brain, but that's not true--they are just trying to not over-exaggerate the differences which laymen commonly do. They are very similar.

pron
> But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

And so has every other big idea that didn't become reality, and that was the majority. Again, I have no problem with AI research whatsoever, but the prose was still eyebrow-raising considering the actual state of affairs.

> They are very similar.

They are not. The main role of NNs is learning, which they still mostly do pretty much with backpropagation gradient-descent (+ heuristics). The brain does not learn with backpropagation.
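
For concreteness, here is a tiny numpy sketch of the contrast being drawn: a backpropagation-style weight update driven by a global error signal versus a purely local Hebbian-style update. This is illustrative only; real biological learning rules such as STDP are far more involved, and all names here are invented for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3))        # 8 samples, 3 features
    y = rng.normal(size=(8, 1))        # regression targets
    W = rng.normal(size=(3, 1)) * 0.1  # weights of a single linear layer
    lr = 0.01

    # Backpropagation-style update: move W along the gradient of a global
    # loss (here mean squared error), computed from the prediction error.
    pred = x @ W
    grad = x.T @ (pred - y) / len(x)
    W_backprop = W - lr * grad

    # Hebbian-style update: purely local ("cells that fire together wire
    # together"); no global error signal is propagated backwards.
    pre = x                            # presynaptic activity
    post = x @ W                       # postsynaptic activity
    W_hebbian = W + lr * (pre.T @ post) / len(x)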

dr_dshiv
Do you have a reference on the brain not learning with backpropagation? I'd like to learn more.
throwawaywego
https://arxiv.org/abs/1502.04156

> Towards Biologically Plausible Deep Learning

> Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology. We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised and reinforcement learning. The starting point is that the basic learning rule believed to govern synaptic weight updates (Spike-Timing-Dependent Plasticity) arises out of a simple update rule that makes a lot of sense from a machine learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised, unsupervised, or reward-driven). The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics. Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating activations forward and backward, with pairs of layers learning to form a denoising auto-encoder. Finally, we extend the theory about the probabilistic interpretation of auto-encoders to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and we validate all these ideas on generative learning tasks.

breck
> And so has every other big idea that didn't become reality, and that was the majority.

This is a really good point. Would be a fascinating read if someone were to collect all those examples and explore that a bit.

jacques_chester
The most famous is the Philosopher's Stone: a substance that can convert base metals into gold.

But in itself, that was not the point. It would also transform the owner or user -- it was a hermetic symbol, a mechanical means to "pierce the veil" and to see the deep mystical and magical truths, the Real Reality. It was immanent, a thing in the world, that enabled the transcendent, to go beyond, above, outside of the world. Its discovery would have been the single most important moment in the history of the world, the moment in which humans had a reliable road to divinity.

Hmm. Sounds familiar, doesn't it?

But out of alchemy came modern chemistry, and also some parts of the scientific method. After all, as some smart people worked out, you could systematically try all the permutations of materials that your reading had suggested as possibilities. That meant measuring, weighing, mixing properly, keeping detailed notes. Fundamental lab work is the unglamorous slab of concrete beneath the shining houses of the physical sciences. There were waves of hysteria and hype, but after each, something useful would be left behind, minus the sheen of unlimited dreams.

Hmm. Sounds familiar, doesn't it?

These days it is possible for a device to transmute base metals into gold. But the operators have not, so far as I can deduce, ascended to any higher planes of existence. They have eschewed the ethereal and remained reliably corporeal.

breck
> Hmm. Sounds familiar, doesn't it?

I'm not sure what reference you are making. Red pill/blue pill? I wasn't aware of the symbolism in the Philosopher's Stone.

> But the operators have not, so far as I can deduce, ascended to any higher planes of existence.

I guess I'm unfamiliar with this non-literal aspect to the philosopher's stone.

I'm missing the allusions you are making. To ascend to higher planes of existence, no need for AGI, some acid will do.

JamesBarney
I think your issue with the language is not shared by most people. In research we rarely know beforehand what research is on the right or wrong path. But we are comfortable with someone saying they are researching something even if they don't know beforehand whether the research will be useful or a wild goose chase. For example, most people's first thought on hearing "I'm researching ways to treat Alzheimer's" isn't "Only if it passes phase 3 trials!".
sgt101
Although, perhaps you would agree that someone saying "my work on Alzheimer's might help your friend" would be behaving in a cruel and unprofessional way unless the treatment was indeed in human trials?
JamesBarney
I read the article more as "maybe my work will one day be able to cure Alzheimer's and help people like your friend".

What in the article gave you the impression they had made a large breakthrough or were close to an AGI?

pron
Yeah, in this release they're not saying they're doing research towards AI, or even that they're researching AI. They're saying that they're "building artificial general intelligence" and developing a platform that "will scale to AGI." (emphasis mine) They're also calling what they're actually building "pre-AGI."
JamesBarney
> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.

This sentence might by itself imply they are farther along than they are, but in the context of the whole article I never got the impression they were close to actually building an AGI.

> The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

This read pretty straightforwardly to me. Pre-AGI seems like a shorthand for useful technologies like GPT-2.

Reading the article I never got the impression they'd solved AGI, or were even close. The context of the article is a partnership announcement not a breakthrough. I could see how a few people who are very unsophisticated might get a little confused as to how far along they are. But I assumed they were writing for people who had heard of OpenAI which pretty much eliminates anyone this unsophisticated.

pron
They don't know what connection, if any, what they're doing has with AGI. For all we know right now, some botanist researching the reproductive system of ferns is as likely to bring about a breakthrough in AI as their research is. To me this feels like peak-Silicon Valley, the moment they've completely lost touch with reality.

People may also not be confused if Ben and Jerry's started an ice cream ad with mentions of AGI and the change of human trajectory and Marie Curie, and named it Pre-AGI Rum Raisin, but that doesn't mean the text isn't a beautiful and amusing example of contemporary Silicon Valley self-importance and delusion, and reads like a parody that makes the characters in HBO's Silicon Valley sound grounded and humble. Especially the "pre-AGI" bit, which I'm now stealing and will be using at every opportunity. Maybe it's just me, but I think it is quite hilarious when a company whose actual connection with AGI is that, like many others, they dream about it and wish they could one day invent it, calls its work "pre-AGI." Ironic, considering they're writing this pre-apocalypse.

leesec
>OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

Yes they are? They are making the breakthroughs/incremental advances that are required to get there, and building components along the way. It would be like saying, well, Henry Ford isn't building a vehicle, he's just building a wheel, and a tire, and an engine, etc...

erikpukinskis
To my knowledge Henry Ford didn’t start off selling wagon wheels.

Also when he started there were working automobiles already.

The fact that no one knows how to make an AGI, doesn’t make it a bad goal. But OP is right, if you think you know the timeframe it will arrive in, you have no idea what kind of problem you’re dealing with.

pron
> They are making the breakthroughs/incremental advances that are required to get there

They don't know that. We have no idea what's required to achieve AI. Now, I don't know how long before Ford actually built cars he started saying he was building cars, but if Wikipedia is to be believed, it could not have been more than three or four years. Also, when he started building cars, he pretty much knew what was required to build them. This is not the case for AI.

breck
> We have no idea what's required to achieve AI

Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

If you can process around 100 petabytes per second (1 Google Index of data per second), you could fully simulate a human being, including their brain. We're still a little way off from that, but it's pretty clear we'll get there (barring the usual disclaimers about an asteroid, alien invasion, etc.).

Source: I work in medical research, doing deep learning, and do research on programming languages and deep learning for program synthesis.

pron
> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain..

So to build AI all that remains is to understand how it could work.

> but it's pretty clear what we need to do

It isn't (unless by "clear" you mean as clear as in your statement above). I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

> but it's pretty clear we'll get there.

First of all, I don't doubt we'll get there eventually. Second, I'm not sure simulating a human entirely falls under the category of "artificial". After all, to be useful such a mechanism would need to outperform humans in some way, and we don't even know whether that's possible even in principle using the same mechanism as the brain's.

breck
> I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

I read those papers too. And I write code and train models day in and day out. I could get very specific on what needs to be done, but that's what we do at our job. If you're curious, I'd say join the field.

I agree with you in that I don't think for a second anyone can make an accurate prediction of when we will reach AGI, but I have no doubt that it will be relatively soon, and that OpenAI will likely be one of the leaders, if not the leader, in creating it.

p1esk
I’ve been doing research in DL field for the last 6 years (just presented my last paper at IJCNN last week), and I can say with confidence we have no clue how to get to AGI. We don’t even know how DL works on the fundamental level. More importantly, we don’t know how the brain works. So I agree with pron that your “relatively soon” is just as likely to be 10 as 100 years from now.
breck
I could explain it to you in an afternoon. But I’m not going to do it online, because then you have a thousand people calling you “delusional”, because you simply are stating that exponential processes are going to continue. For some reason, many people who think themselves rational and scientific believe that things that have been growing exponentially are suddenly going to go linear. To me, that is delusional.
p1esk
Explain what?
breck
How to get to AGI.
p1esk
If you know how to get there why don’t you build it?
breck
1) Indeed we are doing a few of the things on the checklist to build AGI.

2) Our focus is on helping improve medical and clinical science and cancer tooling first.

3) If we needed AGI to cure cancer, perhaps we'd be working directly on AGI. If anyone thinks this is the case, please let me know, as at the moment I don't think it is.

p1esk
You don’t think AGI would dramatically speed up cancer research (or any other research)?
breck
Of course I do, but my back of the envelope guess is there's a 30% shot we can cure cancer in 15 years without AGI, and a 1% shot we can reach AGI in 15 years. I think AGI is cool but I'm much more concerned about helping people with cancer.
piano
> because you simply are stating that exponential processes are going to continue.

Exponential process continuing doesn't imply "we're going to get there soon" in any way, shape or form. The desired goal can still be arbitrarily far.

tedivm
These are all assumptions, and there is a lot of disagreement in the academic community around them.

Humans don't seem to need anywhere near the same level of data or training that our current models need. That alone is a sign that deep learning may not be enough. The focus on deep learning research has a lot of useful benefits, so I'm not discounting that, but there are a decent number of smart people who don't believe it's going to lead us to AGI.

Source: I also work in medical research, and am doing deep learning- and I've worked for a company that's focused on AGI, and I've worked with several of the OpenAI researchers.

breck
> Humans don't seem to need anywhere near the same level of data or training that our current models need.

I find this to be a common misunderstanding. If I show you one Stirch Wrench, and you've never seen one before, you learn instantly and perhaps for the rest of your life you'll know what a Stirch Wrench is. The problem is I didn't show you 1 example. You saw perhaps millions of examples (your conscious process filters those out, but in reality think of the slight shaking of your head, the constant pulsing of the light sources around you, etc., as augmenting that 1 image with many examples). I think humans are indeed training on millions of examples, it's just that we don't notice it.

> That alone is a sign that deep learning may not be enough.

I 100% agree with that. It's going to take improvements in lots of areas, many unexpected, but I think the deep learning approach is the "wings" that will be near the core.
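
A minimal numpy sketch of the augmentation idea described above: one image becomes many slightly perturbed views through small shifts, brightness changes and noise, loosely mimicking head motion and lighting flicker. The function and parameter names are invented for the example and are not any particular library's API:

    import numpy as np

    def jitter_views(image, n_views=32, max_shift=2, noise_std=0.01, rng=None):
        # Generate many slightly perturbed copies of a single image.
        rng = np.random.default_rng() if rng is None else rng
        views = []
        for _ in range(n_views):
            dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
            shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))  # small translation
            brightness = 1.0 + rng.uniform(-0.1, 0.1)              # lighting flicker
            noisy = shifted * brightness + rng.normal(0.0, noise_std, size=image.shape)
            views.append(np.clip(noisy, 0.0, 1.0))
        return np.stack(views)

    # One "example" becomes dozens of training examples:
    wrench_photo = np.random.rand(64, 64, 3)  # stand-in for a real photo
    batch = jitter_views(wrench_photo)
    print(batch.shape)                        # (32, 64, 64, 3)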

tedivm
I think what you're terming a misunderstanding is actually fairly well known, but it doesn't account for the magnitude of the situation.

Here's a great article about a paper showing that humans' prior knowledge does help with learning new tasks: https://www.technologyreview.com/s/610434/why-humans-learn-f...

However, that doesn't account for how quickly toddlers learn a variety of things with a small amount of information. Even more important, you can also just look at things like AlphaGo: they train on more examples than could be accumulated in a hundred human lifetimes.

For these reasons I don't believe "more data" and "more training" is the answer. We're going to need to do a lot more work figuring out how humans manage recall, how we link together all the data, and I would be surprised if this didn't involve finding out that our brain processes things in ways that are far different than our current deep neural nets. I don't believe incrementalism is going to get us to AGI.

breck
I don’t believe incrementalism will get us there either. We need many more 10x+ advances. But I think it’s relatively clear where those advances need to be. I think simply by making 10x advances in maybe 100 or 1k domains we’ll get there. Neuralink, for example, just announced many 10x+ advances, such as in the number of electrodes you can put in the brain. Our lab is working on a number of things that will also be 10x advances in various sub-domains.

Lots of advances in many fields will lead to something greater than the sum of their parts.

Edit: p.s. I like your comment about toddlers. As a first time father of a 6 month old, its been very intellectually interesting watching her learn, in addition to just being the greatest bundle of joy ever :)

pron
I think that lacking a hundred or a thousand 10x advances (you may be more pessimistic than I am) does not merit calling your work "pre-AGI".
thom
I’m always puzzled at this idea that humans, at whatever age, are learning things with a small amount of information. The full sensory bandwidth of a baby from pregnancy to toddlerhood seems huge to me. I suspect that helps, as does the millions of years it took to create the hardware it all runs on.
j88439h84
> doing deep learning, and do research on programming languages and deep learning for program synthesis.

That sounds fascinating. Could you link to some relevant stuff about languages and deep learning for program synthesis? I'd love to read more about this.

breck
Sure! Shoot me an email to remind me
MegaButts
> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

This is absurd. How much data? How much training? What kind of training? How much better do the algorithms need to be? How do you define better? Also we literally don't even know how our brains work, so we don't know how "actual" intelligence works, but you're saying we have a clear road map for simulating it?

Your entire argument distills down to "we just need to do the same things, but better." And even that statement might be wrong! What if standard silicon is fundamentally unsuited for AGI, and we need to overhaul our computing platforms to use more analog electronics like memristors? What if everything we think we know about AI algorithms ends up being a dead end and we've already reached the asymptote?

I'm not saying AI research is bad. I'm saying it is absolutely unknown by ANYONE what it will take to achieve AI. That's why it's pure research instead of engineering.

arugulum
Allow me to repost Altman's wager:

- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D)

- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns

- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value

- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI

Therefore, one must invest (or in this case, "work on the most important problem of our time").

(And yes, this is tongue-in-cheek.)
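If you want to see the trick laid bare, here's the same wager as a toy expected-value calculation in Python (the probability and stake are arbitrary, made-up numbers):

    # Pascal's-wager arithmetic with the payoffs stated above.
    INF = float("inf")

    def expected_value(p_agi, invest, cost=1.0):
        """p_agi: assumed probability OpenAI achieves AGI; cost: a finite stake."""
        if invest:
            return p_agi * INF + (1 - p_agi) * (-cost)   # +inf for any p_agi > 0
        return p_agi * (-INF) + (1 - p_agi) * cost       # -inf for any p_agi > 0

    print(expected_value(0.001, invest=True))    # inf
    print(expected_value(0.001, invest=False))   # -inf
    # Infinite payoffs swamp any nonzero probability, which is the whole trick.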

mikorym
This does not presuppose any kind of precise definition of infinity.
stcredzero
I think infinity in the gp comment could well be defined as, "the new AGI regime will or won't obliterate me." The gp comment is just Pascal's Wager, with AGI taking the part of God, and "infinite returns" taking the part of an eternity in Heaven or Hell.
stcredzero
> That was seven decades ago, and we are not even close to achieving insect-level intelligence.

[citation needed]

I guess this depends on what "close" is. For something as blue sky as AGI, let me propose the following definition of "close:" X is "close" if there's over a 50% chance of it being achievable in the next 10 years if someone gave $10 billion 2019 US dollars to do it.

I think this is a fair metric for "close" for a blue-sky goal which has the potential to completely change human history and society. It's comparable to landing someone on the moon, for instance. Now, let's pick the insect with the simplest behavior. Fleas and ticks are pretty stupid, as far as insects go. I think we're "close" to simulating that level of behavior. Of course, that's straw-manning, not steel-manning. If we pick the smartest insects, like jumping spiders and Tarantula Hawks, we're arguably not "close" by the above metric. Simulating a more capable insect brain of a million neurons is not an insignificant cost, and training one through simulation would multiply the computing requirements many times that. However, there are evidently systems which are capable of simulating 100 times that number of neurons:

https://www.scientificamerican.com/article/a-new-supercomput...

So I would say, we're arguably not "close." However, we're not that far off from "close."
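To put a rough number on "not an insignificant cost," here's a back-of-the-envelope sketch in Python (every parameter is an assumption, not a measurement):

    # Assumed, order-of-magnitude figures for a ~1M-neuron arthropod brain.
    neurons = 1e6
    synapses_per_neuron = 1e3       # assumed average connectivity
    mean_firing_rate_hz = 10        # assumed average spike rate
    flops_per_synaptic_event = 10   # assumed cost of one synaptic update

    realtime_flops = neurons * synapses_per_neuron * mean_firing_rate_hz * flops_per_synaptic_event
    print(f"~{realtime_flops:.0e} FLOP/s for one real-time copy")   # ~1e11 FLOP/s

    # Training via simulation multiplies this by population size and sped-up lifetimes.
    population, speedup = 1e4, 100
    print(f"~{realtime_flops * population * speedup:.0e} FLOP/s for a training farm")  # ~1e17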

pron
For a comment this precise I'm surprised you've mistaken spiders for insects :) Anyway, I think that "if you gave us $10B then in ten years we have even odds of producing something as smart as a jumping spider" does make for less inspirational copy than "[we're] building artificial general intelligence with widely distributed economic benefits."
stcredzero
> For a comment this precise I'm surprised you've mistaken spiders for insects :)

True. They're fellow arthropods, and they have similar levels of nervous-system complexity. (BTW, are you by any chance confusing Tarantula Hawks with spiders?)

> does make for less inspirational copy

Levels of inspiration in the copy, and generalizing across the phylum Arthropoda, aside: are you effectively conceding that we're close to insect-level AGI?

pron
> are you effectively conceding that we're close to AGI at insect levels?

By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects. I don't know if we have a 50% of getting there in a decade, but I certainly wouldn't conclusively say that "we are not even close" to that. I mostly regret having chosen bikes rather than electric scooters for my original comment. I think that sounds funnier.

stcredzero
By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects.

Some insects are pretty stupid! Fleas and ticks have a good and highly adapted repertoire of behaviors, but for the most part, as far as we know, most individual behaviors are fairly simple.

I mostly regret having chosen bikes rather than electric scooters for my original comment.

Here's where your analogy falls down. We don't even have working examples of a complete warp drive, or anything like it. On the other hand, we don't have any commercial airliner sized beamed-power electric jets, but we have smaller conceptual models of the involved devices which demonstrate the principles. This is why I'd say we're "close to close" to insect level intelligence. 10 years and $10B would get us to the flea level. I think that's "close" like airliner sized beamed-power electric jets is close.

pron
I think my point was lost because it's my pre-Primetime Emmy material.
stcredzero
I think your point was lost because there's some scaling problems in the mental models used to formulate it.
stronglikedan
> AI (or AGI, as you call it)

AI and AGI may have meant the same thing a long time ago, but the term "AI" has been used almost ubiquitously to describe things that are not AGI for so long now that I don't think the terms are interchangeable any longer.

runT1ME
>we are not even close to achieving insect-level intelligence.

Is this true? Is there an insect Turing test?

p1esk
Compare the most advanced self driving car to the simplest insect and you should immediately realize how far we are from insect level AI.
ALittleLight
I don't see how it could be. What can insect brains do that we couldn't get AI to do?
eeeficus
In terms of intelligence, there isn't anything. What prevents us from actually building an uber-insect is miniaturization, self-sustaining energy production of some kind, and reproduction in an artificially built system. I guess it would be possible to demonstrate insect-level intelligence by actually replacing an insect brain with an artificial one.
nradov
Your guess would be wrong. Our actual level of AGI development is maybe more on the level of a flatworm. Complex, social insects like bees are still far beyond our ability to simulate.
jhrmnn
Controllably fly in strong wind using very primitive sensors.
lm28469
What can a modern F1 tire do that we couldn't do with a 500 BC wooden wheel?
justinhj
Pretty much everything that insects do is beyond our current AI and engineering tech. Ignoring the "engineering" feats that biological beings perform, such as replication, respiration, and turning other plants and creatures into their own energy source, their behaviour is very sophisticated. Imagine programming a drone to perform the work of a foraging bee, using a computer that fits into a brain the size of a grain of rice. It can manage advanced flight manoeuvres, navigation to and from the hive, finding pollen, harvesting it, dodging predators, and no doubt a dozen skills I can't even imagine.
nradov
Bees also have sophisticated communication skills to tell other bees where to find food.
ethbro
Aside from the miniaturization, I'd be surprised if we couldn't make an exact simulacrum of a honey bee in software today, to the limits of our understanding of honey bees.

As with AI... a system can be simulated to a given level of fidelity without necessarily simulating the entire original underlying system.

mac01021
This doesn't necessarily say much about the state of our AI expertise, but our understanding of honey bees is an insufficient basis for the construction of anything that would survive or be an effective member of a hive. Just a week or two ago on HN there was an article about how scientists have only just now acquired a reasonably complete understanding of the waggle-dance language bees use to communicate with one another. (https://www.youtube.com/watch?v=-7ijI-g4jHg)

Perhaps more relevantly, an automaton that could observe such a waggle dance using computer vision and then navigate to the food source described by the waggle seems to me to strain the bounds of our current capabilities, or maybe even to surpass them by an order of magnitude.

rococode
IMO it's not really meaningful to think of intelligence in levels like that. If we do, we could say AI already surpasses human-level intelligence in a huge variety of tasks. That line of reasoning is an easy way to (falsely) convince yourself we're close to AGI.

It's more meaningful to consider whether we are close to achieving intelligence in an insect-like (or human-like) way. In that respect, we're still very, very far off. Current AI clearly "thinks" in a purely statistical way, typically leveraging a large volume of data, which is very different from the way any organic intelligence operates.

jorgemf
How do you know we are not close to AGI? Because we don't know how to create AGI, we cannot know whether we are close or not. We can say artificial neural networks are not the way to go because they are not like real neurons, but we know so little about neurons that it's possible artificial neural networks are actually the way to achieve intelligence. The topic is so complex, and we know so little, that any strong claim is very likely to be wrong.
nbeleski
I believe your opinion aligns pretty well with a growing number of researchers who see in investments like this exactly the warp-drive scenario you described.

In your opinion what should computer scientists be focusing on in order to achieve more advanced AI systems? I'm thinking things such as reasoning, causality, embodied cognition, goal creation, etc.

And this is without even delving into the ethical aspects of (some instances of) AI research.

why_only_15
I can't find any results for either Norbert Wiener or Alan Turing saying those things. Do you have a source?
pron
That's a good question. About a year and a half ago I compiled a large anthology of the history of logic and computation, and I remember coming across that during my research. What I've been able to find now is the following section from Hodges's Turing biography (pp. 507-8 in the edition I have):

> Wiener regarded Alan as a cybernetician, and indeed ‘cybernetics’ came close to giving a name to the range of concerns that had long gripped him, which the war had given him an opportunity to develop, and which did not fit into any existing academic category. In spring 1947... Wiener had been able to ‘talk over the fundamental ideas of cybernetics with Mr Turing,’ as he explained in the introduction to his book... Wiener had an empire-building tendency which rendered almost every department of human endeavour into a branch of cybernetics... Wiener delivered with awesome solemnity some pretty transient suggestions, to the general effect that solutions to fundamental problems in psychology lay just around the corner, rather than putting them at least fifty years in the future. Thus in Cybernetics it was seriously suggested that McCulloch and Pitts had solved the problem of how the brain performed visual pattern recognition. The cybernetic movement was rather liable to such over-optimistic stabs in the dark.

So if this passage is indeed the source of my recollection, then while that recollection was loose and perhaps exaggerated, I think it's pretty true to the spirit...

cm2012
Insects are super predictable. They almost always act identically in response to the same stimuli, which is why cockroaches will always eat poisoned bait if it's within a foot of them, no matter the circumstance, while rats are wily.
modzu
Guardrails in terms of policy, if not technical details, are still valuable.

The thing is, there are actually lots of reasons to think AGI cannot be constrained in this way. OpenAI researchers know this.

So that means the promise and the charter are irrelevant: OpenAI will never release a general AI.

But in the meantime, deep learning is still reaping rewards. Every day it's being applied to something new and solving real, tangible problems. There's money to be made here, and that is what OpenAI seems to really be doing. Being philosophical and "on top" of the futuristic moral dilemmas, whatever, is just marketing? And in the unlikely event that an AGI is created that can be tamed, great for OpenAI! If an AGI is created that cannot be tamed, what then? If it's really worth a trillion dollars, is it really just buried, or will the charter simply be rewritten?

You know, this reminds me a lot of all the great physicists working on the atom bomb, thinking it was never going to be used.

testvox
> and we are not even close to achieving insect-level intelligence.

Aren't we close to this? Most insects only have a few million neurons in their central nervous system, so we can model their intelligence in real time at least. Maybe we still lack the tools for training such networks into useful configurations?

dade_
Once we know how a neuron works, ask again. I am not sure how this detail keeps getting glossed over.
wetpaws
You don't need planes to flap their wings in order to fly.
mattnewton
But you do need to understand that they generate lift, and be able to mathematically describe something that generates lift. The Wright brothers wrote to the Smithsonian in 1899 and got back, among other things, workable equations for lift and drag.

I think people think backpropagation is the metaphorical lift equation here, and that we just need a "manufacturing" advancement (i.e., more compute and techniques for using it). We may be close to that (a personal feeling, admittedly with poor evidence) but definitely not there yet (as evidenced by nobody publishing it). We cannot describe what is happening with modern architectures as fully as a lift equation predicts fixed-wing flight, and so it is largely an intuition + trial and error, which is a slow unreliable way to make progress.
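For reference, the lift equation the Wrights relied on fits in a few lines; the numbers below are assumed, roughly glider-scale values, purely for illustration:

    # Classic lift equation: L = C_L * (rho * v**2 / 2) * A
    def lift(c_l, rho, v, area):
        """Lift force in newtons from lift coefficient, air density (kg/m^3),
        airspeed (m/s), and wing area (m^2)."""
        return c_l * 0.5 * rho * v**2 * area

    # Assumed, illustrative numbers roughly in the range of an early glider.
    print(lift(c_l=0.5, rho=1.225, v=15.0, area=47.0))  # ~3.2 kN of lift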

testvox
Yeah, but we didn't need to fully understand how animal wings actually work, we just needed to understand what they do (generate lift). Similarly, I don't understand the focus in this conversation on fully understanding the protein interactions that make neurons work. We just need to understand what neurons do. And I thought what they do is actually pretty simple due to the "all-or-none" principle. https://en.wikipedia.org/wiki/All-or-none_law
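Something like this toy leaky integrate-and-fire unit (made-up leak and threshold values) is the level of abstraction I have in mind, where the output is all-or-none:

    import numpy as np

    def simulate_lif(inputs, leak=0.9, threshold=1.0):
        """Leaky integrate-and-fire: accumulate input, leak a bit each step,
        emit an all-or-none spike (1) whenever the potential crosses threshold."""
        potential, spikes = 0.0, []
        for current in inputs:
            potential = leak * potential + current
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0          # reset after firing
            else:
                spikes.append(0)
        return spikes

    rng = np.random.default_rng(0)
    print(simulate_lif(rng.uniform(0, 0.5, size=20)))  # e.g. [0, 0, 1, 0, ...]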
mattnewton
That's pretty far from "when you do this, you get the generalizable thought required for AGI". The lift equation said "when you do this, this object moves upward against the air", which was the goal of flight. For AGI we have "when you do this, the loss goes down for this task"; we are missing so many pieces between that and the concept of AGI.

People think maybe the missing pieces might be in the other things we don't understand about the brain. It makes sense: the brain does what we want, so the answer must be in there somehow. I agree we don't need to understand it perfectly; it just seems like a good place to keep looking for those missing pieces.

ethbro
This is exactly what the state of the world was immediately prior to the Wright brothers' flight.

There are two possible scenarios:

(1) We are a single, sudden breakthrough from AGI

(2) We are decades, if not centuries, away from being able to build AGI

We'll only know in hindsight...

toomuchtodo
> and so it is largely an intuition + trial and error, which is a slow unreliable way to make progress.

It is only slow and unreliable if you don't make advances in reducing the barrier of entry to try and fail.

wetpaws
My point is that while the brain and neurons are very complex and inherently confusing, there are billions of lifeforms that operate on this architecture and do not display sentience or intelligence.

Secondly, just because neurons are complex on a technical level, it does not mean that they must be complex on a logical level.

For example, in computers, if you look at the CPU structure, at a low level you have quantum effects and tunneling and very insane stuff, but at a logical level you are dealing with very trivial Boolean logic concepts.

I would not be surprised in the slightest if copying and reverse engineering neurons per se turned out not to be a necessary or defining aspect of anything related to AGI.

tiborsaas
We are, and in a sense we know how they work. It's called swarm intelligence, which doesn't even require neural nets to begin with.

OP probably just wanted to downplay the current state of AI.

macleginn
We still cannot convincingly model the behaviour of even the simplest individual organisms whose neural circuitry we know in minute detail.
tiborsaas
What do you mean by "model behavior"? We have AI systems that can learn walking, running and other behavior with just trial and error; I would call that simple behavior.

Now here's a more advanced example that teaches a virtual character how to flex in the gym: https://www.youtube.com/watch?v=kie4wjB1MCw

That's a bit more advanced than simple walking.

Here's an AI deployed to a real robot "crab":

https://www.youtube.com/watch?v=UMSNBLAfC7o

How about virtual characters learning to cooperate?

https://www.youtube.com/watch?v=LmYKfU5O_NA

misterman0
"We have AI systems that can learn walking, running and other [...]"

In one of your examples, all of which are narrow AI, we see a mechanical crab powered by ML that has become specialized in walking with a broken limb, which is not even close to what we need if we aim for AGI. For AGI we don't need agents that mimic simple behavior. In my opinion, _mimicking_ behavior will not lead to AGI.

What _will_ lead to AGI? No one knows.

kowdermeister
macleginn's complaint was that we haven't even modelled simple behavior, and I brought up these narrow AI examples as a counter-argument, since they demonstrate that we can, even for complex behaviors. Domain-specific? Yeah, bummer.

Nowhere have I stated that this is the clear path to AGI, and you are right that we are missing key building blocks. But I feel like there's too much skepticism against this field, while the advancements are not appreciated enough.

I don't know what will lead there either, but I see more and more examples of different networks being combined to achieve more than they are capable of individually.

iguy
> macleginn's complaint was that we haven't even modelled simple behavior

No, the complaint was about modelling the behavior of simple organisms.

Certainly we can model some of their behaviors, many of which are highly stereotyped. But the real fly (say) doesn't only walk/fly/scratch/etc, it also decides when to do all of these things. It has ways to decide what search pattern to fly given confusing scents of food nearby. It has ways to judge the fitness of a potential mate, and ways to try to fool potential mates. Our simulations of these things are, I think, really terrible.

tiborsaas
I linked to modeled organisms. I always feel the HN crowd expects an academic level of precision in discussions, but that kills the kind of regular conversation I'd have at a dinner table with friends; I wish this were a more casual place. Yes, I meant "behavior of simple organisms" :)

Since everything here is loosely defined, I feel it's totally pointless to discuss AI, but it's still an intriguing topic. If you look at those insects, they tend to follow Brownian motion in 3D, get food, and get confused by light; we can get an accurate model of that and more [0].

The key word here is modeling, not replication. Simulations are just that: simulations. Given current examples of what's already possible, someone who wanted to could model a detailed 3D environment with physics, scents, and food for our little AI fly.

[0] https://www.techradar.com/news/ai-fly-by-artificial-intellig...

Is that a terrible attempt?
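As a concrete example of the "model, not replicate" baseline I mean, here's a toy 3D random-walk fly with a weak bias toward a light source (all values made up):

    import numpy as np

    rng = np.random.default_rng(0)
    light = np.array([5.0, 5.0, 5.0])     # assumed position of a light source
    pos = np.zeros(3)

    for step in range(1000):
        noise = rng.normal(scale=0.1, size=3)          # Brownian-ish jitter
        attraction = 0.01 * (light - pos)              # weak bias toward the light
        pos = pos + noise + attraction

    print(pos)  # ends up drifting toward the light, give or take the noise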

misterman0
Terrible? Not at all.

I'm sorry, and I apologize if you feel I was the one who killed the discussion you wanted to have around AI.

I'm one of those dreamers who think AGI is, or at least should be, possible soon, through means we have not yet discovered but will, soon. I base that on absolutely nothing, I suppose, other than the fact that we have lots of "bright/smart/crazy" devs working on it. It's my own personal "believie", as Louis CK would say about things we believe in but cannot, or care not to, prove.

Just like you, I'm looking at organisms much simpler than us as a way forward. Many specialized neural networks do not add up to AGI, is what I think. Is it the organic, human neuron we should model? I don't necessarily think so. Also, robotics + ML is a dead end to me. An amoeba that can evolve into something more complex is perhaps what we should model.

pron
We can model some aspects of insect behavior. The simulation even looks convincing at first glance (just as simple "AI" looks convincing with a superficial examination of a conversation or text-generation). But we have not been able to fully model the behavior of, say, a bee (which may be enough to solve self-driving cars and then some).
iguy
Exactly.

> they tend to follow Brownian motion in 3D

Well, their entire neural system exists to make deviations from Brownian motion. That's the whole point of being an animal and not a plant. And doing it well is very, very subtle.

First steps towards modelling such behavior can be super-interesting science, not a terrible use of time at all. They can capture a lot of truth about how it works. But like self-driving cars, the thing that kills you is usually a weird edge case, not the basic thing.

est31
Yes, if you assume a technical model in which each neuron has only a single scalar output (say, a bfloat16), then we could simulate insect brains right now. But the technical neuron model of a sum of inputs plus a sigmoid activation function is only an approximation.

Neurons communicate with each other with a multitude of neurotransmitters and receptors [1]. As a cell, each neuron is a complex organism of its own that undergoes transcriptomic and metabolic changes. We aren't even close to simulating all protein interactions in a single cell yet, let alone in millions of them.

Of course, you could say that full protein simulation of an entire brain is not necessary if we can build an accurate enough technical model of a single neuron. In fact, even now we have to apply a model of how we believe proteins behave, since "properly" simulating the interactions of two proteins (or one with itself) with lattice QCD approaches is beyond our computational capabilities. For protein interaction we have pretty good models already. But finding a model of all the types of neurons in insect brains is, right now, an open, unsolved challenge.

[1] https://en.wikipedia.org/wiki/Neurotransmitter#List_of_neuro...
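For contrast, the entire technical model under discussion fits in a couple of lines (toy weights and inputs assumed):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        """The standard abstraction: a weighted sum of inputs through a sigmoid."""
        z = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-z))   # single scalar output

    x = np.array([0.2, 0.7, 0.1])          # assumed inputs from upstream neurons
    w = np.array([0.5, -1.2, 0.3])         # assumed synaptic weights
    print(artificial_neuron(x, w, bias=0.1))   # one number, versus a cell's worth of chemistry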

whatshisface
Lattice QCD is used for sub-nuclear simulations; proteins are studied with much more tractable methods based on ordinary quantum mechanics.
est31
Yes, that's my point: you don't need to simulate a protein with that tool because we have good enough models of higher-level structures like atoms. Similarly, we might find models for neurons that allow us to avoid fully emulating all protein interactions. We figured out how atoms work before we figured out how nuclei work, but with neurons it's the opposite: we know, or can figure out, how the parts (proteins) of the machine work, but not how the entire machine works.
wyldfire
> we could simulate insect brains right now

AFAICT this suggests that we have the computational power but wouldn't it also be a significant challenge to create an accurate model for the brain simulation?

ryanmercer
>I can't tell if you're serious, but assuming you are,

I assume he is, given he is Greg Brockman, the CTO and a co-founder. I know Sam Altman is similarly optimistic, having told me on multiple occasions something along the lines of 'I can't focus on anything else right now', which in context I very much took as 'this presently consumes my waking thoughts and I only have time for it'.

This sort of drive is great, but I don't think it necessarily makes the belief true. Mr. Altman is financially independent; he needn't worry about things like rent or putting food on his table, and I imagine Mr. Brockman is also independently wealthy (or at least has several years of cushion if his OpenAI salary were to suddenly dry up), though perhaps not as much, given his previous position at Stripe.

These two, and perhaps other members of the team, can be overly optimistic about their passion. Both of them have this view, and they both co-founded OpenAI. This optimism and enthusiasm, and interesting project successes so far, certainly give them steam and attention. But how many aspiring athletes think they're going to get drafted for tens of millions of dollars when in reality they might be lucky to get scouted by a college, or lucky to get drafted to a European or Asian league and not necessarily a major-league US team? How many musicians think they'll get into Juilliard and go on to some top-tier symphony orchestra, or will be the next Country/Rock/Rap/Pop star that takes the world by storm, only to end up playing music with their friends at some dive bar a few times a year despite their enthusiasm and skill?

I think a major problem OpenAI has, which I've expressed to Altman, is that they suffer from what Silicon Valley in general does. They are myopic: their ranks are composed of people who are 100% behind AI/AGI, they dream about AGI, they want to create AGI, they absolutely think we will have AGI, they want AGI with every fiber of their being. They're high in the sky with apple pie about AGI.

But who's saying "hey, wait a minute, guys" and climbing up a ladder to grab them by the cuffs of their pants, pull them back down to the floor, and tie a tether to their leg? As far as I know, no one in their employ.

I think OpenAI needs to bring in some outsiders: a team internally that plays the role of a sanity check, and probably a board member as well. I think it is very dangerous to have only overly optimistic people working on your project. It reminds me somewhat of the movie Sling Blade: a lawnmower is brought in for repair and the folks can't figure out what's wrong with it, so they present it to Billy Bob Thornton's character, who has some sort of mental deficit; he looks at it briefly and states, "It ain't got no gas." He has a different perspective on the world, he sees things differently, and this allows him to see something that the others overlooked. While gobs of gobbledygook code and math are a far different thing from a lawn mower without fuel, I still think there is a danger in having one of the greatest STEM projects mankind has ever attempted staffed only by a bunch of coders, in a field that is effectively new, who largely have the same training and the same life experiences.

Here's a portion of what I said to Mr. Altman back in May of this year. I think it applies more than ever; it isn't exactly related to this comment chain, but maybe posting it here will get it seen by more people at OpenAI:

---

You are aware you guys are in a bubble there. People in the Bay Area are at least peripherally aware of what artificial intelligence is presently and could be. For the bulk of the country, and the majority of the world, people are largely clueless. If you say 'artificial intelligence', people either have no idea what you are talking about (even people in their 20s and 30s, which was shocking to me) or something like HAL 9000, Skynet, Colossus: The Forbin Project, etc. comes to mind. I think the industry, and OpenAI especially, are missing out on an opportunity to help educate people on what AI can and will be, and on how AI can be benevolent and even beneficial.

OpenAI is missing out on an opportunity here. While the bulk of resources obviously need to go to actually pursuing research, there is so much you could be doing to educate the masses, to generate an interest in the technology, to get more people passionate about/thinking about machine learning, AI and all of the potential applications.

...possible examples given...

You need to demystify AI Sam, you need to engage people outside of CS/Startup culture, engage people other than academics and venture capitalists.

...more examples given...

---

I will point out that in that same exchange I told him I thought raising the billions OpenAI would need was laughable. Well, I'll take a healthy bite out of my hat: they managed to raise a billion from a single source. Bravo.

I had the pleasure of visiting OpenAI towards the end of spring '18, and certainly from what I saw they are very serious about their goal and aren't joking about believing 100% that AGI is an attainable goal within their reach.

It's also worth noting that I applied to OpenAI in the past year, after my visit, for their "Research Assistant, Policy" position, and that I was somewhat miffed by the form rejection, which, from outside of STEM, seems very cold:

>We know that our process is far from perfect, so please take this primarily as a statement that we have limited interview bandwidth, and must make hard choices. We'd welcome another application in no fewer than 12 months - the best way to stand out is to complete a major project or produce an important result in that time

I still haven't a clue what major project or important result I could achieve in researching policy for artificial intelligence, given that:

- Artificial intelligence doesn't exist

- No one has created policy for it outside of science fiction

I may not have been the most qualified, which is fine, as I lacked the 4-year degree they had listed as a requirement, but a human being never once talked to me or asked me a question: just a web form and a copy-paste email with my first name inserted.

We don't always need someone with a stack of degrees, who is 100% pro-AI, and who has programming experience to help research policy and presumably lay the groundwork for both OpenAI and the industry. I think a team like that should be only 10-20% individuals who are experienced in the field; you need a diverse team, with diverse experience and diverse backgrounds. If an AGI is developed, it won't just serve the programmers of the world, and it won't have an impact only on their lives; STEM folks are far outnumbered by those with no STEM background.

Who is representing the common human in this? Who's asking "are you sure this is a good idea?", "should we really be training it with that data?", "is it really in the best interests of humanity to allow that company/entity to invest, or to license this to these kinds of causes?"

But hey, what do I know?

pron
https://www.buzzfeednews.com/article/tedchiang/the-real-dang...
ryanmercer
Exactly, and if they do create an AGI it's probably going to be a lot like its creators, since its original parameters were set by them.

Just look at Amazon's warehouse algorithm:

~Biotic unit achieved goal, raise goal~

~Biotic unit achieved new goal, raise goal~

~Biotic unit achieved new new goal, raise goal~

~Biotic unit failed new new goal, replace biotic unit~

~New biotic unit failed new new goal, replace biotic unit~

~New new biotic unit failed new new goal, replace biotic unit~

With Amazon, though, a human can eventually go "wow, we're firing new hires within the first 3 weeks like 97% of the time, and 100% within 6 weeks; erm, let's look at this algorithm".

But if you create an AGI that has the Silicon Valley mindset of "we will do this, because we have to do this" (an exact quote I heard from an individual while in the Bay Area to stop by OpenAI: "We will figure out global warming, because we have to"), then the AGI is probably going to be designed with the 'mindset' of "failure is not an option, a solution exists, continue until a solution is found", which, uh, could be really bad depending on the problem.

Here's a worst case scenario:

"I am now asking the computer how to solve climate change"

~~beep boop beep boop, beep beep, boooooooop~~ the CO2 emissions are coming from these population centers.

~~boop beep beep beep boop boop boop boop beep boop~~ nuclear winter is defined as: a period of abnormal cold and darkness predicted to follow a nuclear war, caused by a layer of smoke and dust in the atmosphere blocking the sun's rays.

~~boop beep beep, boop~~ Project Plowshare and Nuclear Explosions for the National Economy were projects where the two leading human factions attempted to use nuclear weaponry to extinguish fires releasing excessive carbon dioxide as well as for geoengineering projects. Parameters set, nuclear weapons authorized as non-violent tools.

~~beep beep beep boop boop boop, beep boop, beep, boop, beep~~ I now have control of 93% of known nuclear weapons, killing the process of 987 of the most populous cities will result in sufficient reduction for the other biotic species to begin sequestering more carbon than is produced, fires caused by these detonations should be minimal and smaller yield weapons used as airbursts should be capable of extinguishing them before they can spread. Solution ready. Launching program.

Watch officer at NORAD some time later "Shit, who's launching our nuke?!?"

Someone else at NORAD "they're targeting our own major population centers!"

Somewhere in Russia "Our nuclear weapons are targeting our own cities!"

Somewhere in Pakistan "our nuclear weapons are targeting our own cities!"

somewhere...

breck
> calling it "pre-AGI" pretentious to the level of delusion.

I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

That being said, you brought up some interesting points, even if I think your overall position is wrong--I think OpenAI is definitely going to hit "pre-AGI" if not AGI, and I do this stuff all day long.

pron
I was actively engaged in the field in the late-nineties when AI was also five years around the corner. I’ve mostly lost interest since then, and the disappointment that is deep learning has only dulled my enthusiasm further (not that it hasn’t achieved some cool things, but it’s a long way from where we’d thought we’d be by now).
breck
> I was actively engaged in the field in the late-nineties

So was Geoffrey Everest Hinton

> I’ve mostly lost interest since then

but he didn't give up.

If you expect someone to just hand us AGI in a nicely wrapped package with a bow, with all the details neatly described, you are absolutely right, that's really far off!

But for the record there are many people actively grinding it out in the field, day in and day out, who don't give up when things get hard.

chrshawkes
He's also telling us we're going in the wrong direction, and have been, with our approach to reinforcement learning. He's not convinced that's how the brain works. In fact, he's convinced it's not.
pron
The kind of language used in this release has actually hurt AI considerably before, so by pointing out that it's delusional I am not giving up; I am helping save AI from the research winter that OpenAI seems to be working on. You're welcome, AI!
breck
Okay, I'll concede your point that perhaps being bold could be bad publicity for the field. I think that's a reasonable position to take. I don't think it is correct, but I think it's reasonable. Even if it were the precursor to a drop in funding, I don't think the previous "AI winter" was so long in comparison to the century-long gaps in the advance of other technologies in history (binary was invented hundreds of years before the computer).

I would definitely not call OpenAI delusional. I would say all OpenAI is being here is "honest".

They are simply stating what the math tells them.

"E pur si muove"

AlexCoventry
> They are simply stating what the math tells them.

Which math?

igorkraw
> > calling it "pre-AGI" pretentious to the level of delusion.

>I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

I do (or at least I try; I do get paid for my attempts) and I concur with calling it delusion. So does François Chollet. So does Hinton, to some degree, and so does the founder of DeepMind (or at least they did in 2017: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis... ).

I want to like OpenAI. I think they did the right thing with GPT-2, and I give them a lot of credit for publishing things. That being said, I remain skeptical about AGI, and highly skeptical that AGI is feasible or the thing to worry about. I always make the argument that research toward controlling an AGI/AGI alignment is either a techified version of research into the problem of good global governance (in which case it is an interesting problem that desperately needs solving), or it is useless (because no matter how nicely you control the AGI, a non-accountable elite within the current system, the less-than-perfectly aligned government etc. will strongarm you into giving control to THEM before you come close to deploying it), or it is delusional (because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity).

breck
> the less-than-perfectly aligned government etc. will strongarm you into giving control to THEM before you come close to deploying it

and

> because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity

are very good points, and I share those concerns too, and have no good answers. I'm in the pessimist camp when it comes to AGI: I would bet heavily that it's going to happen, but I wouldn't bet a dollar on whether it will end up being good for humanity, as I haven't a clue.

bo1024
I study ML, and I completely agree with the quoted statement. Deep networks have gotten pretty good at recognizing correlations in data. That's not on the same map as AGI. I don't know what "pre-AGI" means exactly, but I would include things like counterfactual reasoning or the ability to develop and test models of the world, which are far beyond our AI capabilities so far. (Edit: yes, I am including RL; considering the relative performance of model-based vs. model-free, I think this is a fair statement. I don't mean to be pessimistic, just realistic, and trying to set expectations to avoid more winters.)
breck
To be clear, I don't think deep learning = AGI. I think it's just one important piece, but I think we are also making many other rapid advances in relevant areas (Neuralink's 10x+ improvement in electrodes, for one).
fossuser
If AGI is achievable (seems likely given brains are all over the place in nature) and achieving it will have consequences that dwarf everything else then doesn't it make sense to focus on it?

Yes, historically people were way too optimistic and generally went down AI rabbit holes that went nowhere, but two years before the Wright flyer flew, the Wright brothers themselves said it was 50 years out (and others were still publishing articles about human flight being impossible after it was already flying).

People are bad at predictions; in the Wright brothers' case, since they were the people who ultimately ended up doing it two years later, they were likely the best placed to make the prediction, and they were still off.

Given that AGI is possible and given the extreme nature of the consequences, doesn't it make sense to work on alignment and safety? Why would it make sense to wait? If you accidentally end up with AGI and haven't figured out how to align its goals then that's it, the game is probably over.

Maybe OpenAI is on the right path, maybe not - but I think you're way too confident to be as sure as you are that they are not.

dragonwriter
> If AGI is achievable

It almost certainly is. Humans make new intelligences all the time.

> and achieving it will have consequences that dwarf everything else

It probably won't; humans make new intelligences all the time. Changing the technology base for that doesn't have any significant necessary consequences.

A revolution in our ability to understand and control other intelligences might have consequences that dwarf anything else, with or without AGI, but that's a different issue, and moreover one whose shape is basically impossible to even loosely estimate without some more idea of what the actual revolution itself would be.

fossuser
The difference is in the scale of the intelligence, not just the technology.

It's not so much a new human-like intelligence that runs on silicon; it's a general problem-solving intelligence that can run a billion times faster than any individual human. This is the part I think you're underestimating.

If you have that without the ability to align its goals to human goals then that's a problem.

dragonwriter
> The difference is in the scale of the intelligence, not just the technology.

AGI is inherently no greater in scale than human intelligence, so scale is not a difference with AGI, though it might be with AGsuperI. But that's a different issue from mere AGI, and may be impossible or impractical even if AGI is doable; we have examples of human-level intelligence, so we know it is physically achievable in our universe, but we don't have such examples for arbitrarily capable superhuman intelligence.

fossuser
I think that's somewhat of an arbitrary distinction likely not to exist in practice.

If you have an AGI you can probably scale up its runtime by throwing more hardware at it. Maybe there's some reason that'll prevent this from being true, but I'm not sure that should be considered the default or most likely case.

Biology is limited in ways that AGI would not be due to things like power and headsize constraints (along with all other things that are necessary for living as a biological animal). Human intelligence is more likely to be a local maximum driven by these constraints than the upper bound on all possible intelligence.

dragonwriter
> If you have an AGI you can probably scale up its runtime by throwing more hardware at it

Without understanding a lot more than we do about both what intelligence is and how to achieve it, that's rank speculation.

There's not really any good reason to think that AGI would scale particularly more easily than natural intelligence (which, in a sense, you can scale with more hardware: there are certainly senses in which communities are more capable of solving problems than individuals.)

> Biology is limited in ways that AGI would not be due to things like power and headsize constraints

Since AGI will run on physical hardware it will no doubt face constraints based on that hardware. Without knowing a lot more than we do about intelligence and mechanisms for achieving it, the assumption that the only known examples are particularly suboptimal in terms of hardware is rank speculation.

Further, we have no real understanding of how general intelligence scales with any other capacity anyway, or even whether there might be some narrow "sweet spot" range in which anything like general intelligence operates, because we don't much understand either general intelligence or its physical mechanisms.

mritchie712
This is especially true considering we're talking about software vs. hardware (an airplane). A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.
Quarrelsome
> A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.

The same goes for warp drives, doesn't it?

The point is that people don't see how we build THAT out of the tools we currently have. We only build pastiches of intelligence today, and either we have an arrogant view of the level of our own intelligence, or we can't make THAT with THIS.

But maybe warp drives, maybe world peace too?

mritchie712
No, way wrong: there would be enormous hardware costs to building a warp drive. There are (possibly) near-zero costs to building AGI.
Quarrelsome
Mythical man-month. I feel like you're seriously underestimating how hard this is. It's got to be one of the greatest engineering challenges of our species, and to claim that it's "near zero cost" is offensive.
mritchie712
Not offensive in the least; it's a compliment to what our species has done to date. We are all standing on shoulders, and the shoulders have never been higher. Think of the things you can build in a day that were impossible to build 30 years ago. To think that there isn't at least some chance someone will build AGI in the next 30 years is foolish. Again, I'm just saying there is a reasonable chance, like hitting a home run. It's not likely for any given plate appearance, but given the number of games and players, it happens every summer day.
Quarrelsome
We're making the process more opaque. How can that scale to AGI? We'll be stuck at 80% done for much longer.

I would posit that while it's possible, it will take so long on this tech stack that we'll find another in the interim that will produce better results. I'm not convinced this branch is the winner.

mritchie712
Oh, do you mean how can Azure scale to AGI? I have no opinion on Azure; I just meant someone smart will figure it out. There are huge financial incentives to do so, and when that happens, we (humans) figure shit out.
Quarrelsome
> Oh, do you mean how can Azure scale to AGI?

No, not in the slightest. I mean that as we progress, the dev cycles get harder and slower. Then we need more engineers, and the administration of more engineers working together makes everything harder.

Have you ever considered that making a rock think might be one of the greatest engineering projects our species has ever taken on? Sure, humans might figure it out, but I'm of the belief that it will take them a very long time to. In addition, I believe that on that timescale a different tech stack might show more promise. I'm not convinced this technological branch scales all the way to AGI.

mritchie712
Fair point, but I think admin needs have gotten lighter. It took 400k people to get us to the moon. I want to see the results of 400k engineers working independently or in small teams on AGI.
Quarrelsome
Sounds like a great idea, and I'm all for it, but I'm talking about the integration of that mess. It will be like trying to hit an ant on the moon with the precision of 18th-century artillery.

> Well we've removed its irrational hatred of penguins but now it struggles with the concept of Wednesday again...

fossuser
Not really - we're not sure if warp drive is possible given the physical constraints of the universe.

AGI is possible because intelligence is possible (and common on earth) in nature.

StreamBright
Following your reasoning, flying at the speed of light is possible because photons travel at the speed of light. We are not photons, though. Is it possible or not to travel at the speed of light in a spaceship?
fossuser
Warp drives travel faster than the speed of light; that's what I meant by possibly not being possible.

Ignoring that, if we saw miniature warp drives everywhere around us in nature, then yes, I would be more confident they were possible.

StreamBright
I see your point; I just wanted to point out that there are different challenges for us than for nature. Flying like a bird presents different challenges than flying a Boeing 747, even though these challenges might share a subset of the physics, like Bernoulli's principle.
fossuser
Yep - I think that's fair and a good analogy.

Similarly to how human airplanes don't flap their wings like birds, there will probably be implementation differences that make sense but share the underlying principles. Particularly since the artificial version isn't constrained by things biology needs to handle.

Quarrelsome
> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked whether we're capable of building it. While it might be technically possible, are we capable of managing its construction?

All I see today are ways of making the process more opaque, for the benefit of not having to provide the implementation. How does that technique even start to scale in terms of its construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.

mikorym
Why would AGI dwarf anything?

There are at least 7 billion beings on the planet with general intelligence already. I think a bigger problem is the general well-being of the aforementioned 7 billion entities.

sverige
AGI would dwarf those 7 billion people because it would concentrate tremendous power into the hands of a very few.

It's the dream of being one of the few who gets to control and direct that concentrated power that fuels these dreams, which is why it's imperative that they dress it up in the language of benefiting society.

The essence of the ethical problem with AI is that there is no person or small group of people who can be trusted to use such power without creating a real dystopia for the rest of us.

fossuser
I think this is a pretty big misunderstanding of the AGI issue.

Nobody is going to control the hyper-intelligent AGI if its goals are not aligned with human goals more generally. That's the nature of something being a lot smarter than you with its own goals.

sverige
Wait, is the claim that intelligence alone determines who is in control? I've certainly seen lots of examples where people were controlled by others, even though they were orders of magnitude more intelligent than those who had power.

Are the people who are trying to make AI a reality planning to give away the ability to unplug the machine it will inevitably depend upon, or give it the ability to control nuclear weapons so that it can wipe out humanity before that happens, like a bad movie script? It really does seem that ridiculous to aver that the creators of AI, if it ever comes to be, won't retain ultimate control over what it can do in the physical world.

Whatever its goals end up being, they will be aligned with Microsoft's goals if OpenAI gets there first. That's what the billion dollars is meant to ensure.

fossuser
I don’t think humans are orders of magnitude apart.

The difference between the world’s smartest human and its dumbest human is tiny relative to the possible spectrum of intelligence.

Do you see the smartest chimps tricking or controlling humans?

If you wanted to trick a chimp into doing something you wanted it to do, do you think it could stop you? And we're probably a lot closer to chimp intelligence than an AGI would be to us.

fossuser
For a simple thought experiment, take one human brain architecture (say it can operate at 100 operations per second) and scale that up to a billion operations per second. It can do centuries' worth of human thinking in a couple of hours.

If you have an AGI whose goals are not aligned with your interests, it'll dwarf everything else, because it thinks faster than you (and can therefore act faster than you) in pursuit of its goals.
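The arithmetic behind that claim, using the thought experiment's assumed (purely illustrative) rates:

    base_ops_per_sec = 100              # assumed "human" rate from the thought experiment
    scaled_ops_per_sec = 1_000_000_000
    speedup = scaled_ops_per_sec / base_ops_per_sec   # 10,000,000x

    wall_clock_hours = 2
    subjective_hours = wall_clock_hours * speedup
    subjective_years = subjective_hours / (24 * 365)
    print(round(subjective_years))      # ~2283 years of "thinking" in two hours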

qqqwerty
But human intelligence is weird. It's not clear to me that increasing the speed of my brain would really accomplish much in my day-to-day life. A lot of the value that I add happens during these 'Eureka' moments, often triggered when I am working on a different problem, taking a break, or after a good night's sleep. Adding more processing speed may or may not make that process more scalable.

Another thing to consider is that in the real world, success is not easy to define, and it is only loosely correlated with intelligence. We have 7 billion people, each attempting random little variations on 'succeeding at life'. And the 'winners' generally require that some of the 7 billion people agree to 'reward' them (e.g., by giving them money). My last 3 purchases were watermelon seeds for my garden, a pair of jeans, and a dinner at a Vietnamese restaurant. It's not clear to me how AI would take over any of those transactions. Maybe it could make the jean manufacturing more efficient, but the price I paid was already pretty low.

fossuser
Sure, it happens in Eureka moments, which for you come after a break that might be a few hours; but if you're running a billion times faster, then a few hours turns into a billionth of that time. That's what I'm trying to get at with the example, even assuming the exact same architecture otherwise.

For the real-world success part, that's where goal alignment comes in. If we're going to solve things like the sun burning out, becoming an interplanetary species, or death, then having an AGI that can work on these problems with us (or as part of us, if Neuralink can succeed at what they want to do) will be a big deal.

It sounds crazy, but I think success here is a lot bigger than automating which clothes you were going to buy. Incentive-based systems like capitalism work pretty well, but not being able to coordinate effectively at scale is a major source of current human problems; theoretically, a goal-aligned AGI could do that coordination, or at least help us do it.

apta
> If AGI is achievable (seems likely given brains are all over the place in nature)

I don't see how that conclusion follows from the antecedent.

fossuser
Brains aren't magical. If the laws of nature allow them to exist, and we see generalized intelligence develop and get selected for repeatedly, then that suggests it can be done; it's just a matter of knowing how.
jacques_chester
Our struggle to understand the brain suggests that "just a matter of knowing how" might take a while.
clmul
It might also be that this can't be replicated artificially (that is, maybe human beings aren't smart enough to ever understand their own intelligence, even using all the tools at their disposal).

In a certain way, brains are so complicated that (at least for the moment) they seem quite magical to us.

fossuser
Things often seem magical until they're understood.

As for humans never being able to understand it, I guess that could be true, but I wouldn't bet on it.

pron
First of all, I was talking about the language, not the work. It makes sense to study AI as it does many other subjects, but we don't know that it will "have consequences that dwarf everything else" because we don't know what it will be able to do and when (we think it could, but so could, say, a supervirus, or climate change, or the return of fascism). People hang all sorts of dreams on AI precisely because of that. That cult I mentioned, the Rationalists, basically imagined AI to be a god of sorts, and then you can say, "wouldn't you want to build a god?" But we don't know if AI could be a god. Maybe an intelligent being that thinks faster than humans goes crazy? Of course, we don't know that either, but my point is that the main reason we think so much of AI is that, at this time, we don't know what it is and what it could do.

> Why would it make sense to wait?

Again, that's a separate discussion, but if we don't know what something is or when it could arrive, it may make more sense to think about things we know more about and are already here or known to be imminent. Anyway, anyone is free to work on what they like, but OpenAI does not know that they're "building artificial general intelligence."

> I think you're way too overconfident to be as sure as you are that they're not.

I don't know that they're not, but they don't know that they are, and that means they're not "building AGI."

joe_the_user
We don't know whether AGI is possible or even exactly what it is. However, if there were a form of intelligence where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things. A device that's akin to an army of well-organized brilliant people in a box clearly would have many capacities. So it's reasonable to say that if that's possible, investing in it may have a huge payoff. (Edit: the "strong" version of "AGI is possible" would be that AGI is an algorithm that gives a computer human-like generality and robustness while retaining ordinary software-like abilities. There are other ideas of AGI, of course - say, a scheme that simulates a person on such a high level that the simulated person has no access to the qualities of the software doing the simulation - but that's different).

The problem, however, is what I think underlies the other gp's objection. OpenAI isn't really working on AGI; it's making incremental improvements on tech that's still fragile and specialized (maybe even more specialized and fragile), where the only advance of neural nets is that now they can be brute-force programmed.

clmul
> However, if there were a form of intelligence where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things.

That's a very big if... Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together... Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does 'more' even mean in terms of human intelligence? more empathic? faster calculation ability? more memory? what would the use of this be? all things we can't really foresee), it raises the question of whether this would ever be possible in a cost-efficient way (human intelligence seems, in a certain way, "cheap").

joe_the_user
>That's a very big if...

Oh, this is indeed a big if. A large, looming aspect of the problem is that we don't have anything like an exact characterization of "general intelligence", so what we're aiming for is very uncertain. But that uncertainty cuts multiple ways. Perhaps it would take 100K human-years to construct "it", and perhaps just a few key insights could construct "it".

> Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together...

The nature of a problem generally determines the sort of human organization needed to solve it. Large engineering problems are often solved by large teams; challenging math problems are generally solved by individuals working with the published results of other individuals. Given we're not certain of the nature of this problem, it's hard to be absolute here. Still, it could come down to a few insights. And if it's a huge engineering problem, you may run into the problem that "building an AGI is AGI-complete".

> Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does 'more' even mean in terms of human intelligence? more empathic? faster calculation ability?

I've heard these "we'll get to human-level but it won't be that impressive" kinds of arguments and I find them underwhelming.

"What use would more memory be to an AGI that's 'just' at human level?"

How's this? Studying a hard problem? Fork your brain 100 times, with small variations and different viewpoints, to look at different possibilities, then combine the best solutions. Seems powerful to me. And that's just the most simplistic approach; it seems like an AGI with extra memory could jump between the unity of an individual and the multiple views of work groups in multiple creative ways. Plus, humans have a few quantifiable limits - human attention has been very roughly defined as being limited to "seven plus or minus two chunks". Something human-like but able to consider a few more chunks could possibly accomplish incredible things.

fossuser
I can understand your point about the language, but I guess I think it's reasonable to set the goal for what you actually want and work towards it. It may turn out to be unattainable, but I think generally you need to at least set it as the goal. It also seems unclear to me whether they are close to or far from it (I don't think it's on the same level as warp drive).

I don't know about the god thing you mention and the rationalist stuff I've read hasn't been about that. The main argument as I understand it is:

1. AGI is possible

2. Given AGI is possible if it's created without the ability to align its goals to human goals we will lose control of it.

3. If we lose control of it, it will have unknown outcomes which are more likely to be bad than benign or good.

Therefore we should try and figure out a way to make it safe before AGI exists.

Maybe humans just happen to be an intelligence upper bound and anything operating at a higher level goes crazy? That seems unlikely to me given that humans have a lot of biological constraints (heads have to fit out of birth canals, have to be able to run on energy from food, selective pressure for other things besides just intelligence). You could be right, but I'd bet on the other side.

The last bit is if we can solve this in a way that aligns the goals with human goals (open question since humans themselves are not really aligned) we could solve most problems we need to solve.

clmul
> Therefore we should try and figure out a way to make it safe before AGI exists.

Makes no sense to me: how would you ever be able to figure out a way to make something safe before it even exists?

Someone who has never built a nuclear reactor most likely could not think of a way to prevent the Chernobyl disaster.

(OK, maybe this is a bad example, as someone who did build one couldn't prevent it either, but the point should be clear)

fossuser
I think the argument is that decision theory and goal alignment can be worked on without knowing all the details about how an AGI will work.

https://intelligence.org/2016/12/28/ai-alignment-why-its-har...

wasoante
ah yes Yudkowsky, the well established AI researcher & definitely not a crank
dang
Personal attacks are not ok here, regardless of whom you're attacking. Can you please not post like this to HN?

https://news.ycombinator.com/newsguidelines.html

pron
I think discussions of AI safety at this stage -- when we're already having problems with what passes for AI these days, problems we're not handling well at all -- are a bit silly, but I don't have anything particularly intelligent to say on the matter, and neither, it seems, does anyone else, except maybe for this article, which argues that the AGI paranoia (as opposed to the real threats from "AI" we're already facing, like YouTube's recommendation engine) may be a product of a point of view peculiar to Silicon Valley culture: https://www.buzzfeednews.com/article/tedchiang/the-real-dang...
fossuser
I agree with you in a way, if AGI ends up being 300yrs out then work on safety now is likely not that important since whatever technology is developed in that time will probably end up being critical to solving the problem.

My main issue personally is that I'm not confident if it's really far out or not and people seem bad at predicting this on both sides. Given that, it probably makes sense to start the work now since goal alignment is a hard problem and it's unknown when it'll become relevant.

I read the BuzzFeed article and I think the main issue with it is he assumes that an AGI will be goal aligned by the nature of being an AGI:

"In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind."

Humans have general preferences and goals built in that have been selected for over thousands of years. An AGI won't have those by default. I think people often assume that something intelligent will be like human intelligence, but the entire point of the strawberry example is that an intelligence with different goals that's very good at general problem solving will not have 'insight' that tells it what humans think is good (that's the reason for trying to solve the goal alignment problem - you don't get this for free).

He kind of argues for the importance of AGI goal alignment which he calls 'insight', but doesn't realize he's doing so?

The comparison to Silicon Valley being blinded by the economics of its own behavior is just weak politics that misses the point.

pron
We don't know that "goal alignment" (to use the techo-cult name) is a hard problem; we don't know that it's an important problem; we don't even know what the problem is. We don't know that intelligence is "general problem solving." In fact, we can be pretty sure it isn't, because humans aren't very good at solving general problems, just at solving human problems.
Nov 08, 2018 · 72 points, 70 comments · submitted by pshaw
simonh
He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry but that's a contradiction. If it's on the same spectrum, then it's just a quantitative measure of where on the spectrum it lies. If it's qualitatively different, it's on another axis, another quality is in play.

His definition is also rubbish. Being useful at economically valuable work has nothing necessarily to do with intelligence. Writing implements are vital in pretty much all economic activities; many couldn't have been done at all without them before keyboards came along.

Deep Learning is great, it's a revolution, but it's a fairly narrow technology. It solves one type of task fantastically well, it just happens that solving this task is applicable in many different problem domains, but it's still only one technique. At no point did he show how to draw a line from Deep Learning to General AI in any recognisable form. It just looks like a hook to get you to hear his pitch.

It's a great pitch, but it's not about AGI.

wycs
>He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different.

No it is not. The basic premise of fixed-wing aircraft was the same from the Wright brothers to modern jets. Yet the Wright brothers' flyer was useless and a modern jet is not.

We have agents that can act in environments. His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now. This just does not strike me as an absurd claim. We have systems that can learn reasonably robustly. We should accord significant probability to the claim that higher-level reasoning and perception can be learned with these same tools given enough computing power.

He claims we cannot "rule out" near-term AGI. Let's define "rule out" as having a probability of 1% or lower. I think he's given pretty good reasons to up our probability to between 2-10%. For myself, 10-20% seems a reasonable range.

spuz
> No it is not.

What claim are you responding to here? Simonh said:

> He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry but that's a contradiction.

Which I agree with. How can two qualitatively different things be on the same spectrum? You later say yourself:

> His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different that what we have now.

Which seems to be the opposite of what simonh said and it's confusing to say the least.

wycs
You are right. I don't think I read his comment very carefully before replying.
gugagore
I think what I found most lacking in this video is that data did not play a role in the overview. The presenter discusses that a ton of computation is needed to do deep learning, but doesn't explain why. And really, it's because the models are big and the training data is even bigger. So computation improves, and helps you deal with bigger models and bigger data, but where does the data come from?

The big question to me isn't whether computation can scale, which this video makes me believe it will. It's whether the data will scale. In RL domains with good simulators, such as Go and the Atari games, data doesn't seem to be an issue. The in-hand robot manipulation work also makes heavy use of simulators to reduce the amount of real-world time needed to collect data. But I don't see an argument for how we will get the high-fidelity simulators needed to train these agents.

I do love the in-hand robot manipulation work, because it's one of the few that shows that results from simulation can be applied to real robotic systems. And while I hope for the sake of robotics that we can get better and better simulators, it's surprising not to see that as the central focus of the conversation about getting AGI to emerge from gradient descent on neural networks.

gdb
(I gave the talk.)

We are already starting to see the nature of data changing. Unsupervised learning is starting to work — see https://blog.openai.com/language-unsupervised/ which learns from 7,000 books and then sets state-of-the-art across almost all relevant NLP datasets. With reinforcement learning, as you point out, you turn simulator compute into data. So even with today's models, it seems that the data bottleneck is much less significant than even two years ago.

The harder bottleneck is transfer. In most cases, we train a model on one domain at a time, and it can't use that knowledge for a new related task. To scale to the real world, we'll need to construct models that have "world knowledge" and are able to apply it to new situations.

Fortunately, we have lots of ideas about how this might work (e.g. using generative models to learn a world model, or applying energy-based models like https://blog.openai.com/learning-concepts-with-energy-functi...). The main limitation right now: the ideas are very computationally expensive. So we'll need engineers and researchers to help us to continue scaling our supercomputing clusters and build working systems to test our ideas.
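
(To make the pretrain-then-transfer recipe concrete, here is a minimal sketch - not OpenAI's code, just an illustration of the general idea - of fine-tuning a generically pretrained language model on a tiny labeled task, assuming the Hugging Face transformers and torch packages are available:)

    # Minimal sketch of "unsupervised pretraining, then transfer to a labeled task".
    # Not OpenAI's code; model choice and hyperparameters are illustrative assumptions.
    import torch
    from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

    # Start from a language model pretrained on plain text (the unsupervised step).
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
    model.config.pad_token_id = tokenizer.pad_token_id

    # Fine-tune on a tiny labeled dataset (the transfer step); real runs use more data.
    texts = ["the movie was great", "the movie was terrible"]
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, return_tensors="pt", padding=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    for _ in range(3):  # a few gradient steps, purely illustrative
        out = model(**batch, labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The point of the sketch is only that the task-specific labeled data enters late and small relative to the unsupervised pretraining corpus; real transfer experiments differ in scale and detail.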

iotb
What are your thoughts on starting over completely from scratch, as Geoffrey Hinton has suggested? What are you doing as a group to attract and bring on such individuals? Does this occupy any portion of your efforts at OpenAI?

If you were given a demo of an AI system that uses a completely new/revolutionary approach towards various different problems with success, how open would you be to rethinking your position on 'Optimization techniques'?

Modeling seems like a stop-gap for getting over the limitations of Weak AI. As I recall, this is what knowledge-based expert systems tried in times past and failed at, because it's nothing but a glorified masking of the underlying problem with limited, human-inputted rulesets. I don't agree with Yann LeCun that the way forward to AGI is modeling. I feel like it's the best solution people worked up in response to the limitations of Weak AI, which were broadly and publicly acknowledged in 2017 and early 2018.

> The main limitation right now: the ideas are very computationally expensive.

This is because the fundamental core set of algorithms being used by the industry is fundamentally flawed yet favorable to big data/cloud computing.. A quite lucrative business model for currently entrenched tech companies. It's why they spend so much effort ensuring the broad range of AI techniques fundamentally stays the way it is.. because if it does, it means boatloads of money for them.

> So we'll need engineers and researchers to help us to continue scaling our supercomputing clusters and build working systems to test our ideas.

When you're attempting to resolve something and you are shown YoY that it isn't being resolved and requires even more massive amounts of compute, it means you're doing something wrong. It would be better to take a step back and re-evaluate your approach fundamentally. Again, what is your willingness to do so if shown something far more novel?

blueadept111
Yes, we can rule out near-term AGI, because we can also rule out far-term AGI, at least in the way AGI is defined in this talk. You can't isolate the "economically beneficial" aspects of intelligence. Emulating human-like intelligence means emulating the primitive parts of the brain as well, including lust, fear, suspicion, hate, envy, etc... these are inseparable building blocks of human intelligence. Unless you can model those (and a lot else besides), you don't have AGI, at least not one that can (for example) read a novel and understand what the heck is going on and why.
Florin_Andrei
It's how I feel about it too. All the structures that provide motivation, drive, initiative - are mandatory. Those have evolved first in the natural world, for reasons that I think are semi-obvious. Complex intelligence has emerged later.
visarga
I think we need to have an agent-centric approach: to view the world as a game, with a purpose, and the agent as a player learning to improve its game and its understanding of the world. Interactivity with humans would be part of the game, of course, and the AI agent would learn human values and the meaning of human actions as a byproduct of trying to predict its own future rewards and optimal actions. Just like kids.
londons_explore
Fruit flies have a fear of water.

It's pretty simple to model and predict, either for a human or a deep net given some training data.

"Emulating these primitive parts" isn't some impossibility.

isseu
Are lust, fear, and hate requirements for intelligence? They are part of human intelligence, for sure. I feel a problem is that we don't have a good definition of intelligence.
byteface
Yes. We learn through our emotions and use them for heuristics. They are measures of pleasure/stress against access to Maslow's needs. This drives instincts and behaviours. It also gives us values. When I 'think' or act I use schemas, but I don't knowingly use a GAN or leaky ReLU. I personally learn in terms of semantic logic, emotions and metaphors. My GAN is the physical world, society, the dialogical self and a theory of mind. He never mentioned the amygdala or angular gyrus, or biomimicking the brain, or creating a society of independent machines. Which we could do, but aren't even trying to my knowledge? I mean there's Sophia (a fancy puppet) but not much else.

We get told to use the term AGI despite the public calling it AI, since AI on its own is really just automation. But this talk feels like we're now allowed to call it AI again? It was presented as: given these advances in automation, we can't rule out arriving at apparent consciousness. But with no line drawn between the two.

We do have a definition for intelligence. Applied knowledge.

However, here's another thought. Several times in my life I knowingly pressed self-destruct. I quit a job without one to go to despite having a mortgage and kids. I sold all my possessions to travel. I've dumped girls I liked to be free. I've faced off against bigger adversaries. I've played devil's advocate with my boss. I've taken drugs despite knowing the risks, etc... And I benefitted somehow (maybe not in real terms) from all of them. None of these things seem like intelligent things to do. They were not about helping the world but about self-discovery and freedom. We cannot program this lack of logic, this perforating of the paper tape (electric ant). It's emergent behaviour based on the state of the world and my subjective interpretation of my place in it. Call it existential, call it experiential, call it a bucket list. Whatever.

AGI would need to fail like us to be like us. Feel an emotional response from that failure. And learn. Those feelings could be wrong, misguided. We knowingly embrace failure, as anything is better than a static state - i.e. people voting Trump because Hillary offered less change.

We also have multiple brains. Body/Brain. Adrenaline, serotonin. When music plays my body seems to respond before my brain intellectually engages. So we need to consider the physiological as well as the psychological. We have more than 2000 emotions and feelings (based on a list of adjectives). But that probably only scratches the surface. What about 'hangry'? Then learning to recognise and regulate it.

diff( current perception of world state, perception of success at creating a new desired world state (Maslow) ) = stress || pleasure.

Even then, how do you measure the 'success'? i.e. I have friends with depression and they don't measure their lives by happiness alone. I feel depression is actually a normal response to a sick world and that people who aren't a bit put out are more messed up. If we created intelligence that wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate it like we do with people.

Finally, I don't think they can all learn off each other. They need to be individual. Language would seem an inefficient data transfer method to a machine, but we individuate ourselves against society. Machines assimilating knowledge won't be individuals. More swarm-like. We would need to use constraints, which may seem counterproductive, so harder to realise.

Wow. I wrote more than I intended there. But yes, emotions are required IMO. Even the bad ones. Sublimation is an important factor in intelligence.

robertbenjamin
I really enjoyed reading this too!
isseu
I was expecting no response and found this. Thanks!
dfischer
I really enjoyed reading this. Thank you. It relates to some thoughts that have been percolating. I’m actually giving a small internal talk on a few of these ideas.

Thanks!

mindcrime
The question is, do we want "human like" intelligence, or "human level" intelligence? I'd argue that they are two separate things, and that the term "AGI" as widely used, is closer to the latter. That is, we want something that can generalize and learn approximately as well as a human, but not necessarily something that will behave like a human.

Of course if your definition of AGI involves the ability to mimic a human, or maybe display empathy for a human, etc., then yeah, you probably do need the ability to experience lust, fear, suspicion, etc. And IMO, in order to do that, the AI would need to be embodied in much the same way a human is, since so much of our learning is experiential and is based on the way we physically experience the world.

oliveshell
I’m not too convinced by this guy’s argument: as evidence, he presents the progress made by deep learning/CNNs in the past few years. He then rightly acknowledges the difficulty of getting machines to do abstraction and reasoning, noting that we have ideas about how to approach these things but that they require much more computing power than we have now.

...Then he basically asserts that we can extrapolate the near-term availability of tons more compute power from Moore’s Law, which is where he lost me.

We’re already running into the limits of physical law in trying to move semiconductor fabrication to smaller and smaller processes, and there are very real and interesting challenges to be overcome before, I think, we can resume anything close to the exponential growth we’ve enjoyed over the last 40 years.

This guy may well think a lot about these difficulties, but not mentioning them at all made his argument sound incredibly naïve to me.

21
> We’re already running into the limits of physical law in trying to move semiconductor fabrication to smaller and smaller processes, and there are very real and interesting challenges to be overcome before, I think, we can resume anything close to the exponential growth we’ve enjoyed over the last 40 years.

That is irrelevant. You just scale horizontally: more and more data centers. Sure, it will not be free like in the past.

wycs
>...Then he basically asserts that we can extrapolate the near-term availability of tons more compute power from Moore’s Law, which is where he lost me.

That's not what he's asserting. Even with Moore's law dead, OpenAI claims there is significant room with ASICs, analog computing, and simply throwing more money at the problem. There is a ton of low-hanging fruit in non-von Neumann architectures. We should expect it to be plucked, as we have a huge use case which is potentially limitlessly profitable.

foobiekr
The video boils down to “more compute => AGI.” It’s silly and specious.
iotb
Accelerating an optimization algorithm doesn't get you AGI; this should be clear by now. As for fielding such systems, one quick way to destroy humanity, to a degree, is to turn everything into a glorified optimization problem, which will no doubt be turned against people to maximize profit.
red75prime
If you accelerate AIXI (an optimization algorithm) [0] enough, you get (real-time) AGI.

[0]: https://en.wikipedia.org/wiki/AIXI
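
(For reference, a sketch of the standard formulation - Hutter's AIXI picks actions by expectimax over a Solomonoff-style mixture of all computable environments; the notation below follows the usual presentation rather than quoting the linked page:)

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \bigl[ r_k + \cdots + r_m \bigr]
          \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, q ranges over environment programs, \ell(q) is program length, the a/o/r are actions, observations, and rewards, and m is the horizon. The incomputable part is the sum over all programs consistent with the history, which is why only approximations (e.g. Monte-Carlo AIXI with context-tree weighting) can actually be run.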

iotb
If only this weren't a fundamentally flawed theory that isn't scalable, based on computational complexity and information theory.
red75prime
Approximations to AIXI are computable.
simonh
That's how I take it as well; as I said in another comment, it is a compelling pitch. And yes, he's not talking about Moore's Law, but about how much compute is actually being dedicated to DLNNs, simply because the value of doing so is going up so fast.
mckoss
It does seem he is conflating "progress" with "investment". Yes, the world is spending exponentially more compute each year since 2012 on training networks. The marvel is that neural architectures are scaling to more complex problems without much architectural change. But this is not an argument that AI is getting more efficient or productive over time and hence we can expect exponential performance improvements (like Moore's law).
gambler
>"Prior to 2012, AI was a field of broken promises."

I just love how these DNN researchers love to bash prior work as over-hyped, while hyping their own research through the roof.

AI researchers did some amazing stuff in the 60s and 80s, considering the hardware limitations they had to work under.

>"AT the core, it's just one simple idea of a neural network."

Not really. The first neural networks were built in the 50s and didn't produce any particularly interesting results. Most of the results in the video are a product of fiddling with network architectures, plus throwing more and more hardware at the problem.

Also, none of the architectures/algorithms used by deep learning today are more general than, say, pure MCTS. You adapt the problem to the architecture, or architecture to the problem, but the actual system does not adapt itself.

habitue
So, they didn't have backprop and automatic differentiation in the 50s. That's pretty fundamental and not just "fiddling with architectures"
gambler
But being fundamental in this context is a bad thing.

It's not like there is a single "neural" architecture that's getting better and better. There are dozens of different architectures with their own optimizations, shortcuts, functions and parameters.

Cybiote
This statement is fairly inaccurate. If you check Peter Madderom's 1966 thesis, you'll see that it states the earliest work on automatic differentiation was done in the 1950s. It's just that back then, it was called analytic differentiation. You can see many of the key ideas already existed back then, including research into specializations for efficiently applying the chain rule.

https://academic.oup.com/comjnl/article/7/4/290/354207

habitue
Ah, you're right on AD. But backprop was invented in the 80s
tomiplaz
The talk's title has very little to do with the actual talk. The talk is about progress in narrow AI in the last 6 years or so. While fascinating, it's still only progress in narrow AI. To make artificial intelligence general, one would have to somehow define a fitness function that itself is general. But how does one do that? How does one say to a machine: "Go out there and do whatever you think is most valuable"?

If some kind of goal has to be defined, it seems it will always be a narrow AI, where some outside entity defines what its goal is, instead of it coming to its own conclusion about what it should do in a general sense. Even if that machine is able to recognize the instrumental goals for reaching the final goal (and act accordingly), it still feels like a non-general intelligence, like connecting the dots based on the available input and processing, just to come closer to that final goal. If no final goal were given, I presume such a machine would do nothing: it would not randomly inspect the environment around itself and contemplate upon it; there would be no curiosity, no actions of any kind to find out anything about its environment and set its own goals based on observation.

It seems that for AGI to come, some kind of spontaneous emergence would have to occur, possibly by coming up with some revolutionary algorithm for information processing implemented inside an extremely capable computer (something that biological evolution has already yielded).

It is interesting, humbling and a bit depressing to apply the same reasoning to us, humans. We are relatively limited in terms of reason; it's just that this is not obvious to us, just as it is not obvious that the Earth is round, for example.

tlb
Having novel, unique goals is not necessary for AGI. Normal people don't have novel goals -- they mostly want love, comfort, and respect. An AGI that had as its goal "gain people's respect" would exhibit an unboundedly interesting range of behavior.
visarga
Well said. Humans (and all living things) have the same ultimate goal: life. They need to keep themselves alive somehow and keep their genes alive by reproduction. That single goal has blossomed into what we are today. If we train AI with an evolutionary algorithm, and let it fend for its needs (compute, repair, energy), then it could learn the will to life that we have, because all variants that don't have it will be quickly selected out of existence.

I think AGI could happen with today's technology if we only knew the priors nature found with its multi-billion-year search. We already know some of these priors: in vision, spatial translation and rotation invariance; in the temporal domain (speech), time translation invariance; in reasoning, permutation invariance (if you represent the objects and their relations in another order, the conclusion should be unchanged). With such priors we got to where we are today in AI. We need a few more to reach human level.
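
(For concreteness, a minimal sketch, assuming PyTorch, of how two of those priors are typically baked into standard layers: translation equivariance via weight sharing in a convolution, and permutation invariance via order-agnostic pooling over a set of objects.)

    # Minimal sketch (PyTorch assumed) of two priors mentioned above:
    # translation equivariance via convolution, permutation invariance via pooling.
    import torch
    import torch.nn as nn

    # 1) A convolution shares the same filter across all spatial positions, so a
    #    shifted input produces a correspondingly shifted feature map.
    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
    image = torch.randn(1, 3, 32, 32)
    features = conv(image)  # shape (1, 8, 32, 32)

    # 2) A set encoder: embed each object independently, then sum-pool, so the
    #    output is unchanged if the objects are fed in a different order.
    embed = nn.Linear(4, 16)
    objects = torch.randn(5, 4)            # 5 objects, 4 features each
    pooled = embed(objects).sum(dim=0)     # permutation-invariant summary

    shuffled = objects[torch.randperm(5)]
    assert torch.allclose(pooled, embed(shuffled).sum(dim=0), atol=1e-5)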

emtel
I think a lot of people who are emphatic that AGI is a long way off are saying so out of an allergic reaction to the hype, rather than due to sound reasoning.

(And let me be clear that none of this is an argument that AGI is near. I'm saying that confidence that it is far is unfounded.)

First, there are many cases in science where experts were totally blindsided by breakthroughs. The discovery of controlled fission is probably the most famous example. This shouldn't be surprising - the reason that a breakthrough is a breakthrough is because it adds fundamentally new knowledge. You could only have predicted the breakthrough if you somehow knew that this unknown knowledge was there, waiting to be found. But if you knew that, you'd probably be the one to make the breakthrough in the first place.

Second, most claims about the impossibility of near-term AGI are totally unscientific. By that, I mean that they aren't based on a successful theory of falsifiable predictions. What we'd want, in order to have any confidence, is a theory that can make testable predictions about what will and won't happen in the short term. Then, if those predictions turn out to be true, we can gain confidence in the theory. But this isn't what we get. What we get is people saying "We have no idea how to do x, y, and z, therefore it won't happen in the next 50 years". I don't see any evidence that people were able to predict even the incremental progress we've seen say, two years out. The fact is that when someone says "it'll take 50 years" that's just sort of a gut feeling, and people will almost certainly be making that same prediction the year before it actually happens.

Third, I think people have too narrow a view about what they imagine AGI might look like. People tend to envision something like HAL, that passes Turing tests, can explain chains of reasoning, and has comprehensible motivations. Let's consider the case of abstract reasoning, which is something thought to be very difficult. We tried and failed for decades to build vision systems based on methods of abstract reasoning, e.g. "detect edges, compose edges into shapes, build a set of spatial relationships between those shapes, etc". But humans don't use abstract methods in their visual cortex, they use something a lot more like a DNN. The mistake is in thinking that because the mechanism of successful machine vision resembles human vision, therefore the mechanism of successful machine reasoning must resemble human reasoning. But it's quite possible that we'll simply train a DNN by brute force to evaluate causal relationships by "magic", i.e. in a way that doesn't show any evidence of the sort of step-by-step reasoning humans use. You can already see this happening - when a human learns to play breakout, they start by forming an abstract conception of the causal relationships in the game. This allows a human to learn really really fast. But with a DNN, we just brute force it. It never develops what we would consider "understanding" of the game, it just _wins_.

Sorry the third point was so long, let me summarize: We think some things are hard because we don't know how to do them the way that we think humans do them. But that doesn't serve as evidence that there isn't an easy way to do them that is just waiting to be discovered.

aligshow
> First, there are many cases in science

In my experience, artificial intelligence research is based on fear, paranoia, and science fiction rather than a search for truth. It seems the so-called "researchers" are really just kicking up dust to lure venture capital to support some play time.

Where are the hypotheses? Where are the problems? What math will be brought to bear? What materials science? What philosophy? What business practices? What psychological questions?

In general, what parts of academia will be engaged? And what research has already been done?

If artificial intelligence is related to science, then what solutions to scientific questions can artificial intelligence reveal? And which questions?

No one is saying artificial intelligence illuminates every scientific question equally.

Philosophers have been playing fast and loose with definitions for problems, using the so-called "problem of consciousness" or "problem of free will" to put the idea in the mind of the listener that a problem exists, is well-defined, and accepted as a problem of academic philosophy.

Is artificial intelligence related to the philosophical problem of consciousness? If so, perhaps someone can start with explaining what the philosophers are talking about.

https://www.ted.com/talks/john_searle_our_shared_condition_c...

Is artificial intelligence related to translation? If so, what passes for "translation science" in academia? Is there even such a thing, or has translation research been turned into a ghetto of linguistics and computer science?

Is artificial intelligence related to the soul? If so, why are we asking academic researchers for their opinions, shouldn't we be asking ministers and priests and witch doctors?

roh0sun
Worth considering these two points.

Intelligence is not one-dimensional, and neither is evolution => https://backchannel.com/the-myth-of-a-superhuman-ai-59282b68...

Silicon-based intelligent machines might not be energy efficient => https://aeon.co/essays/intelligent-machines-might-want-to-be...

emtel
On the first article, if you believe that a human is smarter than a chimp, and a chimp is smarter than a snail, then it raises the question of whether entities smarter than humans are possible. I suppose it might be difficult to precisely define a concept of intelligence that matches our intuition that humans are smarter than chimps, who in turn are smarter than snails. But such difficulty just says something about how facile you are with defining concepts. It doesn't have any bearing on how things play out in the real world.

On the second point, who cares? If the first AGI draws gigawatts, it will still be an AGI.

seagreen
Why do people keep reposting that first article? I encourage you to reread it with a critical eye.

A choice quote:

> In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

> Humans do not have general purpose minds, and neither will AIs.

This is awfully similar to "On the Impossibility of Supersized Machines" (https://arxiv.org/pdf/1703.10987.pdf).

EDIT: Re: energy efficiency: the problem is that humans are too energy efficient. Your brain can keep functioning after 3 days of running across the Savanna without food, which is (a) awesome, and (b) not really helpful nowadays. The cost of this is that you can only usefully use a little energy each day, say 4 or 5 burgers at most. AGI prototypes will usefully slurp in power measured in number of reactors.
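
(A quick sanity check of that gap with round numbers - the wattages and calorie counts below are rough ballpark assumptions, not measurements:)

    # Rough sanity check of the energy comparison above; all figures are
    # ballpark assumptions, not measurements.
    KCAL_TO_JOULES = 4184
    human_intake_kcal_per_day = 2500            # ~4-5 burgers at ~500-600 kcal each
    human_watts = human_intake_kcal_per_day * KCAL_TO_JOULES / 86_400
    brain_watts = 20                            # commonly cited rough estimate

    gpu_cluster_watts = 1_000 * 300             # e.g. 1,000 GPUs at ~300 W each
    print(round(human_watts))                   # ~120 W for the whole body
    print(gpu_cluster_watts / brain_watts)      # ~15,000x the brain's power draw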

iotb
Did Artificial General Intelligence get redefined again towards something more short-term? So, first AI is hijacked and hyped 50 ways to Sunday.. Then came the apocalyptic narratives/control-problem hype to secure funding for non-profit pursuits and research. Then, when it was realized that narrative couldn't go on forever, AGI was hijacked, added to everyone's charter, and a claim was made that it will be made 'safe'. Now, AGI's definition is getting redefined to the latest Weak AI techniques that can do impressive things on insane amounts of compute hardware. How can you ever achieve AGI if this is the framing/belief system the major apparatus of funding/work centers on? Where is the true pursuit of this problem? Where are the completely new approaches?

One cannot rule out something unless they've spent a concerted amount of time dedicated solely to trying to understand it. If there is no fundamental understanding of human intelligence, what is anyone frankly talking about? Or doing?

I have yet to hear a cohesive understanding of human intelligence from the various AI groups. I have yet to hear a sound body of individuals properly frame a pursuit of AGI. So, what is everyone pursuing? There seems to be no grand vision or lead wrangling in all of these scattered add-on techniques to NNs. I do see a lot of groups working on weak AI, or chipping away at AGI-like feature sets with AI techniques, making claims about AGI. Everyone has become so obsessed with iterating that they fail to grasp the proper longer-term technique for resolving a problem like AGI.

Absent from the discussion are conversations on neuroscience and the scientific investigation of intelligence. There's more sound progress being made in the public sector on concepts like AGI than in the private sector, mainly because the public sector knows how to become entrenched, scope, and target an unproven long-term goal and project.

The hype, as far as I see it, is clearly distinguishable from the science. Without honest and sound scientific inquiries, claims in any direction are without support. Everyone's attempting to skip the science and pursue engineering in the dark with flashy public exhibitions, mainly because of funding.. You can't exit such a room and make sound claims about AGI. If a group claims they are pursuing AGI, I expect almost all of their work to be scientific research pursuing an understanding.

That being said, it appears no one is interested in funding or backing such an endeavour. Everyone states they want to back/invest in such a group on paper, but when it comes down to it the money isn't there; they are obviously targeting shorter-term goals/payouts, and/or frankly don't know what type of pursuit or group of individuals is required. No one wants to take the time to understand what such a group would look like. No one wants to make a truly longer-term bet. This is why things have been spiraling in circles for years.

So, as it has been stated time and time again.. AGI will come and it will come from left field. There are individuals who truly care to pursue and develop AGI and they're willing to sacrifice everything to achieve it. If no funding is available, they'll fund themselves. If groups won't accept them because they aren't obsessed with deep learning or don't have a PhD (clearly the makeup that only results in convoluted weak AI), they'll start groups themselves.

Passion + capability + lifelong pursuit is how all of the great discoveries of time have come to us, with the mainstream seemingly never understanding such individuals, supporting them, or believing them until after they've proven themselves. No pivots. No populist iterations. A fully entrenched dedication towards achieving something until it's done.

So, no.. you can't rule out AGI in the near term, because there is no spotlight on the individuals or groups with the capability to develop it on such time horizons, and the thinking frankly just isn't there in the celebrated groups with funding. Everyone's in the dark, and it's an active choice and mindset which causes this.

Geoffrey Hinton says start all over.... Yann LeCun raises red flags. No one listens. No one acts. Everyone wants a piece of the company that develops the next trillion-dollar 'Google'-like product space centered on AGI, but no one wants to spend the time to consider what such a company would be, what human intelligence is, or who is looking at it in a new way from scratch, as some of the most important people in AI have suggested. So, you see... this is why the unexpected happens. It is unexpected because no one spends the time or resources necessary to cultivate the understanding to expect its coming.

cvaidya1986
Nope I’ll build it.
tim333
>Can we rule out near-term AGI?

In keeping with Betteridge's law: no, not really. Hardware capabilities are getting there, as evidenced by computers trashing us at Go and the like, and with thousands [0] of the best and brightest going into AI research, who's to know when someone is going to find working algorithms?

[0] https://www.linkedin.com/pulse/global-ai-talent-pool-going-2...

zackmorris
Not sure why you're getting down voted for this since I was going to say the same thing. My feeling is that AGI is 10 years away, certainly no more than 20. That's coming from a pure computing power and modeling perspective, for example just by putting an AI in a simulated environment and letting it brute force the search space of what we're requesting of it with 10,000 times more computing power than anything today. Finding a way for an AI to focus its attention at each level of its capability tree in order to recruit learned abilities, and then replay scenarios multiple times before attempting something in real life, are some of the few remaining hurdles and not particularly challenging IMHO.

The real problem though (as I see it) is that the vast majority of the best and brightest minds in our society get lost to the demands of daily living. I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago. I think I'm hardly the exception. Without some kind of exit, winning the internet lottery basically like Elon Musk, we'll all likely see AGI come to be sometime in our lifetimes but without having had a hand in it.

And that worries me, because if only the winners make AI, it will come into being without the human experience of losing. I sense dark times looming, should AI become self-aware in a world that still has hierarchy, that still subverts the dignity of others for personal gain. I think a prerequisite to AGI that helps humanity is for us to get past our artificial scarcity view of reality. We might need something a little more like Star Trek where we're free of money and minutia, where self-actualization is a human right.

iotb
Entrenched mindsets don't like having the flaws in their views highlighted. It's one of humanity's most serious flaws. As far as AGI being 10-20 years away based purely on compute power: you can't make this statement accurately unless you have a firm understanding of the underlying algorithms that power human intelligence and, by extension, AGI. From there, you also need a formal education and deep industry experience with hardware to know what its capabilities are today, what they will be in future roadmaps, and how to most efficiently map an AGI algorithm onto them. I'd say that 0.1% of people have this understanding, and nobody is listening to them.

> The real problem though (as I see it) is that the vast majority of the best and brightest minds in our society get lost to the demands of daily living. I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago. I think I'm hardly the exception. Without some kind of exit, winning the internet lottery basically like Elon Musk, we'll all likely see AGI come to be sometime in our lifetimes but without having had a hand in it.

They don't get lost so much as they become trapped, due to the systematic and flawed optimization structures found throughout society. All is not lost if one breaks out long enough to realize they can make certain pursuits if they are willing to make a sacrifice. The bigger the pursuit, the bigger the required sacrifice. Not many people are willing to do that in the valley when you have a quarter-of-a-million-dollar paycheck staring you in the face. You could of course decide to sacrifice everything one given day, and you'd easily have 5 years of runway if you saved your money properly. Obviously, VC capital won't fund you. Obviously universities aren't the way to go, given the obsession with Weak AI. Obviously no AI group will hire you unless you have a PhD and/or are obsessed with Weak AI. Obviously you might not even want this, as it will cloud your mind. So, clearly, the way to make groundbreaking progress is to walk off your job, fund a stretch of research yourself, and be willing to sacrifice everything. Quite the sacrifice? People will laugh at you. What happens if you fail? Socially, per the mainstream trend, you'll fall behind. If you have a partner, this will be even more difficult, as the trend is to get rich quick, get promoted to management, buy a million-dollar home, have kids, and stay locked in a lucrative position at a company. And what of your pride? Indeed.. And therein lies the true pursuit of AGI.

The winners are pushing fundamentally flawed AI techniques because they require massive amounts of data and compute, which is their primary business model. They won't succeed, because they are optimizing a business model that is at the end of its cycle and not optimizing the pursuit of AGI.

AGI is coming and it is completely out of the scope of the current winners. If a person desires to pursue and develop AGI, they'd have to be bold enough to sacrifice everything... It's how all of the true discoveries have been made, for all of time and science. Nothing has changed, but primarily for reasons of money, when the historical lessons are far enough off, people attempt to re-tell/re-invent the wheel in their favor.. only to be reminded: nothing has changed.

The individual discoverers change over time however for they learn from history.

zackmorris
Well what I'm saying is that we can derive your first paragraph purely with computing power. What we need are computers with roughly 100 billion cores, each at least capable of simulating a neuron (maybe an Intel 80286 running Erlang or similar), and a simple mesh network (more like the web) that's optimized for connectivity instead of speed. This is on the order of 100,000*100,000,000,000 = 1e16 transistors, or about 7 orders of magnitude more than an Intel i7's billion transistors. It would also be running at at least 1 MHz instead of the 100 or 1000 Hz of the human brain, so we can probably subtract a few orders of magnitude there. I think 10,000 times faster than today is reasonable, or about 2 decades of Moore's law applied to video cards.
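
(Spelling out that back-of-envelope arithmetic, taking the round numbers above as assumptions:)

    # Back-of-envelope arithmetic from the estimate above; all inputs are the
    # round numbers assumed there, not measured values.
    import math

    neurons = 100e9                 # ~100 billion cores, one per neuron
    transistors_per_core = 100e3    # roughly an Intel 80286-class core
    total_transistors = neurons * transistors_per_core   # 1e16

    i7_transistors = 1e9
    print(total_transistors / i7_transistors)   # ~1e7, i.e. ~7 orders of magnitude

    # "10,000x today's compute" vs. a doubling every ~18 months:
    doublings = math.log2(10_000)               # ~13.3 doublings
    print(doublings * 1.5)                      # ~20 years, i.e. about 2 decades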

Then we feed it scans of human minds doing various tasks and have it try combinations (via genetic algorithms etc) until it begins to simulate what's happening in our imaginations. I'm arguing that we can do all that with an undergrad level of education and understanding. Studying the results and deriving an equation for consciousness (like Copernicus and planetary orbits) is certainly beyond the abilities of most people, but hey, at least we'll have AGI to help us.

Totally agree about the rest of what you said though. AGI <-> sacrifice. We have all the weight of the world's 7 billion minds working towards survival and making a few old guys rich. It's like going to work every day to earn a paycheck, knowing you will die someday. Why aren't we all working on inventing immortality? As I see it, that's what AGI is, and that seems to scare people, forcing them to confront their most deeply held beliefs about the meaning of life, religion, etc.

iotb
You're focusing on an aspect of neurons of which there isn't even an accurate understanding, and attempting to make a direct mapping to computer hardware. This is framing without understanding, and you should be able to clearly see why you can't make analyses or forward projections based on it.

Video cards operate on a pretty limited scope of computing that might not even be compatible with the neuron's fundamental algorithm. The only thing SIMD has proven favorable towards is basic mathematical operations with low divergence, which is why optimization-algorithm-based NNs function so well on them.

This is the entrapment many people in the industry fall for. The first step towards AGI is admitting you have zero understanding of what it is. If one doesn't do this and simply projects their schooling/thinking and tries to go from there, you end up accomplishing far less.

You can't back derive aspects of this problem. You have to take your gloves off and study the biology from the bottom up and spend the majority of your time in the theoretical/test space. Not many are willing to do this even in the highest ranking universities (Which is why I didn't pursue a PhD).

There is far too little motivation for true understanding in this world, which is why the majority of the world's resources and efforts are spent on circling the same old time-tested wagons.. creating problems, then creating a business model to solve them. We are only fooling ourselves in these mindless endeavors. When you break free long enough, you see it for what it is and also see the paths towards more fundamental pursuits. Such pursuits aren't socially celebrated or rewarded. So, you're pretty much on your own.

> As I see it, that's what AGI is, and that seems to scare people, forcing them to confront their most deeply held beliefs about the meaning of life, religion, etc.

One thing about this interesting Universe is that when a thing's time has come, it comes. It points to a higher order of things. There's great reason and purpose to address these problems now, and it's why AGI isn't far off. If you look at various media/designs, society is already beckoning for it.

zackmorris
You know, I find myself agreeing with pretty much everything you've said (especially limitations of SIMD regarding neurons etc). I'm kind of borrowing from Kurzweil with the brute force stuff, but at the same time I think there is truth to the idea that basic evolution can solve any problem, given enough time or computing power.

I guess what I'm getting at, without quite realizing it until just now, is that AI can be applied to ANY problem, even the problem of how to create an AGI. That's where I think we're most likely to see exponential gains in even just the next 5-10 years.

For a concrete example of this, I read Koza's Genetic Programming III edition back when it came out. The most fascinating parts of the book for me were the chapters where he revisited genetic algorithm experiments done in previous decades but with orders of magnitude more computing power at hand so that they could run the same experiment repeatedly. They were able to test meta aspects of evolution and begin to come up with best practices for deriving evolution tuning parameters that reminded me of tuning neural net hyperparameters (which is still a bit of an art).

Thanks for the insight on higher order meaning, I've felt something similar lately, seeing the web and exponential growth of technology as some kind of meta organism recruiting all of our minds/computers/corporations.

gdb
(I gave the talk.)

> I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago.

For a bit of optimism: if you are a good software engineer, you can become a contributor in a modern AI research lab like OpenAI. We hire software engineers to do pure software engineering, and can also train them to do the machine learning themselves (cf https://blog.openai.com/spinning-up-in-deep-rl/ or our Fellows and Scholars programs).

As one concrete example, though not the norm, our Dota team has no machine learning PhDs (though about half had prior modern ML experience)!

marvin
Just for posterity, to see if I was completely bonkers in 2018: I believe that it is possible to realize AGI today, using currently available hardware, if we just knew enough of the principles to create the right software.

Novel computational concepts have often been demonstrated on very old hardware, with the full knowledge of the tricks required to make them work. Often, more powerful hardware was required in order to pioneer the technology, and often the proof-of-concept on older hardware is too slow and clunky to have been a compelling product. But it's often been physically possible for longer than people realize.

I've never made a "long game" statement like this before, so it'll be interesting to read this comment about what I thought before in 2038 or 2048, if it still exists then.

iotb
It mainly comes down to the software and algorithmic techniques. It's not something many people want to hear, as there is a lot of money and effort placed on an alternative framing. If you believe it takes massive amounts of hardware/compute, that favors the current billion-dollar, lucrative business of cloud computing at scale. If you are made to believe it takes 100s of top-ranking PhDs because it's so incredibly complex, you are more likely to value the groups pursuing it more highly. If the techniques and white papers are all a convoluted mess of spaghetti, one comes to believe that this is an otherworldly pursuit.

It is none of these things, however. The hardware is already capable. It's the popular fundamental techniques and theories that are flawed. Essentially, you need to start over, as Geoffrey Hinton said. Something no one wants to put effort into doing or fund.

So, indeed, AGI is here today. It's just not within the frame of the populist efforts. A capable individual with the freedom to do deep pondering and construction without any outside influence is more likely to crack this puzzle than a room of PhDs all systemically subscribed to the same fundamentally flawed approach of optimization algorithms.

As far as the compute goes, does anyone even truly spend the time to understand it anymore in this day and age of frameworks on frameworks? I mean truly understand it? And therein lies the other fundamental problem. How can one make broad statements about the computational requirements of AGI when the most they know about the underlying hardware is an AWS config file?

Swaths of the industry have shut themselves out from ever developing AGI. Sadly, they're the groups w/ the most funding and backing because they represent the same flawed mainstream ideology as every other AI group.

It will be an outsider who does it, and the core theoretical approach is already resolved.

andbberger
> Essentially, you need to start over as Geoffrey Hinton said.

Oh come on, he says this every couple of years... Bengio made a meme about it... [1]

[1] https://www.youtube.com/watch?v=mlXzufEk-2E

iotb
He says it. Others have done it years ago.
chongli
> It mainly comes down to the software and algorithmic techniques.

I'm not convinced. The architecture of a computer, with its extremely fast CPU cores sitting behind the bottleneck of a memory bus and a small hierarchy of caches, is radically different from the architecture of the human brain.

The brain is parallel on an unimaginably larger scale than anything we've ever built. The brain also doesn't put a wall between "storage" and "compute." I think there are tons of problems the brain solves easily with parallelism that would be bottlenecked by memory latency on a computer.

andbberger
I don't see how you could arrive at that conclusion with any certainty.

There is a computability argument: part of your statement implies the assumption that 'AGI' is computable. I would be inclined to agree on this point; I think a preponderance of evidence is required before we start seriously considering that intelligence is uncomputable. After all, last I checked, the jury was still out as to whether super-Turing machines are physically realizable.

So let's suppose for the sake of argument intelligence is computable. Well then in principle 'AGI' has been 'realizable' since mid-last century when we first constructed Turing machines.

With the caveat that it may take the lifetime of the universe to classify an image of a cat.

It is quite conceivable that, although Turing-computable, 'AGI' requires taking expectations over such large spaces that it won't really be 'realizable' until hardware improves by another few orders of magnitude.

drcode
I think it's clear that an important element of AGI simply involves traversing large search spaces: this is essentially what deep neural nets are doing with their data as they train and return classification results... and it's not unreasonable to think they can already do this with performance on par with the human brain.

The problem is that there's a lot of additional "special sauce" we're missing to turn these classification engines into AGI, and we don't have a clue whether this "special sauce" is computationally intensive or not. I'm guessing the answer is "no", since the human cortex seems so uniform in structure that it appears to be mostly doing the rather pedestrian search part, not the "special sauce" part.

(disclosure: I'm not a neuroscientist or AI researcher)

andbberger
Actually I think there's good reason to think the opposite; that the "special sauce" missing in current machine learning approaches will in fact be very computationally intensive.

I am thinking of two things in particular that current ML approaches lack and that are ubiquitous in neuroscience (well, arguably for the second...): feedback and modeling uncertainty.

A proper Bayesian approach to uncertainty will be akin to an expectation over models; that's an extra order of magnitude of compute.
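
To make the "expectation over models" point concrete, here is a minimal sketch under the assumption that we simply average a made-up ensemble; the only point is that prediction cost grows linearly with the number of models averaged:

  # Minimal sketch of "expectation over models": approximate a Bayesian
  # posterior predictive by averaging an ensemble. The "models" below are
  # invented linear functions; the point is only that prediction cost scales
  # with the number of models you average over.
  import random

  def sample_model(rng):
      # Pretend each model is a draw from a posterior over linear weights.
      w, b = rng.gauss(2.0, 0.3), rng.gauss(0.5, 0.1)
      return lambda x: w * x + b

  rng = random.Random(0)
  ensemble = [sample_model(rng) for _ in range(100)]  # ~100x the cost of one model

  x = 3.0
  preds = [m(x) for m in ensemble]
  mean = sum(preds) / len(preds)
  std = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
  print(f"prediction {mean:.2f} +/- {std:.2f}")  # mean plus an uncertainty estimate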

Feedback is also likely to be expensive. Currently all we really know how to do is 'unroll' feedback over time and proceed as normal for feedforward networks.

Also keep in mind that inference and training are pretty different things; inference is fast, but training takes a lifetime.

> and it's not unreasonable to think they can already do this with a performance on par with the human brain.

I disagree for the reasons stated above; we know we're missing a big piece because of the lack of feedback and modeling of uncertainty.

I do happen to be a neuroscientist and an ML researcher... but I think that just means I am slightly more justified in making wild prognostications... which is totally what this is... But ultimately I'm still just some schmuck, why should you believe me?

Nothing I have said in this comment can be considered scientific fact; we just don't know. But I have a feeling...

naasking
> I am thinking of two things in particular that current ML approaches lack; that are ubiquitous (well arguably for the second...) in neuroscience: feedback and modeling uncertainty

You'll be interested in a recent post about a new paper for an AI physicist then: https://news.ycombinator.com/item?id=18381827

marvin
Yes, there is of course nothing certain about my statement -- it is a purely speculative guess based on the computational power of the hardware available in the world today, and on the fact that we are already having some success with biologically-inspired techniques for classification, which constitutes part of acting intelligently in a complex environment. Sometimes it's fun to make wild guesses.

Note that there's nothing in this guess about the scale of effort required, but I'm imagining something smaller than the Manhattan Project. Maybe something on the scale of a whole AWS datacenter.

An important factor is the computational power needed to emulate the essence of an intelligent entity that exists in the physical world today (i.e. the human brain & sensory system), as you're getting at in your comment about hardware capability.

(That a human-equivalent or better machine for performing complex tasks in the real world is realizable with Turing-computable algorithms seems more or less guaranteed to me, unless there are physical processes happening in biological humans that are not Turing-computable.)

andbberger
> (That a human-equivalent or better machine for performing complex tasks in the real world is realizable with Turing-computable algorithms seems more or less guaranteed to me, unless there are physical processes happening in biological humans that are not)

'Guaranteed' sounds a little too strong to me. We just don't know yet.

Also...

> we are already having some success at using biologically-inspired techniques for classification

I presume you are referring to CNNs, which in good faith can only very _very_ loosely be characterized as biologically inspired. People are constantly trying to draw comparisons (see the 'look, the brain be like CNNs' papers that show up at NIPS every year) and I wish they would stop... certainly there are some superficial similarities, but it's more a case of convergent evolution than anything; there is no deep connection to be unraveled...

I don't think there's any deep insight underlying CNNs beyond 'wow, building your data's symmetries into your model works great!'. Which is a pretty obvious and fundamental thing imo...
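
As a toy illustration of what "building your data's symmetries into your model" buys you (invented numbers, nothing more): a 1-D convolution is translation-equivariant, so shifting the input just shifts the output and the same filter detects the pattern everywhere:

  # Toy illustration of building a symmetry into the model: a 1-D convolution
  # is translation-equivariant, so shifting the input just shifts the output
  # and the same filter finds the pattern at every position.
  def conv1d(signal, kernel):
      k = len(kernel)
      return [sum(signal[i + j] * kernel[j] for j in range(k))
              for i in range(len(signal) - k + 1)]

  kernel = [1, -1]                     # a crude edge detector
  signal  = [0, 0, 5, 5, 0, 0, 0, 0]
  shifted = [0, 0, 0, 0, 5, 5, 0, 0]   # same pattern, two steps later

  print(conv1d(signal, kernel))    # [0, -5, 0, 5, 0, 0, 0]
  print(conv1d(shifted, kernel))   # [0, 0, 0, -5, 0, 5, 0] -- same response, shifted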

marvin
> 'Guaranteed' sounds a little too strong to me

I did say 'more or less guaranteed'. Let's say (for the sake of argument) there's less than a 5% probability that there's something yet undiscovered about animal brains/nervous systems that has qualitatively different computational properties from what the current theory of computation says is possible in the physical world.

Does this sound excessively optimistic to you? I would be delighted to be proved wrong on this; it would be as if the theory of relativity suddenly shook up Newtonian physics, but in the computational domain.

But I'm not aware of any evidence that our models of computability have any holes. If you know of any, please let me know! I am a curious skeptic at heart :)

Florin_Andrei
I think we're still lacking the conceptual framework, the grand scheme if you will.

AGI probably requires some kind of hierarchy of integration, and current AI is only the bottom level. We probably need to build a heck of a lot of levels on top of that, each one doing more complex integration, and likely some horizontal structure as well, with various blocks coordinating with each other.

marvin
I wonder what this would look like? What would be some ways one could connect different types of AI systems in a loose and fuzzy way so that they can use each other's output meaningfully?

I like this line of reasoning: you're basically stating that we have found effective ways of emulating some of the sensory parts of a central nervous system. That seems intuitively right; we can classify things and use the outputs of this classification in some pre-determined ways, but there's no higher-level reasoning.

ilaksh
There are many hierarchical AI systems, though. For example, Hierarchical Hidden Markov Models.

Pretty sure DeepMind made an RL system a bit like that.

Also reminds me of Ogma AI's technology.

Florin_Andrei
Well, a scooter and the Falcon Heavy are both "vehicles", but that doesn't mean the scooter will ever do what the rocket does.
jbattle
That's my intuition as well. I'd expect AGI to arise once we put together enough varied tools into an integrated whole. For classification of sensory input, you'd build in a CNN or the like. You'd build in something else (an RNN?) for NLP. You'd use something else for planning (which might also work as the highest-level controller, determining "what sort of tool should I apply to solve goal X?").
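
A minimal sketch of that kind of wiring, with hypothetical names and stubbed-out components standing in for the CNN, RNN, and planner:

  # Minimal sketch of the "integrated toolbox" idea, with hypothetical names;
  # the lambdas below are stand-ins for trained components (CNN, RNN, planner).
  from typing import Callable, Dict

  class Agent:
      def __init__(self, tools: Dict[str, Callable], planner: Callable[[str], str]):
          self.tools = tools      # e.g. "vision" -> a CNN, "language" -> an RNN
          self.planner = planner  # top-level controller: picks a tool per goal

      def solve(self, goal: str, observation):
          tool = self.planner(goal)            # "what sort of tool for goal X?"
          return self.tools[tool](observation)

  agent = Agent(
      tools={"vision": lambda obs: f"classified({obs})",
             "language": lambda obs: f"parsed({obs})"},
      planner=lambda goal: "vision" if goal == "identify object" else "language",
  )
  print(agent.solve("identify object", "pixel data"))  # -> classified(pixel data)
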
selestify
> Novel computational concepts have often been demonstrated on very old hardware

What are some examples of this?

bgrayland
Demoscene output?
zshrdlu
You're essentially restating the Physical Symbol System Hypothesis, formulated by the founding fathers of AI (Herbert Simon and Allen Newell). http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/PublishedPap...
s1dechnl
You're not bonkers. The hardware is already capable, especially at scale. It largely comes down to a cohesive and broad-ranging software package. Interestingly, this goes against the profit motivations of various groups, which is why you don't hear it framed that way as often. Of course, when the current hype cycle is tapped out, you'll start hearing the rhetoric change.
Ngunyan
I am of the same opinion.
mbrock
I basically think software intelligence will not resemble human intelligence until it is embodied the way humans are, with the same kind of body and social upbringing; I don't think you can actually fake it.

I think you can make a software intelligence that is very intelligent in its own kind of strange alien way. But as far as involving it in human concerns, it will not really understand the human world, even if we try to make it seem like it does.

A robot lawyer might be very intelligent, but it will fail Turing tests. Trying to make it pass Turing tests without giving it a human body and upbringing will amount to patching a fundamentally false system and hoping for temporary illusions.

This opinion is informed by arguments made by Hubert Dreyfus stemming ultimately from a Heideggerian perspective.

nwah1
But when people talk about a "superintelligence" they are not talking about creating an AI lawyer. They are talking about creating a recursively self-improving intelligent system that attempts to make use of all available resources as efficiently as possible in service of its utility function.

You could envision a very intelligent but socially inept system able to produce products more cheaply than anyone else, and thereby engage in trade and acquire resources. You could also just envision a system that ignores human morality and simply appropriates matter, space, and energy for its own purposes, irrespective of what humans consider to be their property.

As such, the Turing Test is fairly irrelevant.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.