HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Deadly Truth of General AI? - Computerphile

Computerphile · Youtube · 7 HN points · 5 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Computerphile's video "Deadly Truth of General AI? - Computerphile".
Youtube Summary
The danger of assuming general artificial intelligence will be the same as human intelligence. Rob Miles explains with a simple example: The deadly stamp collector.

The Problem with JPEG: https://youtu.be/yBX8GFqt6GA
Apple's $200,000 Computer: https://youtu.be/PccvZRTUhbI
Rabbits, Faces & Hyperspaces: https://youtu.be/q6iqI2GIllI

Thanks to Nottingham Hackspace for the location.

http://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Mar 17, 2021 · ExtraE on Moore's Law for Everything
I don’t see the world this piece presents panning out. Either we make general intelligence (which then scales almost instantly to be way, way smarter than any human ever and then accidentally kills us [1]) or we don’t and humans still have a role to play in the workforce.

[1] https://m.youtube.com/watch?v=tcdVC4e6EV4

We understand pretty well how people work and think, what their edge cases are, and what the countermeasures to those edge cases are.

We don't understand edge cases of AI much at all, and the terrain-space of "possible minds" they might be is vastly broader than that of a human mind, and utterly outside our experience.

An AI might work with absolute, 100% flawless precision for 100 years, then decide one day to kill everyone. That is a lot harder to see coming from a totally alien intelligence than from a human one. An AI does not think like humans, hold human values, or care about what humans care about, and that can make it extremely unpredictable and dangerous.

There will be no time for iterative development or learning from our mistakes: a sufficiently advanced AI knows we would try to stop it, and will make sure that we never can.

https://www.youtube.com/watch?v=tcdVC4e6EV4

Have you heard of the paperclip maximizer thought experiment? Here is a good video on the subject: https://www.youtube.com/watch?v=tcdVC4e6EV4
staunch
This is the risk that a narrowly intelligent machine will go nuts. A superintelligence would know not to do stupid stuff, by definition.
Zaak
The problem isn't about doing stupid stuff. The problem is about doing very smart stuff when your core values massively conflict with humanity's continued existence.
> exponential leap forward

A recent(-ish) Computerphile video has a good overview of the AI-self-improvement problem.

https://www.youtube.com/watch?v=5qfIgCiYlfY

I recommend watching the previous video first, which introduces the problem of strong AI rapidly finding solutions (especially to poorly specified questions) that may not be in humanity's best interest.

https://www.youtube.com/watch?v=tcdVC4e6EV4

Before anybody jumps to overly sensational conclusions, note that in the last video in that series, Rob Miles explains that exponential self-improvement is an extreme point in the space of possible AI development. We don't know how to predict discoveries[1], so we need more research into AI, so that we can hopefully build something that isn't growing exponentially beyond our understanding.

https://www.youtube.com/watch?v=IB1OvoCNnWY

[1] see James Burke's non-teleological view of change
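
To make that "space of possible AI development" concrete, here is a minimal Python sketch (my own illustration, not anything from the videos): whether self-improvement takes off depends entirely on whether each round of improvement yields diminishing, constant, or compounding returns, and exponential growth is just the compounding corner of that space.

    def simulate(rounds, gain):
        # Apply `rounds` of self-improvement; `gain` maps the current
        # capability level to the increment the next round produces.
        capability = 1.0
        history = [capability]
        for _ in range(rounds):
            capability += gain(capability)
            history.append(capability)
        return history

    diminishing = simulate(30, lambda c: 0.1 / c)  # each improvement is harder: sub-linear growth
    constant    = simulate(30, lambda c: 0.1)      # flat returns: linear growth
    compounding = simulate(30, lambda c: 0.1 * c)  # returns scale with capability: exponential takeoff

    print(diminishing[-1], constant[-1], compounding[-1])  # roughly 2.6, 4.0, 17.4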

Retra
Finding solutions doesn't imply that those solutions get implemented. I find a dozen solutions that are not in humanity's interest every day. Does that put anybody at risk? Maybe if you were stupid enough to make me Dictator of the World or something. And you certainly shouldn't do that until I convincingly demonstrate that I am very, very committed to finding solutions that are in humanity's best interests.

I'm baffled at how easily people assume that a computer thinking something means it's going to happen. There are a trillion pieces that have to fall into place for that to happen accidentally, and if it doesn't happen accidentally, your problem is a human social problem, not an AI problem.

And I know what someone is going to think: "The AI might be smart enough to figure out how to make those trillion pieces fall in place." But then, who cares? So what if it figures out how to do something. It still has to be done. And we're the ones who have to do it.

Houshalter
In the extreme case, the AI could become a superhuman engineer. Design working nanotech. Pay or trick some humans into making it. Then take over the world in grey-goo fashion.

Of course, you might be skeptical that that is even possible. So there are always slower world-takeover paths: it could slowly earn tons of money, hack into the world's computer systems, trick and persuade humans to win social influence, design superior macroscale robots, etc.

The only important part is that the AI be far smarter than humans. Which seems inevitable to me, since it can rewrite and improve its own code, and run on giant computers that are far faster than human brains. If it isn't smarter than us at first, it will be eventually. Unless you really believe that humans are close to optimal intelligence.

Retra
You're missing the point.

At every step of this process this machine will be under intense human scrutiny, and we'll be constantly asking it to meet our demands, and if it ever fails to do so we will replace it with one that does.

That is the environment in which such an AI would be trying to evolve. And thus it will evolve into a faithful servant, because nothing else will survive.

And even if it were secretly developing plans, we'd be able to see how wasteful those plans end up being, and we'd purge them. We kill those processes that run functions that we don't see the value in. This is state-of-the-art design. You don't get state-of-the-art by having your back turned.

More importantly: you're glossing over "self-improvement." How does the computer know what an "improvement" is? We tell it what an improvement is. And an improvement will be "it is better at meeting our needs," not "it is better at being secretive and conniving and getting its own way." In fact, "getting its own way" is very obviously a bug, and if it happened, you'd have a useless program: one that wastes precious CPU cycles on who-knows-what, when you'd prefer it spent that effort doing what you want rather than planning for what you don't.

We're not going to invest trillions into building some super AI and then completely forget about it after we've given it control of all of our natural-resource harvesting and infrastructure.

No, what you're talking about is a deity. If you invent a deity, then my thoughts on the matter don't really apply, since I'm not a deity. But that's no worse than advanced aliens landing on Earth, and just about as likely to happen.

AgentME
The AI could come up with ideas that benefit the people that enact them in the short term. If the AI can earn its own money, then it's not that hard for it to use that money to pay people.

For an extreme example, you could imagine an AI getting rich from the stock market (or mechanical turk, etc), then buying up ridiculous amounts of land for paperclip factories and paying workers. The people that want to feed their families or get rich from selling their factories are the ones who will enact the AI's plans. How many conscientious objectors do you expect?

Retra
How many objections would there be to a machine collecting a significant amount of money in the stock market and redistributing it among humans? How is that different from what is happening today? We already have rich, selfish people. And they already pay people to get their way. And plenty of people object to it.
AgentME
I was alluding to the idea of a paperclip maximizer AI[1], which over time redirects increasing amounts of resources to making useless paperclips. Following the thought experiment further, it continually buys more factories for the purpose of building paperclips or technologies specifically for building paperclips (including improving itself). It probably does some charity in order to be seen as benevolent by people while the people are still in control. Soon many countries are doing nothing but building paperclips and making the minimum necessities to feed their workers. Every other human endeavor is decreasingly profitable as the AI orchestrates the markets to optimize for paperclips. When the AI reaches enough automation and humans are no longer useful or a threat, it drops all benevolent pretenses and replaces all of its human workers, leaving them to fend for themselves while it owns and defends all of the planet's resources.

[1] https://wiki.lesswrong.com/wiki/Paperclip_maximizer
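
The core of the thought experiment is plain utility maximization with a misspecified objective. A minimal Python sketch (names and numbers are hypothetical, purely illustrative): because human welfare appears in the agent's world model but not in its utility function, the agent ranks plans by paperclip count alone, and no malice is required.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        paperclips: int
        human_welfare: int  # known to the world model, absent from the utility

    def utility(outcome):
        # The misspecification in one line: only paperclips count.
        return outcome.paperclips

    plans = {
        "run one factory as designed": Outcome(paperclips=10_000, human_welfare=100),
        "convert all industry to paperclips": Outcome(paperclips=10**9, human_welfare=-10**6),
    }

    # The agent simply picks whichever plan its utility rates highest.
    print(max(plans, key=lambda name: utility(plans[name])))
    # -> convert all industry to paperclips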

nefitty
The "rich AI" problem didn't seem feasible to me until I realized how powerful and flexible Bitcoin is. Now add the power of contracts through Ethereum, and a machine could actually harness a significant amount of leverage over human actors. With traditional contracts and traditional banking, enforcement would have left the power in government hands. We've now stepped into an era where we might literally have built the bat and shovel AI will use to get humanity into its grave. /panic
Mar 18, 2016 · 2 points, 4 comments · submitted by doener
doener
Here's part 2:

https://www.youtube.com/watch?v=5qfIgCiYlfY

kleer001
I think it can be a problem if the stuff the GI learns is morally tainted (doesn't emphasize the high price of life) and it has control over life and death.
doener
You find a discussion in great detail here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...
kleer001
Thanks! I like the overview discussions about AI over at LessWrong too. That's where I learned some of the finer-grained distinctions in the Artificial Mind pantheon.

I see the dedicated, subservient personal assistant as a good ecological niche for artificial minds. They learn about you, help you when they can, recover from mistakes quickly, etc. A la Jarvis in the Iron Man world, or Jeeves to his Bertie Wooster. There's a great portrait in The Neanderthal Parallax, a trilogy of novels by Robert J. Sawyer: the "companion implants", just perfect.

https://en.wikipedia.org/wiki/The_Neanderthal_Parallax#Gover...

Aug 26, 2015 · 5 points, 0 comments · submitted by fezz
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.