HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on Superintelligence

Nick Bostrom, Napoleon Ryan · 3 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Superintelligence" by Nick Bostrom, Napoleon Ryan.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful—possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? This profoundly ambitious and original audiobook breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
HN Books Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
Jan 22, 2021 · fossuser on Still alive
AGI = Artificial General Intelligence, watch this for the main idea around the goal alignment problem: https://www.youtube.com/watch?v=EUjc1WuyPT8

They're explicitly not political. LessWrong is a website/community, and rationality is about trying to think better by being aware of common cognitive biases and correcting for them. It's also about trying to make better predictions and understand things better by applying Bayes' theorem when possible to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).
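
To make the Bayes' theorem point concrete, here is a minimal worked sketch of a single update. Every number in it is invented purely for illustration.

```python
# Minimal sketch of a single Bayesian update; all numbers are made up.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start 20% confident in a claim, then observe evidence that is three times
# as likely if the claim is true as if it is false.
posterior = bayes_update(prior=0.20,
                         p_evidence_if_true=0.75,
                         p_evidence_if_false=0.25)
print(round(posterior, 3))  # 0.429 -- update toward the claim, but not to certainty
```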

It's about trying to understand and accept what's true no matter what political tribe it could potentially align with. See: https://www.lesswrong.com/rationality

For more reading about AGI:

Books:

- Superintelligence (I find his writing style somewhat tedious, but this is one of the original sources for a lot of the ideas): https://www.amazon.com/Superintelligence-Dangers-Strategies-...

- Human Compatible: https://www.amazon.com/Human-Compatible-Artificial-Intellige...

- Life 3.0: a lot of the same ideas, but written at the opposite stylistic extreme from Superintelligence, which makes it more accessible: https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...

Blog Posts:

- https://intelligence.org/2017/10/13/fire-alarm/

- https://www.lesswrong.com/tag/artificial-general-intelligenc...

- https://www.alexirpan.com/2020/08/18/ai-timelines.html

The reason the groups overlap a lot with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He's also formalized a lot of the thinking around the goal alignment problem and the existential risk of discovering how to create an AGI that can improve itself without first figuring out how to align it with human goals.

For an example of why this is hard: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden... and probably the most famous example is the paperclip maximizer: https://www.lesswrong.com/tag/paperclip-maximizer
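
For readers who want a concrete picture of the misalignment these posts describe, here is a minimal toy sketch. The actions, scores, and side effects are all invented for illustration; the point is only that an optimizer sees the objective it was given, not what we meant.

```python
# Toy sketch of goal misspecification: the optimizer maximizes the proxy
# objective it was given, not the intended one. All values are invented.

actions = {
    # action: (paperclips_made, harm_to_everything_else)
    "run the factory normally":       (100, 0),
    "convert the warehouse to clips": (500, 50),
    "convert everything to clips":    (10_000, 1_000_000),
}

def proxy_objective(outcome):
    clips, _harm = outcome
    return clips                      # what we actually wrote down

def intended_objective(outcome):
    clips, harm = outcome
    return clips - harm               # what we meant, but never specified

print(max(actions, key=lambda a: proxy_objective(actions[a])))
# -> "convert everything to clips"
print(max(actions, key=lambda a: intended_objective(actions[a])))
# -> "convert the warehouse to clips"
```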

vmception
Great yeah that sounds like something I wish I knew existed

It's been very hard to find people who can separate their emotions from an accurate description of reality, even when that description sounds like it belongs to a different political tribe. Or rather, people are quick to assume you're part of a political tribe if some of your words don't match their tribe's description of reality, even if what was said was the most accurate.

I’m curious what I will see in these communities

fossuser
I recommended some of my favorites in another comment: https://news.ycombinator.com/item?id=25866701

I found the community around 2012 and I remember wishing I had known it existed too.

In that list, the LessWrong posts are probably what I'd read first since they're generally short (Scott Alexander's are usually long) and you'll get a feel for the writing.

Specifically this is a good one for the political tribe bit: https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of...

As an aside about the emotions bit, it’s not so much separating them but recognizing when they’re aligned with the truth and when they’re not: https://www.lesswrong.com/tag/emotions

You're not the only one who finds it scary, as there are massively popular books on the topic.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

fossuser
I found it difficult to make it through the first couple of chapters.

Having just read The Gene, I found his analysis of the artificial selection route to superintelligence very wrong: it underestimates the complexity of polygenic traits and makes (likely inaccurate) assumptions about their heritability. This type of thing has a dangerous history.

The idea that a superintelligence could just emerge from the internet, without much explanation, also seemed pretty weak, but the first issue was bad enough that I found it hard to take anything else seriously (I didn't trust the author's analysis).

There are interesting issues with artificial consciousness, but I think they're in some ways similar to the issues with biological consciousness: the data the neural net is exposed to and its underlying model can lead to minds that wouldn't be considered intelligent (and to dangerous outcomes as a result).

briga
I would suggest giving it another go. There are few authors who have given this issue as much thought as Bostrom, and if the conclusions he draws are false, at the very least that opens the door for further conversation about the subject.
The purpose of the profile isn't to argue a risk exists. We largely defer to the people we take to be experts on the issue, especially Nick Bostrom. We think he presents compelling arguments in Superintelligence, and although it's hard to say anything decisive in this area, if you think there's even modest uncertainty about whether AGI will be good or bad, it's worth doing more research into the risks.

If you haven't read Bostrom's book yet, I'd really recommend it. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

manish_gill
I read this book. Got about 4 chapters in before I had to give up at the sheer ridiculousness of the whole thing. The problem with this entire line of reasoning is that at this point it is nothing more than a thought experiment. Many of the key underlying assumptions required for Artificial General Intelligence simply have not been realised, and while Weak AI is progressing at a strong rate, we are hardly anywhere close to a place where we should start worrying about all this stuff.

It's about as likely as a meteor hitting the planet and wiping out all human life. Possible? Sure. Should I be panicking about this right now? Nah.

The book is nonsense. I'll start paying attention when someone with real experience in the field of AI research (and I'm not talking about charlatans like Yudkowsky here, but someone like, say, Norvig) comes out and says it's a reasonable concern today.

derefr
> we are hardly anywhere close to a place where we should start worrying about all this stuff

Here's a comparison: wouldn't it be great if we had started thinking about climate change way back at the beginning of the industrial revolution, before we decided to create tons of open-air coal plants?

There are reasons to solve problems before they're problems.

manish_gill
While I agree that there should most definitely be general research in the area, and it is of course something worth exploring, what I don't agree with is the hype being built around it. Hollywood has played its part of course, but people who should really know better seem to be under the impression that as soon as we achieve AGI (which is a big /if/), we're doomed immediately. They ignore that all of this is still academic, and while industry tends to pick up academic advances sooner than usual these days, it's not like one day a scientist creates AI and the next day Skynet launches nuclear missiles.

So is it worth exploring? Sure. But I'm not going to be concerned about something for which there is zero evidence to support it.

There are no reasons to solve problems before there is foolproof evidence that they are indeed problems.

BenjaminTodd
The expert consensus says 10% chance of human-level AI in 10 years: http://www.givewell.org/labs/causes/ai-risk/ai-timelines

Many computer science professors have publicly said they think AI poses significant risks. There's a list here: http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...

Also see this open letter, signed by hundreds of experts: http://futureoflife.org/ai-open-letter/

manish_gill
Your last link, the open letter, says nothing about human-level or greater-than-human-level AI, just "robust and beneficial usage" of AI. In all likelihood that means current AI technology, and the letter is aimed at (I'm assuming) people who are trying to use these techniques in things such as modern weapon systems. While that is of course a concern, it's not the same as the concern about a Skynet-like scenario.

Some experts in the second link you gave are concerned, sure. But I can probably find an equal number who dismiss it as well. There isn't a clear consensus over AGI. I still remain skeptical. Same with your first link, which tries to "forecast" AGI. People can't forecast next month's weather correctly, so forgive me for not believing in a 10% chance in 10 years.

Actually, I take back my appeal to authority argument in its entirety, because I just remembered the first thing I saw in my AI class was a video of experts claiming the exact same thing. The video was from the 50s.

EDIT: Found it: https://www.youtube.com/watch?v=rlBjhD1oGQg

BenjaminTodd
The letter isn't (just) about modern weapon systems. It was put together by this group: http://futureoflife.org/ai-news/

Also, no one is worried about a Skynet scenario. The worrying scenario is just any powerful system that optimises for something that's different from what humans want.

Second, the point is that even uncertainty is enough for action. For AI to not be a problem, you'd need to be very confident that it'll occur a long way in the future, and that there's nothing we can do until it's closer. As you've said, we don't have confidence in the timeline. We have large uncertainty. And that's more reason for action, especially research.

Consider analogously:

"We've got no idea what the chance of run-away climate change is, so we shouldn't do anything about it."

Seems like a bad argument to me.
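
One way to see why uncertainty alone can justify action is a back-of-the-envelope expected-cost comparison. Every number below is invented purely to show the structure of the argument, not to estimate anything about reality.

```python
# Illustrative expected-cost comparison; all numbers are assumptions.

p_catastrophe       = 0.01   # assumed small chance of a very bad outcome
cost_of_catastrophe = 1e6    # assumed cost if it happens (arbitrary units)
cost_of_research    = 100    # assumed cost of doing safety research now
risk_reduction      = 0.10   # assumed fraction of the risk the research removes

expected_loss_without = p_catastrophe * cost_of_catastrophe
expected_loss_with    = (p_catastrophe * (1 - risk_reduction) * cost_of_catastrophe
                         + cost_of_research)

print(expected_loss_without)  # 10000.0
print(expected_loss_with)     # 9100.0 -- cheaper in expectation, even at 1% risk
```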

mxfh
> The worrying scenario is just any powerful system that optimizes for something that's different from what humans want.

Like AI-based HFT? Which humans want what? I'm pretty sure that if there's one constant, it's that there will always be some humans on either side of the argument.

andreyf
> Also, no one is worried about a Skynet scenario. The worrying scenario is just any powerful system that optimises for something that's different from what humans want.

So Transcendence in lieu of The Terminator. It's still Hollywood fiction.

Moshe_Silnorin
Global warming will not happen because The Day After Tomorrow is a movie. 12 Monkeys is a movie therefore weaponized biotechnology is without risk. "Hollywood made a movie vaguely like this" isn't much of an argument against anything.
andreyf
Don't straw man me, bro.

My point is that the entire idea of a buggy AI destroying humanity is absolutely divorced from the reality of AI research. It's not unimaginable to the layperson, but there is simply no way of getting there from the neural networks we're making now.

To begin, can you go ahead and explain what self-awareness means in terms of a neural network you are training?

manish_gill
Except, in the case of climate research, we have a plethora of evidence of the possible harmful effects and we can see it happening today. Everything about the potential harmful effects of AI is pure conjecture, because there is no human-level general intelligence AI system that exists today. It's, once again, something philosophers like Bostrom will make a career out of.

I'm 100% with Torvalds on this when he laughs at the preposterous notion that AI will become a doomsday scenario. I think it'll become more and more specialised, branch out to other fields, and become reasonably good. But there's a huge leap from there to HGI.

> any powerful system that optimises for something that's different from what humans want.

Except this notion rests on the premise that humans will not be in full control, which leads to the exponential growth argument, which leads back to the Skynet-like scenario.

> the point is that even uncertainty is enough for action

And that action is...what exactly? People won't stop building intelligent systems. There is no real path from where we are to HGI, so it's not like researchers have a concrete roadmap. Just what exactly does this research look like?

> "We've got no idea what the chance of run-away climate change is, so we shouldn't do anything about it."

Extremely poor analogy. We have decades' worth of concrete data that tells us the nature and reality of climate change. We demonstrably know that it's a threat. Can you say the same about AI?

Also, I'll take the time to reiterate how heavily skeptical I remain of groups like MIRI that are spearheaded by people who don't believe in the scientific method, believe in stuff like cryonics, have a history of trying to profit off of someone else's copyrighted material, and have somehow managed to convince a whole lot of people that donating to them is the best way to fight off the AI doomsday scenario. People should do their research before linking to stuff like that. :(

derefr
> And that action is...what exactly?

I would guess A. the development of "known safe" high-level primitives, and B. a coupling of education in the craft of AI with education in the engineering of AI.

The "profession" of developing strong, general-purpose AI should basically look like a cross between cryptography and regular old capital-e-Engineering: like crypto, you'd constantly hear important things about "not rolling your own" low-level AI algorithms; and, like Engineering, the point of the job would be making sure the thing you're building is constructed so that it doesn't "exceed tolerances."

The research goals in the field of "friendly AI" are thus twofold:

• work in understanding possible computational models for minds, to understand what sort of tolerances there can be—what knobs and dials and initial conditions each type of mind comes with;

• and work in development of safe high-level primitives for constructing minds.

Both of these are the subjects of active papers. Note that these can both also be described as plain-old "AI research"; they're not just abstract philosophy, these papers are steps toward something people can build. The research within the subfield of "friendly" AI just has different criteria for what makes for a "promising avenue of research", e.g. ignoring black-box solutions because there are no knobs for humans to tweak.

> MIRI ... spearheaded by

MIRI is two things, and it's best to keep them mentally separate. It's a research organization, like Bell Labs—and it's a nonprofit foundation that funnels money into that research organization.

Yudkowsky is the director (head cheerleader) of the nonprofit. He doesn't really touch the research organization. The research org will succeed or fail on its merits (mostly whether it hires good researchers), but the leadership of the nonprofit has not much to do with that success or failure, any more than AT&T had any impact on the success or failure of Bell Labs. You can believe in the research org known as MIRI even if you actively distrust Yudkowsky.

apsec112
"have history of trying to profit off of someone else's copyrighted material"

That's quite a strong claim - could you provide evidence for it? Are you referring to HPMOR? HPMOR has always been given away for free, both online and in print, and J.K. Rowling has explicitly allowed non-commercial Harry Potter fanfics.

manish_gill
He was eventually dissuaded because a vocal majority of HP fans were against it (because JKR has only allowed non-commercial use). But here is the original announcement, which I found on a forum:

The Singularity Institute for Artificial Intelligence, the nonprofit I work at, is currently running a Summer Challenge to tide us over until the Singularity Summit in October (Oct 15-16 in New York, ticket prices go up by $100 after September starts). The Summer Challenge grant will double up to $125,000 in donations, ends at the end of August, and is currently up to only $39,000 which is somewhat worrying. I hadn't meant to do anything like this, but:

I will release completed chapters at a pace of one every 6 days, or one every 5 days after the SIAI's Summer Challenge reaches $50,000, or one every 4 days after the Summer Challenge reaches $75,000, or one every 3 days if the Summer Challenge is completed. Remember, the Summer Challenge has until the end of August, after that the pace will be set. (Just some slight encouragement for donors reading this fic to get around to donating sooner rather than later.) A link to the Challenge and the Summit can be found in the profile page, or Google "summer singularity challenge" and "Singularity Summit" respectively.

BenjaminTodd
I'm not saying we don't know whether climate change poses a tail risk or not (it obviously does). I'm just saying that claiming uncertainty isn't a good reason to avoid action.

In general, if there's a poorly understood but potentially very bad risk, then (a) more research to understand the risk is really high priority (b) if that research doesn't rule out the really bad scenario, we should try to do something to prevent it.

With AI, unfortunately, waiting until the evidence of harm is well established is not possible, because by then it could be too late.

What AI risk research could involve is laid out in detail in the link.

argonaut
"poorly understood but potentially very bad risk" is something you could say about the risk of an alien invasion.
HN Books is an independent project and is not operated by Y Combinator or Amazon.com.