HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety

Eric Schlosser · 19 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety" by Eric Schlosser.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
The Oscar-shortlisted documentary Command and Control, directed by Robert Kenner, finds its origins in Eric Schlosser's book and continues to explore the little-known history of the management and safety concerns of America's nuclear arsenal.

A myth-shattering exposé of America's nuclear weapons. Famed investigative journalist Eric Schlosser digs deep to uncover secrets about the management of America's nuclear arsenal. A groundbreaking account of accidents, near misses, extraordinary heroism, and technological breakthroughs, Command and Control explores the dilemma that has existed since the dawn of the nuclear age: How do you deploy weapons of mass destruction without being destroyed by them? That question has never been resolved, and Schlosser reveals how the combination of human fallibility and technological complexity still poses a grave risk to mankind. While the harms of global warming increasingly dominate the news, the equally dangerous yet more immediate threat of nuclear weapons has been largely forgotten.

Written with the vibrancy of a first-rate thriller, Command and Control interweaves the minute-by-minute story of an accident at a nuclear missile silo in rural Arkansas with a historical narrative that spans more than fifty years. It depicts the urgent effort by American scientists, policy makers, and military officers to ensure that nuclear weapons can't be stolen, sabotaged, used without permission, or detonated inadvertently. Schlosser also looks at the Cold War from a new perspective, offering history from the ground up, telling the stories of bomber pilots, missile commanders, maintenance crews, and other ordinary servicemen who risked their lives to avert a nuclear holocaust. At the heart of the book lies the struggle, amid the rolling hills and small farms of Damascus, Arkansas, to prevent the explosion of a ballistic missile carrying the most powerful nuclear warhead ever built by the United States.

Drawing on recently declassified documents and interviews with people who designed and routinely handled nuclear weapons, Command and Control takes readers into a terrifying but fascinating world that, until now, has been largely hidden from view. Through the details of a single accident, Schlosser illustrates how an unlikely event can become unavoidable, how small risks can have terrible consequences, and how the most brilliant minds in the nation can only provide us with an illusion of control. Audacious, gripping, and unforgettable, Command and Control is a tour de force of investigative journalism, an eye-opening look at the dangers of America's nuclear age.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
You might want to read Command and Control, a book about the terrifying history of near misses with nuclear weapons. It may well be true that nuclear weapons have reduced hot wars. But it can equally be true that had a few small things gone a different way, the world could have been subjected to an even more horrible nuclear war. And this is still very much the situation.

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

DennisP
Also worth reading: Daniel Ellsberg's book The Doomsday Machine.

Ellsberg's job was nuclear strategy, and most of the classified documents he snuck out of his office were related to that. He published the Pentagon Papers first because he felt that if he started with the nuclear papers, nobody would even care about the Vietnam stuff.

He gave the nuclear papers to his brother, who hid them in a garbage bag on the edge of the town dump. Then a storm washed away that whole corner of the dump. They spent a year trying to find them and finally gave up. Ellsberg wrote that his wife considered that a miracle from God, because he got a pass on the Vietnam leak but would certainly have spent the rest of his life in prison for the nuclear papers.

Until recently he didn't talk about this stuff because he couldn't substantiate it, but now, enough has been declassified that he could back up his claims, which are horrific. US nuclear strategy in the '50s and '60s included the destruction of every city over 25K people in Russia and China in response to surprisingly minor conventional provocation, and acceptance of the death by radiation of everyone in Europe.

An especially startling point was that the authority to launch nukes was not just at the top. Theater commanders could do it on their own. According to Ellsberg that is still the case today.

Have a read of Eric Schlosser's book, Command and Control

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

The ways we could end up in a nuclear war are frightening to the point of madness.

harryvederci
... and yet we haven't.

Every day since WW2, things could have gone wrong.

Every day, for 76.56 years. 918.79 months. 27965 days.

I think we'll be fine. And if I'm wrong, you probably won't be able to tell me! :)

Edit: not sure (and don't really care) why this got downvoted, but I meant it as a morale boosting thing, not as a probability calculation. Things could have gone horribly wrong so many times, but they haven't so far. That personally gives me hope for the future.

dpierce9
Surely the numerator is at least two, which halves your odds. It is a bit motivated to start counting after they were last used rather than simply dividing uses by days available to use.

Either way, this is frankly an insane way to reason. Nuclear weapons use doesn't behave according to some law-like stochastic process.

abvdasker
I hope you're right, but this feels like an instance of the hot hand fallacy. In a historical sense, nuclear weapons haven't existed for very long. 76.56 years without a nuclear attack is less than the life expectancy of one US adult. It's too soon to tell.
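
As a rough illustration of the kind of estimate being debated here, a minimal sketch using Laplace's rule of succession; the day count is the thread's own figure, and, as dpierce9 notes, treating nuclear use as a fixed-rate process is itself a questionable assumption:

    # Naive per-day probability of nuclear use via Laplace's rule of
    # succession: p = (successes + 1) / (trials + 2). Illustration only;
    # nuclear use is not a fixed-rate stochastic process.
    days_since_ww2 = 27965   # figure quoted in the thread
    uses = 2                 # Hiroshima and Nagasaki

    p_daily = (uses + 1) / (days_since_ww2 + 2)
    p_decade = 1 - (1 - p_daily) ** 3652   # chance of at least one use in 10 years

    print(f"naive daily probability:   {p_daily:.2e}")
    print(f"naive 10-year probability: {p_decade:.1%}")

With those inputs the naive 10-year figure comes out around 30%, which cuts against the "we'll be fine" reading of the same data.
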
busyant
> The ways we could end up in a nuclear war are frightening to the point of madness.

Haven't read the book, but I watched Dr. Strangelove when I was young(er).

What made the movie great to me was that it made me realize that, although there is a nuclear "safety net", it was designed and controlled by humans — flawed humans who don't always behave rationally and who may not act in the best interests of humanity.

moffkalast
It is not only possible for humans to behave irrationally, it is essential!
I'm reading Eric Schlosser's Command and Control right now (1). Housing becomes quite elastic in the face of bombing. Fred Iklé actually developed a formula based on WW2 Germany for RAND:

Fully compensating increase in housing density = (P1 - F) / (H2) - (P1 / H1)

* P1 = Population of city before destruction

* P2 = Population of city after destruction

* H1 = Housing units before destruction

* H2 = Housing units after destruction

* F = Fatalities

The tipping point seemed to be reached when about 70% of a city's homes were destroyed. That's when people began to leave en masse and seek shelter in the countryside.

(1) https://www.amazon.com/Command-Control-Damascus-Accident-Ill...
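
Iklé's formula above is easy to compute directly; a minimal sketch in Python, where the city figures are invented for illustration, not taken from the book:

    # Ikle's "fully compensating increase in housing density", as quoted above:
    # (P1 - F) / H2 - P1 / H1, i.e. the extra persons per surviving housing
    # unit needed to shelter the surviving population.
    def compensating_density_increase(p1, h1, h2, fatalities):
        return (p1 - fatalities) / h2 - p1 / h1

    # Hypothetical city: 500k people, 125k homes, 60% of homes destroyed,
    # 20k fatalities. These numbers are invented for illustration.
    extra = compensating_density_increase(p1=500_000, h1=125_000,
                                          h2=50_000, fatalities=20_000)
    print(f"{extra:.1f} extra persons per surviving housing unit")

With those invented figures, each surviving housing unit would need to absorb about 5.6 extra people.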

Command & Control[0] details the stupid mistakes very well. There were many, many glitches.

[0] https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

On the contrary, the USA has accidentally dropped nuclear weapons from planes, and has experienced all kinds of other incompetence over the decades.

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

bostonsre
There are quite a few more safeguards in place now to prevent accidentally letting a missile reach over 30k feet above US soil.
Obligatory recommendation of the riveting book Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

ubermonkey
Glad I'm not the only one who immediately came into this thread to rec that book. It's fascinating and TERRIFYING.
Another similar book is Command And Control by Eric Schlosser.

It goes into a lot of detail about how nuclear weapons and their control systems evolved. Covers a number of accidents and near apocalypses along the way. One of my favorite books of the last decade, informative and very readable at the same time.

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

iab
Amazing book, although terrifying. In a similar vein is "The Doomsday Machine", Daniel Ellsberg. Reading this in conjunction with C&C will truly make you appreciate the insane run of luck we have had as a species in avoiding nuclear catastrophe (so far).
iab
Just came across this as well on today's fp:

https://futureoflife.org/background/nuclear-close-calls-a-ti...

If you take Command and Control[0] to be an authoritative source, then no, I don't believe so. We've come close to accidentally launching on both sides, but no combat deployment has occurred; if it did, it would likely be the end of our current civilization.

[0] https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

Command and Control by Eric Schlosser[0] was a fascinating (and chilling) read about many of the incidents people have mentioned in the comments here. It focuses closely on the Damascus incident at Launch Complex 374-7, but covers many, many other “Broken Arrow”[1] incidents that have occurred.

It's not a short read, but it's eye-opening from the engineering perspective that nuclear arsenals are wildly complicated beasts with ongoing maintenance, like any machine.

EDIT: It's also available as a documentary on Netflix[2]. Not as in-depth, but it covers the Damascus incident pretty well.

[0] https://www.amazon.com/dp/0143125788

[1] https://en.wikipedia.org/wiki/United_States_military_nuclear...

[2] https://www.netflix.com/title/80107656

phasetransition
Eric is the author of the linked article
teeray
Heh, so he is! Totally missed that. The writing style (and subject matter) brought him back to mind.
Eric Schlosser's Command and Control provides accounts of many 'close calls' in the US nuclear program:

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

Highly suggest reading Command and Control by Eric Schlosser (author of Fast Food Nation). Terrifying but amazing book: https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

I'm really not sure why we're still alive.

kqr2
There is also a documentary which includes some reenactments of the incident:

http://www.commandandcontrolfilm.com/

AdamJacobMuller
Wow, I had no idea. Thank you.
throwaway7645
Watched this recently. We almost lost Little Rock & most of Arkansas along with western Tennessee (Memphis). The documentary was horrifying.
leeoniya
also

https://en.m.wikipedia.org/wiki/1961_Goldsboro_B-52_crash

acqq
Yes, there were two bombs when the plane broke up. One bomb, from its internal perspective, had a "fully normal drop": it had a switch designed to trigger only once the bomb actually left the plane through the expected door, and the plane broke apart in exactly such a way that, from the bomb's perspective, it simply wasn't an accident but a "do it." On that bomb, the only switch that wasn't "on" was the one a crew member was supposed to pull prior to the drop.

On the other bomb, from whose point of view the dropping sequence was less "normal" (the breakup of the plane made a less clear-cut case from its perspective, so some internal components didn't activate), that same switch was discovered in the "on" position.

So it was a really, really close call.

(A "small" curiosity: these are two-stage bombs, in which one weaker nuclear explosion triggers the next, stronger one. The "weaker" stage of one of the bombs, with radioactive content sufficient for a nuclear explosion, is still there(!) deep in the ground.)

flashman
> We almost lost Little Rock & most of Arkansas

Hardly. For starters, I don't agree that the risk of nuclear explosion was high, given the electronic precision required to detonate a hydrogen bomb's explosive lenses correctly. And had the W53 warhead exploded at its full nine-megaton potential, Nukemap[1] predicts that 3rd-degree burns would not have extended even as far as Conway, and estimates 11,000 casualties, of which 2,500 fatal (probably far fewer with a 1980 population).

The state would have faced a greater threat from fallout, but considering the advance warning of a detonation, lack of damage to infrastructure outside the immediate blast zone, and general cold war readiness for the effects of a nuclear explosion, casualties from fallout would have been far below the worst-case scenario. Two days' shelter while fallout radiation fell to 1% of its initial level would be easily achievable.

[1] https://nuclearsecrecy.com/nukemap/
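
A note on that two-day figure: fallout decay is commonly approximated by the Way-Wigner t^-1.2 law (the "7:10 rule of thumb"); a minimal sketch, taking the dose rate one hour after detonation as the reference:

    # Way-Wigner approximation for fallout decay: dose rate ~ t**-1.2,
    # with t in hours since detonation, normalized to the 1-hour rate.
    # Equivalently the "7:10 rule": 7x the time, ~1/10 the radiation.
    def relative_dose_rate(hours):
        return hours ** -1.2

    for t in (1, 7, 48):
        print(f"t = {t:2d} h: {relative_dose_rate(t):.4f} of the 1-hour rate")

At t = 48 h the rate is already below 1% of the one-hour reference, which is where the two-day figure comes from.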

nikanj
People won't even move close to a nuclear power plant, I don't think they'd be willing to move to a fallout zone.

In that sense, we would've lost Arkansas, turning it into a place with hardly any residents

acqq
> given the electronic precision required to detonate a hydrogen bomb's explosive lenses correctly

Please do explain that. As far as I understand, the explosives and the radioactive materials are all there in the warhead, the "precision" only affects the "quality" of the explosion (i.e. the number of "megadeaths" https://en.wikipedia.org/wiki/Megadeath )

Only the first few atom bombs had the missing part of the radioactive matter outside of the bomb when the bomb was fully unarmed, technically making the explosion impossible, as there isn't enough mass for an explosion without the missing part.

But since the early fifties, all the material needed for the nuclear explosion is always in the warhead.

Note that in the event discussed here (the explosion of a fully equipped missile with an H-warhead in Damascus, Arkansas, US)

https://en.wikipedia.org/wiki/1980_Damascus_Titan_missile_ex...

really nothing "special" happened: one of the technicians just accidentally dropped one metal socket during his maintenance work, that was all:

http://www.pbs.org/wgbh/americanexperience/features/bonus-vi...

And that was enough to cause the explosion of the rocket. Would any "expert" before that event have claimed that was all that was needed?

palmtree3000
The incident you linked to wasn't a nuclear explosion: the rocket the nuke was sitting on exploded. FTA:

"The W53 warhead landed about 100 feet (30 m) from the launch complex's entry gate; its safety features operated correctly and prevented any loss of radioactive material."

(I am not a nuclear engineer)

If you have a ball of fissile material, there are two sizes that are interesting to you. There's the point at which the material goes critical: on average, every neutron emitted causes the emission of more than one neutron. However, there are two types of neutron emission: prompt neutrons (released immediately when an incoming neutron breaks apart an atom), and delayed neutrons, released eventually by the decay of fission products. If you're producing one prompt neutron on average, then you're at prompt criticality.

Nuclear reactors are prompt-subcritical but delayed-critical. Because the exponential growth of neutrons in a delayed-critical material is relatively slow, it can be managed by futzing with control rods.

If undisturbed, a supercritical fissile mass will explode. The question is, how energetically? If it's delayed-critical, not very. The time scale characterizing the exponential growth is large compared to the time required for the explosion to propagate through the mass, so the mass will explode with not much more than the minimal amount of energy required to render it subcritical again. This is messy, but not what you think of when you imagine a nuclear explosion.

If the mass is prompt-critical, then the exponential growth is much faster. Even once the mass has released enough energy to explode, it still takes time for the mass to expand enough to be rendered subcritical. In that time, a prompt-supercritical mass will undergo many more generations, resulting in an actual nuclear explosion.

To get an actual nuclear detonation, then, you need to get your fissile material from subcritical to prompt-supercritical as fast as possible, so it doesn't predetonate unimpressively. This is a tricky task, requiring carefully shaped explosives, and pretty unlikely to happen by chance.
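
To put rough numbers on the timescale gap described above, a back-of-the-envelope sketch; the generation times and excess reactivity below are order-of-magnitude assumptions for illustration, not figures from the comment:

    import math

    # Back-of-the-envelope e-folding times for a barely supercritical mass.
    # Generation times and excess reactivity are order-of-magnitude
    # assumptions: ~10 ns per prompt-neutron generation in fissile metal,
    # ~10 s when growth is limited by delayed-neutron precursors.
    PROMPT_GEN_TIME = 1e-8    # seconds (assumption)
    DELAYED_GEN_TIME = 10.0   # seconds (assumption)
    EXCESS = 0.001            # extra neutrons per generation (assumption)

    def efold_time(gen_time, excess):
        # population ~ (1 + excess)**(t / gen_time), so one e-fold takes
        # gen_time / ln(1 + excess) seconds
        return gen_time / math.log(1 + excess)

    print(f"prompt-critical e-fold:  {efold_time(PROMPT_GEN_TIME, EXCESS):.0e} s")
    print(f"delayed-critical e-fold: {efold_time(DELAYED_GEN_TIME, EXCESS):.0e} s")

Roughly nine orders of magnitude separate the two, which is why control rods can keep up with a delayed-critical assembly but nothing mechanical can keep up with a prompt-critical one.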

acqq
> This is a tricky task, requiring carefully shaped explosives, and pretty unlikely to happen by chance

It's not "by chance," the warhead is carefully designed to make it and everything needed is already there, there are no missing parts. The "software" maybe doesn't activate the parts optimally or at all if there's luck but nothing is missing inside.

In the given case, "only" the rocket body exploded (with the fully functional warhead on it) even if it's not designed to do so, and only because of one single fallen small piece of metal (a single socket).

throwaway7645
No it wasn't a nuclear explosion, but it certainly could've been and the fallout would have been very bad. You're way oversimplifying it. Did you watch the documentary a few weeks ago? I'm just saying I did and they interviewed the people involved including the guy who dropped the socket. They also go over just how unsafe these facilities were and a couple of other incidents of even worse magnitude.
rtb
> I'm really not sure why we're still alive.

Anthropic principle / many-worlds interpretation? All the outcomes in which these accidents lead to nuclear war are no longer observable ;-)

That hasn't been historically true. It might be true for newer ones, but older designs could have gone off. There are also incidents where a bomb was armed accidentally, say by slipping while in a plane. Some of the "intentional" detonators were very simple and could be triggered by a surge.

Source: https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

marvin
Not only that, but designs thought to be "one-point safe" (a detonation starting at one point in the explosive lens will not cause a significant nuclear chain reaction) actually turned out to cause significant nuclear yield if detonated from one point. Same source.
I highly recommend reading https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

Turns out the answer to your question is simply: luck.

Jun 23, 2016 · rrggrr on A Stark Nuclear Warning
Despite three major armed conflicts fought between nuclear powers by proxy, none thankfully have resulted in nuclear war. Deterrence works. What doesn't work, and what grows more dangerous with nuclear proliferation, is failure in design, command, and control. The incredible and incredibly frightening book "Command and Control" (see link at bottom) highlights several near-catastrophic misses in the US nuclear arsenal. Now multiply by all nuclear states, and the risks of accident are terrifying. The world would do well to open source safeguards so that even rogue states (e.g. North Korea) can benefit from controls and processes that mitigate the risk of unintended nuclear detonation.

https://www.amazon.com/Command-Control-Damascus-Accident-Ill...

zalg0

> so that even rogue states can benefit

Dude, human factors like personality and mood quickly render safeguards moot within the atmosphere of a despotic dictatorship.

Open source any amount of safety protocols, and pray that they get used, and you'll acquaint yourself with disappointment.

People who concoct nuclear programs are toying with suicide, and their level of interest in safety can be understood by their decision to weaponize such a thing to begin with.

Meanwhile, cap such a program with a lone psychotic emperor, prone to fits of cruel and unusual punishment, and see what he does on a bad day, as his body ages, and he grows crankier with every waking morning. Now surround him with an echo-chamber of eager-to-please, hawkish, testosterone-fueled military upstarts.

Sounds like a pretty cool video game plot.

aswanson
Sounds like a trump presidency.
rrggrr
I'm no fan of Trump and I think it's thankfully unlikely he'll be the US President. But if he is President, there is this quote from Trump that suggests he has more self-awareness and emotional intelligence than his carefully crafted persona suggests.

"When I'm wounded, I go after people hard and I try to unwound myself."

If failed Presidents have had one thing in common, it's a failure to recognize their own limitations and failings before or after the fact. Again, no fan of the guy, but I think the scare-mongering is overblown.

mikeash
Does the fact that Trump knows he's an insecure, vengeful dickhead really improve things that much?
StillBored
I don't think the issue is whether Trump/Clinton will initiate a nuclear confrontation directly. The question is whether either of them has the qualities it takes to stand down if/when the US is attacked by an insignificant foe, rather than retaliating against someone with nukes.

Frankly, I don't think they are any different in that regard. Neither would be willing to accept the responsibility of being impeached/etc in order to avoid a larger confrontation. Second term Obama on the other hand probably would, I think he has changed in that regard. I say that because he seems to be doing a fine job of realizing that we can't solve the Syrian problem with more guns/bombs. See Libya, etc...

HillRat
I think you're being too charitable in your reading of that statement; that wasn't Trump acknowledging that he overreacts in an attempt to heal self-traumatization, but (in context) him justifying his behavior by saying that his actions are always in response to others' (unprovoked) attacks.

If you like, it seems a classic example of narcissistic rage: Trump invests so much psychological effort in maintaining his self-image, in the face of what appears to have been a childhood lacking normal parental social support, that any attempt to undermine that image is to him an existential threat that demands immediate and overwhelming aggression in response, until the targeted individual responds by agreeing to accept and thus reinforce what is a grandiose illusion.

Really, we haven't seen someone like this anywhere near the White House since late-stage Nixon (when Kissinger had firm control of the reins -- and, allegedly, had standing orders for military, DoD and State to ignore any orders from the President himself, particularly those regarding the nuclear codes). It's virtually impossible to predict his actions, because he's driven by an immediate need to assuage his sense of victimization.

rrggrr
Quite the opposite. Firstly, despotic regimes are by their nature control driven, one common characteristic of which is an inability to act without central directives from leadership. Iraq is a classic case. Despotic regimes persist for the purposes of preserving and enriching the despot, where even the intent to use nuclear arms in an age of deterrence is extremely counter-productive to that goal. North Korea, Russia and other nations demonstrate the futility of preventing proliferation. But there has been some success in educating fledgling nuclear states on safeguards (reportedly Pakistan). If you can't eliminate it, mitigate it as best you can and manage the risks.
narag
> one common characteristic of which is an inability to act without central directives from leadership

Yes, very paralyzing when there's a shortage of central directives. Not so much when the leadership is happy to show its initiative. What did you think of the innovation of executing high officials using a cannon?

zalg0
I am dubious. Mostly, because preserving and enriching a despot means catering to their whims.

A despot's pet nuclear project is likely to have an authentic red button (provided in any variety of interfaces, telephone, mechanical lock and key, fingerprint authenticated, whatever). If one imagines themselves as a despot, the ego almost demands it.

Between the despot, The Button, and the operational weapons, there are many layers of human factors. Maybe the red button is a placebo, and the underlings, being less suicidal than their singular leader, would never permit the button to perform its dreaded task, but is that assured?

You left out the worst incident I know of:

http://edition.cnn.com/2014/06/12/us/north-carolina-nuclear-...

The US was one arming switch away from nuking North Carolina in 1961. This and a bunch of other really scary nuclear-related accidents are covered in Command & Control: http://www.amazon.com/Command-Control-Damascus-Accident-Illu...

Command and Control, a historical account of failures in safety and other procedures with America's nuclear arsenal:

http://www.amazon.com/Command-Control-Damascus-Accident-Illu...

habitmelon
I'm reading that one right now too. Pretty intense. I usually make the assumption that people in other industries always have a level of expertise and proper practice that puts software to shame. However, hearing about the haphazard construction of early nuclear weapons and all the things that went wrong managing the Titan II complexes was really revealing.
jamesmishra
I just finished that book last night! It is a must-read for people in systems engineering. I learned a lot about what kind of errors can happen...
"I think the fears about “evil killer robots” are overblown. There’s a big difference between intelligence and sentience. Our software is becoming more intelligent, but that does not imply it is about to become sentient."

It is the period between sentience and "advanced" artificial intelligence that should be worrying for two reasons: First, it is close at hand. More importantly, the unintended consequences of tech are never well thought out in the initial stages of adoption.

I'm reading Eric Schlosser's book on the early days of nuclear weapons and the many, many near misses the US experienced as it adopted nuclear arms without much thought to risk management. I see parallels in the race to develop and deploy pre-sentient A.I. Link below to Schlosser's book.

http://www.amazon.com/Command-Control-Damascus-Accident-Illu...

jeffreyrogers
> It is the period between sentience and "advanced" artificial intelligence that should be worrying for two reasons: First, it is close at hand. More importantly, the unintended consequences of tech are never well thought out in the initial stages of adoption.

Pretty much every academic that I'm aware of who works in the field of AI has similar views to Ng, i.e., suggestions of an impending AI cataclysm are wildly overblown and don't fit the facts. The people predicting doom are those who for the most part aren't involved with current research and as such don't have a good perspective on the limitations of current techniques.

There is no magic in modern AI advances and there is no reason to expect that they'll yield anything like generalized intelligence. By the nature of the methods they work well in the specific domains they're trained on and poorly everywhere else.

soup10
The difference between a ball of uranium and a nuclear bomb is... a slightly bigger ball. Dangerous autonomous software is very easy to imagine. What if someone hid code in flight control software that activated at a given time and sent all planes flying into the nearest building? Software is only safe because we program it to be safe. The more capable and widespread software becomes, the more dangerous it becomes.
jeffreyrogers
Yes, but no physicist will tell you it's safe to leave uranium lying around. Virtually everyone working in AI, including the biggest names in the field, will tell you that the methods are very domain specific (they generalize, but you have to train them specifically for a given domain), and bear no resemblance to sentience as it is typically understood other than that they give rise to seemingly intelligent behavior. Which actually isn't so much intelligence as discovering a function mapping inputs to outputs.

And no one is arguing that software can't be dangerous. It clearly can, just like poorly constructed bridges or cars can be dangerous. But the idea of machines thinking for themselves is science fiction. There is no indication that we're anywhere close to that happening.

jsprogrammer
>The difference between a ball of uranium and a nuclear bomb is... a slightly bigger ball.

And a very precise detonation device strapped around said ball of uranium.

Software is not by itself dangerous. It only exists in the most abstract sense. What is dangerous is creating physical devices that have the ability to cause physical damage if manipulated in a certain way. Allowing those devices to be controlled autonomously through a software-hardware interface is where the danger from "AI" comes in.

MattHeard
Or if you plug the software into major financial trading hubs... oh...
jsprogrammer
Market should take care of it, no?
firebones
No, they'll just roll back selected transactions arbitrarily like they did in past flash crashes.

We're much closer to runaway biological threats than we are to AI-inspired ones. Or maybe that's my knowledge gap in the two domains speaking and we're equally far away in each.

chubot
I've been thinking about what I visualize as "plant" intelligence, in response to Elon Musk's statements about killer AI. It seems like Andrew Ng is saying the same thing.

The metaphor is that you could have "sedentary" superintelligences that solve a relatively narrow problem. They take in huge amounts of data, and spit out ingenious answers to well-defined problems. Things like traffic-routing, energy allocation, or photo/video recognition, etc. They are tools for humans.

By what mechanism do these superintelligences evolve into organisms motivated to extinguish humans? Reproduction, which leads to independent evolution. That process is inherently unpredictable.

But right now we're firmly in the stage of humans creating AI. Our AI is nowhere near capable enough to create more AI. And I would argue that there is no possibility of this happening by "accident"; you would have to specifically engineer AI to be able to create more AI.

I agree with Ng in that it's more of a distraction now than anything else. The jobs issue is a lot more pertinent.

It reminds me of this book I got at Google 8 or 9 years ago. This guy asked a robotics professor how they would avoid being killed by a robot. And the professor said: "Climb up one step" (on a staircase).

gojomo
"Our AI is nowhere near capable enough to create more AI."

Are you sure? Lots of machine-learning/optimization processes already work much like evolution.

How many people thought an email worm could bring down thousands of machines around the world in 1988?

http://en.wikipedia.org/wiki/Morris_worm

If one (or several) of the robots depicted in the below video wants someone dead, the "stair step" defense is not going to be enough:

https://www.youtube.com/watch?v=QVdQM47Av20

chubot
Well, I don't want to dismiss the possibility too quickly, since there appear to be researchers taking it seriously:

http://futureoflife.org/static/data/documents/research_prior... (section 3.4, Control, although the general skepticism is noted)

But I will say that there is a difference between feedback and unconstrained evolution. Machine learning mechanisms are feedback loops. But that doesn't mean they will break out of their own paradigm without human intervention.

Malware is actually a good example.

The Morris worm is a static piece of program text, invented by a human. It took advantage of a homogeneous and static environment to spread widely (lots of computers running the same program with the same vulnerability.) There's a huge difference between spreading in this environment and adapting / reproducing so as to spread in novel environments.

Think about Stuxnet. AFAICT, this was a completely novel solution to a problem that had never been seen before: jumping an air gap and destroying a piece of hardware via software.

What do you think is a more fruitful approach to designing something like Stuxnet?

1) Assemble the best and brightest minds from multiple domains, painstakingly build simulation environments based on the most plausible intelligence, test custom malware laboriously in those environments, etc.

2) Start with some existing malware (not unlike the Morris worm), and design a meta-algorithm to evolve it into something capable of destroying a nuclear plant from afar?

#2 is science fiction, as far as I can tell. The level of ingenuity required for a task like this is just qualitatively outside the domain of computers.

This is closer to what would be required for us to lose control: for AI to be designing AI, rather than humans designing AI. And keep in mind that AI designing AI is harder than AI designing malware.

I have seen the demos from Deep Mind where it learns to play video games without any knowledge of the rules. I don't claim to fully understand these techniques. (I have however worked in a research lab on robotics, and seen the vast gulf between what the media portrays and reality.)

But I would be interested if anyone knowledgeable in those techniques sees a realistic path from deep learning to something like #2.

billsimpson
Accidentally creating sentient AI with today's technology would be like accidentally creating a nuclear explosion by rubbing two sticks together.
djulius
Couldn't agree more. Every time there is some advance in so-called "AI", the killer-robot fears strike back.

These are just advances in doing things humans can do, e.g. recognizing objects in pictures or driving a car. Deep learning has been known for a long time; the computations were just too slow.

Fundamentally nothing has changed; we're no closer to an "intelligent machine".

PakG1
What I wish I'd see more of is a fear of AI even without general intelligence. When you combine autonomy with AI, even stupid AI, you can still have bad results and significant collateral damage. See Knight Capital's trading bug, Roombas that can't distinguish between dust and dog poo, and more. Yes, Google's self-driving cars may end up being more reliable and safe than average human drivers. But that would assume the engineering team has a good handle on all situations and bugs. I'm not saying hold up progress on this front. I'm only saying that people seem to be happily buying into even trivial AI advances without both eyes open.
collyw
Is this not more of a general problem where as a software engineer you need to think of ALL possible outcomes rather than just the expected ones?
narrator
In Yemen or rural Afghanistan or Pakistan the inhabitants already have to deal with killer flying robots that are currently people operated but facing a manpower shortage. They will probably eventually be 100% automated. Just like any military action, whether the robots/algorithms or people who carry out these actions are perceived as evil or not depends on which country one lives in and what one's political beliefs are and whether one knows and how one feels about the people killed by said actions. In the future I expect it will be exactly the same. The future is already here, it's just not evenly distributed.
jsprogrammer
Flying robots with missiles/guns/high-power lasers/weapons should never be automated.
gojomo
And this "should never" pronouncement will be enforced by you and what army?
jsprogrammer
Maybe we should have global drone coverage armed with nuclear bombs? MAD's the only way to go, right?
notatoad
An autonomous drone that has been tasked to kill a human and a machine that decides on its own to kill a human are very much not the same thing. Autonomous killing is nothing new or novel - we've had sentry guns for many years now.
karmacondon
This appears to be an unpopular idea on hn, but: We are nowhere close to developing sentient machines. We have made no progress toward it, we're not getting closer. Statistical optimization and sentience are parallel lines. No matter how far you travel down one, you don't get any closer to the other.

If "evil killer robots" are created, it won't be because of advances in big data or deep learning or any other buzzword. It'll happen independently, potentially aided by current state of the art techniques but not because of them. It could happen at any time, some guy in his basement could be on the verge of a breakthrough as we speak. But google's ability to detect cat faces or Watson's winning on jeopardy are not signs of a coming apocalypse. It's highly unlikely that we can cluster or gradient descent our way to human level of intelligence, no matter how many cores or hidden layers are used.

This whole "famous people are worried about AI" thing is understandably media friendly, but it's a bit of a distraction. Fortunately the majority of active researchers aren't taking it seriously at all.

olalonde
To be honest, I'm far more worried by cellular level simulations like OpenWorm[0] because there seems to be a clearer path to human level AI: more processing power and more advanced knowledge of brain biology. Given enough time and effort, it seems inevitable to me. Luckily or unluckily - depending on your perspective - we're still far from simulating a human brain.

[0] http://www.artificialbrains.com/openworm

djulius
"The modelling data that is processed by the engine is written in XML."

You don't need to fear much; by the time the engine evolves data into something scary, the universe will be long gone.

3pt14159
If nature clustered or gradient'ed its way to human intelligence, why can't we? We have algorithms that can perform as well as fruit flies in simulations; why can't we continue to expand on the quality of our simulations and algorithmic approaches?
Houshalter
>Statistical optimization and sentience are parallel lines. No matter how far you travel down one, you don't get any closer to the other.

Source? How can you possibly know how AI will be developed? Machine learning is likely to be a huge component in any AI. Deep learning has been shown to beat all sorts of tasks thought to require strong AI. It can learn complicated patterns and heuristics from raw data, which is the hardest part of AI.

It's quite possible we are 99% of the way there, and we just need someone to figure out how to use it for general intelligence.

Regardless, this is all irrelevant. The people concerned about AI are mostly worried for the long term. No one is claiming we will have AI in ten years for certain. I'm not sure why people keep equating these two totally different predictions.

That's why ML researchers saying the stuff they work on isn't dangerous, isn't very reassuring. No one is claiming that it is.

For those who are still interested after reading the article the information is taken from the author's book [1], which I just finished and thoroughly enjoyed/was terrified by.

[1] http://www.amazon.com/Command-Control-Damascus-Accident-Illu...

"The Dead Hand" combined with "Command and Control" (http://www.amazon.com/Command-Control-Damascus-Accident-Illu...) makes for very sobering reading. Put the two together, and it's a genuine miracle that nothing bad happened. Of course, we're not out of the woods...

Both are excellent books, BTW, so I agree with the parent poster's recommendation.

arethuza
Also "One Minute to Midnight: Kennedy, Khrushchev, and Castro on the Brink of Nuclear War" - which is pretty alarming even when we know how it all worked out:

http://www.amazon.com/One-Minute-Midnight-Kennedy-Khrushchev...

HN Books is an independent project and is not operated by Y Combinator or Amazon.com.