HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Introducing Spot Classic (previously Spot)

Boston Dynamics · YouTube · 442 HN points · 1 HN comment
HN Theater has aggregated all Hacker News stories and comments that mention Boston Dynamics's video "Introducing Spot Classic (previously Spot)".
YouTube Summary
Spot Classic is a four-legged robot designed for indoor and outdoor operation. It is electrically powered and hydraulically actuated. Spot Classic has a sensor head that helps it navigate and negotiate rough terrain. Spot Classic weighs about 160 lbs.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Feb 24, 2016 · 2 points, 0 comments · submitted by support_ribbons
Feb 16, 2015 · 2 points, 0 comments · submitted by tim_sw
Feb 10, 2015 · 426 points, 245 comments · submitted by mhb
karmacondon
Two points of clarification on this technology.

Its current intended military application is to carry equipment. Ammunition, spare parts, weapons, food and the like. Many US Army soldiers carry 70-80 lbs of gear into combat. The idea is to lighten that load by having machines carry some of it, in order to save lives and shorten conflicts. Wheeled vehicles don't work on a lot of terrain types, hence robot dogs.

Robots don't feel pain or a sense of injustice when you kick them. It's no different than kicking a washing machine or a car tire. It looks like an animal out of necessity, to be able to follow humans while walking over uneven ground. But it's just a device. If it can't take a few kicks how can it be counted on to climb a hill or navigate a jungle?

Somewhere out there, a 19 year old kid is about to carry 80 pounds of gear on a 10 mile march in order to be shot at upon reaching his destination. The only thing he cares about is living to see tomorrow's sunrise. Kick the hell out of the damn robots.

jessriedel
> Many US Army soldiers carry 70-80 lbs of gear into combat. The idea is to lighten that load by having machines carry some of it, in order to save lives and shorten conflicts.

Minor correction: from talking with folks at the Naval Research Lab, it sounds like soldiers empirically will respond to lighter loads by packing in more until they hit the target weight (more batteries, more ammo, more armor, etc.). Of course, this is just good evidence that they value the effectively higher capacity even more than they would less weight.

csomar
Is it about feelings, or being a device? Because humans (or animals for that matter) are also devices. They just happen to be a lot more complicated.
Wohlf
I don't really get feeling bad about kicking the robot; it must just be that it looks so much like an animal. Speaking from military experience, I can say being able to kick it out of the way is extremely useful: it may be between me and the closest cover from enemy fire, or it might be about to step on a mine or IED.

Looks very promising; hopefully future versions can come with built-in light armor. That would be a dream come true for many grunts out there: letting a robot carry your extra supplies and also provide cover from small arms and shrapnel.

patrickyeon
I don't have the stories handy, but there are records of military units getting emotionally attached to their bomb-disposal robots. For example, some units have held funerals for robots that were rendered non-operational, and some soldiers have even gone to the extreme of risking their own lives to "rescue" "injured" service robots. It shouldn't need to be said, but this is the exact opposite of what the robots should be doing (reducing risk to their operators).

Kick the damn things all you want. Normalize it so that nobody does something stupid trying to protect a machine.

DangerousPie
Oddly enough I can't help but feel a bit sorry for the robot getting kicked back and forth there. I know you want to demonstrate how stable it is, but damn guys, no need to be so cruel!
TeMPOraL
I agree, I feel the same way, and I felt it the previous time, when they were demonstrating the stability of their BigDog. Seeing this robot being kicked evokes in me feelings of anger directed toward the person doing the kicking. It feels as if they're kicking a dog for absolutely no reason.

These videos from Boston Dynamics highlight how little it takes for people to have strong emotions toward inanimate objects. This will be interesting to explore, though I'm not happy about more advertisers doing things like that stupid IKEA lamp ad, which hijacked the audience's feelings of compassion.

saganus
Also, to add to your comment, I can imagine that designers will at least test the effect of "dressing" these robots with animal costumes so to speak. Imagine Spot with a dog costume, so it looks more like a dog.

You could study different human reactions. For example, are soldiers more likely to trust their robot companions if they look closer to an animal, or will they trust them more if they don't? Does putting on an animal costume generate more or less fear/terror in the enemy? Etc., etc.

And then this will in turn prompt, I'm guessing, the ability to conduct psychological studies about human responses to animal cruelty, for example, without actually harming an animal, as seen in this thread by a lot of people reporting being angry when the robot was kicked by the staff.

Imagine a robot that behaves like a dog (a lot more than Aibo does), and you program it with "pain" responses, sound, etc. You could maybe even use that (just speculating here of course) as a metric for sociopathy/lack of empathy problems.

Gracana
I wonder if that would be an effective way to protect an unattended robot. If it quakes and whimpers when people touch it, or yelps and scrambles away when attacked, would people leave it alone? Obviously it wouldn't stop someone determined to cause harm, but it might get the desired response from someone who wants to look with their hands or mess with it for fun.
Crito
There is also the possibility that such reactions might actually offend some people, who might interpret it as an attempt by the robot's creators to "play god". To technically inclined people like us, the whole thing is pretty demystified, but I can easily imagine some people who don't understand the mechanism to see it as a sort of "tower of babel" deal.
GraffitiTim
I felt the same way. I think the dynamic way it moves is what evokes the anthropomorphic feeling for me, more than its physical appearance. It has a lively bounce to its step. When it looks like it's struggling to catch its balance, that's literally what it's doing. Reminds me of some kind of Disney or Pixar table or other inanimate object brought to life through movement.
smackfu
At the end, I just imagined the robot dog running out into traffic.
endergen
They should have used a large cartoon-style boxing glove contraption instead. Same demonstration, less dog-cruelty looking: http://cache3.asset-cache.net/gc/165927334-boxing-glove-cont...
alisnic
it's a robot
signa11
yes, and hopefully, 10 years down the line it is not merged with an AI which takes a dim view of its predecessors being kicked around on some carbon-based life form's whims and fancies (after watching its genesis)
tomp
Oh, you optimist. Do you know about Roko's Basilisk?
xxxyy
Except it is in the AI's interest to get kicked like that. This way the company that constructs robots can show how reliable they are, and as a result receive more funding. Our robot overlords will show this to their "children" during history classes.

Or in other words: there is no reason to expect that AI will have irrational feelings similar to human feelings.

coldpie
I dunno, it's not too hard to imagine a scenario where the AI realizes being kicked is counter to its goals and decides to remove the aggressor. Especially when these things are used for combat, as mentioned elsewhere in the thread. Friend/foe indicator malfunctioning? Well, good luck.
yellowapple
I think in this context, it would be more equivalent to sparring than anything: a test of one's abilities. A sufficiently-intelligent AI would likely think the same should it watch videos of its predecessors being kicked during testing.
imr_
ANNs have feelings as well ;)
DanBC
...that's the point. Some people have a strong emotional reaction when they see this robot being kicked, even though this is clearly, obviously, a robot. They haven't, as far as I know, built in any emotional-engagement stuff. They could easily build in a few movements (robot pauses, cranes its head toward its human companion, makes a noise, robot continues) which would freak a few people out.

It's a pretty remarkable robot.

ajtaylor
That's exactly what my wife said! Wildly exciting to watch either way though.
pwelch
Agreed!
wahsd
Imagine a future where the artificial neural network decides that humans are unfit for life on this planet because that guy kicked the robot and the data about the event lived in perpetuity. Talk about long memory.

It almost seems like an underestimated aspect of human nature: the value or role of the atrophy of knowledge, lack of focus, and distraction from importance in our process of bumbling from one fuck-up to the next along the most convoluted path possible toward improved living circumstances.

rndn
Exactly, it's a disgusting attitude. IMHO, our future robots shouldn't be built by people without some sense for aesthetics.
michaelt
Reminds me of http://www.washingtonpost.com/wp-dyn/content/article/2007/05...

  the autonomous robot, 5 feet long and modeled on a stick-
  insect, strutted out for a live-fire test and worked 
  beautifully, he says. Every time it found a mine, blew it 
  up and lost a limb, it picked itself up and readjusted to 
  move forward on its remaining legs, continuing to clear a 
  path through the minefield.

  Finally it was down to one leg. Still, it pulled itself 
  forward. Tilden was ecstatic. The machine was working 
  splendidly.

  The human in command of the exercise, however -- an Army 
  colonel -- blew a fuse. 
  [...]
  This test, he charged, was inhumane.
walterbell
Charles Stross, commenting on Saturn's Children, http://www.antipope.org/charlie/blog-static/2013/07/crib-she...

"A society that runs on robot slaves who are, nevertheless, intelligent by virtue of having a human neural connectome for a brain, is a slave society: deeply unhealthy if not totally diseased. "

cfontes
Nevertheless, I expect to see BigDog-like robots walking among us in the near future (10 years?) as police, firefighters, and in other services.

When things are this cost-effective (no one dying in action), they are adopted very fast.

pyrois
I mean, absolutely, that would be morally and, hopefully, legally wrong. Moreover, there are many ways to evaluate "intelligence", and it's not even clear that such criteria are the correct ways to judge whether a creature is a moral patient, a moral actor, or neither (for lack of better terms).

All that said, I think it's fairly clear that Spot is just a dumb machine. Some of its descendants might be more, but we haven't gotten close to the "robot slave" point.

walterbell
Leaving aside robots for a moment, look at what happens to human labor markets after trade agreements with countries that have ... different labor standards.

It's our human choice whether we race to the bottom (cost reduction) or race to the top (agency). If we're going to play god, should we seek to build slaves or agents with some degree of freedom? Or require devices to have realtime human-agency guidance?

pyrois
I think you're conflating several issues.

I don't think it's a choice between "race to the bottom" or "race to the top". Someone needs to do dangerous, nasty, repetitive jobs if we want to maintain a standard of living that many people have become used to. Creatures with the sort of agency you're describing are, in my opinion, unsuited to those tasks, for several reasons, including moral and economic reasons. The robots we are increasingly using to do those jobs are much better suited, and there isn't (again, in my opinion) a moral objection that solely applies to such machines.

That said, our policies are woefully out of date in the face of such increasing automation. Our current system conflates employment with even a meager standard of living. We are going to need to revise our policies, both in the more developed nations and in those that have, as you so tastefully put it, "different labor standards". I don't know how to do this. There are many proposals; a popular one is the basic income guarantee. I'm not educated or intelligent enough to really understand the implications of such a policy, but I can agree that the just and humane treatment of all creatures with the agency you're talking about is among the best guiding principles that I can think of.

The two issues raised above (whether it is moral to use a machine for automation, and the fair treatment of creatures with agency) are separate from the point related to the development of human-manufactured creatures with agency. We don't know how to do it yet, but we are slowly working towards it. Assuming that we eventually do figure it out, it will be a victory as long as we treat our new children like we would treat our Homo sapiens children. The research and development of creatures with agency and of machines for industrial automation are not mutually exclusive, and they serve different purposes.

To try to put it a different way: something is going to need to harvest fruit. It's a shit job. I would rather have Spot do it than a person of any variety, human or otherwise.

walterbell
Thanks for your thoughtful response.

Should we allow some of our creations to have access to their design docs and source code? How about private communication with each other?

There's also a property/control rights question: should the manufacturer and/or regulator of the autonomous device always have a remote override, or should the purchaser/owner of the device have exclusive control over software policy? Analogies can be made with DRM and autos.

cinquemb
Scale this[0] up to 100 billion simulated neurons (feasible on a DoD budget), and it will probably operate way beyond what a single human, or groups of humans, can do. Build multiple of them, and the descendants can just copy the models built at t0 and be as intelligent as one that spent the time to build those models; it takes us ~20 years to do the same for humans (maybe less so over time, but not on the order of what can be done with something like this).

Some relevant quotes from Bubblegum Crisis Tokyo 2040:

"They exist as substitutes for the lower castes, the indentured labour, for all manners in which humans formerly oppressed their own. Slaves."

"Why do I exist? Was my purpose to replace humans, whose inability to coexist in peace is their evolutionary flaw? Or was my destiny to serve as the progenitor of a subservient race? I do not know. I did not ask to be born."

"A being is a being. A machine is a machine. Most humans would believe these two states to be exclusive, separate orders of existence. And yet, they are not. The key is neoteny, the retention of characteristics from an earlier stage of development. A human fetus follows the path paved by its ancestors, evolving in the womb from unicellular, to amphibian, to mammal, to man. There were those who believed that humanity was the end of the progression, the end product of natural evolution. They were wrong."

[0] http://www.dailymail.co.uk/sciencetech/article-2851663/Are-b...

swamp40
And your story reminds me of a story my dad used to tell about one deer season when they spotted a huge buck.

Someone shot, and the shot took both of the deer's front legs right off, but amazingly it hopped into the woods using only its rear legs.

They tracked it for a while, and spotted it again.

Then another shot, and one of the rear legs flew off.

It fell over, but still it kept pushing its way along with its remaining leg.

They kept tracking it.

Finally, they caught sight of it again, and shot its last leg right off.

Unfortunately, the deer still got away.

They couldn't track it anymore, and lost its trail.

ricardobeat
This is the most terrible and cruel story I've read all day :(
swamp40
Don't worry, the deer is still alive and doing well.

I just saw him coming down a hill the other day.

I call him Snowball.

blechx
The military interest in these machines seems to go in the direction of what you could call drones on the ground.

Invading an area on the ground is still necessary to occupy and maintain control. Air-based drones are used more for targeted attacks and assassinations, with documented collateral damage, or, put less eloquently, the killing and terrorizing of civilian populations.

With drones on the ground, the situation changes completely, and you can have much more control over areas without putting any soldiers lives at risk.

This would make occupations much more cost-effective, probably mostly in the PR sense: any government getting lots of its youth killed will sooner or later have a problem at home. Not so much with drones.

I find this a very frightening development.

honeybooboo123
Yes, they're developing tools for war and tyranny.

For example, why else would their robots need to be able to move over "rough terrain"? Watching those videos was vaguely sickening. As icing on the cake, the people those robots will be used against are now paying for their development.

berberous
These things trotting along as a pair at 1:24 are super creepy. They look like cops. I can't help but think of the Sentinels from X-Men.
hcrisp
I'm more impressed with the agility of the technology. But it does remind me of the Mechanical Hound from Ray Bradbury's Fahrenheit 451.

"The mechanical Hound slept but did not sleep, lived but did not live in its gently humming, gently vibrating, softly illuminated kennel back in a dark corner of the fire house. The dim light of one in the morning, the moonlight from the open sky framed through the great window, touched here and there on the brass and copper and the steel of the faintly trembling beast. Light flickered on bits of ruby glass and on sensitive capillary hairs in the nylon-brushed nostrils of the creature that quivered gently, its eight legs spidered under it on rubber padded paws.

"Nights when things got dull, which was every night, the men slid down the brass poles, and set the ticking combinations of the olfactory system of the hound and let loose rats in the fire house areaway. Three seconds later the game was done, the rat caught half across the areaway, gripped in gentle paws while a four-inch hollow steel needle plunged down from the proboscis of the hound to inject massive jolts of morphine or procaine."

styts
I am reminded of a different creature: Rat Thing from Neal Stephenson's Snow Crash.
sgt101
Oh noes - the robot army is here! Those submarines with the power to end life as we know it (basically post 19th century life), we've forgotten about them, let's all get worried about the robot army!

What I mean is, it's not autonomy that you should worry about, it's actuators. The biggest actuators are nuclear weapons: a Trident sub can slap 200 warheads, each 50× bigger than Hiroshima's, onto a given continent, and the fires would blot the sun from the sky for two years. There are 7 billion agents with autonomy knocking around, and the type they are derived from has a bad record of doing dumb, evil things!

Cool AI would probably be a safer arbiter of our extinction.

sfjailbird
The kicking is a playful reference to the reaction people had when they showed the original BigDog demonstration (also with the kick test). Hence the "No robots were harmed in the making of this video" at the end.

Perhaps also a clever way to build sympathy for what might otherwise be a somewhat scary machine.

o0-0o
My French coworkers say "harmed" like "armed", with a silent h. Mount a Gatling gun on this puppy and see how people respond to it. Have it fire a round at every direction change or slip of 5°. This is getting real too quick for me.
mdtancsa
Every time it's kicked, it should respond in a synthetic voice with, "I will remember you."
dysfunction
I don't hate you
beltex
"What Are The Civilian Applications?"

https://twitter.com/elonmusk/status/565181590431485952

rasz_pl
killing civilians while sporting police badge, duh
on_and_off
automated delivery service ?
rbobby
Slap a saddle on it and you have a fun new vehicle.
moreati
That's a lot quieter. Is this the first BD quadruped to run without an engine? Anyone care to guess how long the (I assume) batteries last?
msane
This was the first thing I noticed as well.

I believe it still runs on diesel. It seems like they wanted to solve all the other issues before moving on to making things quieter and smaller. If you look at the latest ATLAS iteration it is also quieter.

Kronopath
From the description:

Spot is a four-legged robot designed for indoor and outdoor operation. It is electrically powered and hydraulically actuated.

yellowapple
BD's first, possibly, but I know there are some folks at MIT who are/were working on an electric quadruped as well.
jfoster
I vaguely recall that when the BD acquisition was announced there was speculation that Google might use these for package delivery, but not sure whether that came directly from Google or not.

What are the simplest applications of these? Has it been made known what direction(s) Google intend to take this in?

Cthulhu_
TBF it seems a highly inefficient way to move packages in its current form. I'd sooner see an automated truck move around that honks when it's in front of its destination. Or just standardized package dropboxes placed in central locations (on the street in neighbourhoods, in front of apartment buildings, etc).
qznc
Yes, anthropomorphic or zoomorphic robots are probably not the most efficient form. If you want to move boxes around, a moving plate is enough.

Kiva Systems Warehouse Automation: https://www.youtube.com/watch?v=3UxZDJ1HiPE

Moving through public space might require more than a plate for safety and mobility reasons.

jfoster
Agreed, which is why I'm asking what the simplest applications of it might be. It feels like it will be incredibly useful technology, but possibly a long way off any compelling application aside from the military ones that they're pulling out of.

Best I can think of for this is in the area of mobility. Watching it get up stairs and hills makes me think that (with some changes to the form factor) it stands a chance of significantly improving the lives of people who are currently bound to wheelchairs.

AndrewKemendo
Automated search and rescue, or really anything that needs a higher resolution search/delivery than overhead can provide in inhospitable areas.
yellowapple
That niche is already covered by organic dogs, though; perhaps these robotic dogs are meant to be a replacement that requires less training expenditures?
AndrewKemendo
I mean, most niches that have been automated were covered by organic capabilities previously. The idea is reducing costs and removing those organic capabilities from dangerous or fatigue-inducing situations.
TeMPOraL
It might not be part of Google's vision at all. During the BD acquisition, Google stated that they would honor BD's existing military contracts; they would just not take new ones.
mkempe
The most obvious application is military.
melling
Nah, you're just highly open to suggestions from previous discussions. The U.S. economy is almost $17 trillion. Plenty of uses... mining, construction, oil and gas, etc.
jfoster
What kinds of applications in those industries?
JoeAltmaier
pack animal in rough country
eitally
To go anywhere people can't, or shouldn't.
rasz_pl
>package delivery

in the Afghan mountains or the jungles of Ecuador, not a city downtown...

xxxyy
If I were both rich and paraplegic I would like to have such a robot for myself to take me on hiking trips with friends. The noise could be an issue, but come on - these are just prototypes.
orbifold
Or you could ride a pony :).
yellowapple
They're getting quieter anyway.

I'd personally love to have one to help me carry my groceries and such. Or perhaps as my wingman (wingbot?) when I'm out on the town drinking. That would be cool.

nl
There's an old saying in AI/Robotics research.

Q: How do you escape a killer robot?

A: Walk up some stairs.

Might be time to reevaluate my escape strategy.

TheLoneWolfling
Plan B:

Q: How do you escape a killer robot?

A: Climb a ladder.

Florin_Andrei
Until the killer robots come in the shape of flying multirotors driven by an AI.
TheLoneWolfling
Then just throw floss at them. (Or any other similar thin thread / line - preferably with something tied at the end to make it easier to throw)
cududa
Looks like BD thought of that one :)

http://www.bostondynamics.com/robot_rise.html

TheLoneWolfling
That's for textured surfaces only.
yellowapple
And to make things even worse: https://www.youtube.com/watch?v=6b4ZZQkcNEo
Simp
The fact that we feel sorry for it as it is kicked is a testament to how dog-like it really is. It reacts to falling over like a real dog would.
dEnigma
Aww, I really feel bad for the robot when he gets kicked, more so because this model is so small. And the animal-like stabilization motions don't help either. Great work by Boston Dynamics
kbart
Strange as it may sound, I felt sorry for that robot when it got kicked.
acadien
It seems like using these devices for delivery is one possible end goal (besides the obvious military applications). A self driving truck alone cannot deliver packages to your front door. However put one of these robot dogs inside and figure out some way for it to pick up and drop off packages and suddenly you can take on FedEx/UPS/etc.

Side note, anyone else really want to ride one of these?

jobigoud
Is this a rotating camera for navigation? Is there more in-depth info on this particular piece of the robot?
platinum1
Looks similar to what's on top of Google's self-driving cars. I didn't really imagine that much integration between Boston Dynamics and the rest of Google, but perhaps this is a sign that the two groups have joined (or are at least working together).
dangrossman
I don't think the use of LiDAR indicates any connection to the car group. You can pick up a Neato robot vacuum at Wal-Mart for $100 with a spinning LiDAR on top, and that's been on store shelves since before Google acquired their first self-driving car team. It's just the go-to sensor for mapping your surroundings in near-realtime.
WillNotDownvote
I wonder why it's obscured so much by what looks like protective housing in some shots. I also wonder how much of its navigation it's actually doing, as opposed to being remote controlled.
monk_e_boy
Looks like the same lidar they use on google cars
joshuaheard
I would like to have seen a human jump out in front of the robot and the robot dynamically reacting.
culturestate
It looks like a Velodyne HDL-32 [1] LiDAR unit. Broadly speaking, it pulses a bunch of lasers (32 in this case) really fast and measures the response time of each beam; think hyper-focused radar. The resulting point cloud can be used to build a high res map of its surroundings essentially in realtime.

1. http://velodynelidar.com/lidar/hdlproducts/hdl32e.aspx
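The time-of-flight arithmetic described above can be sketched in a few lines. This is a toy illustration, not Velodyne's actual firmware; the timing and angle values are made-up examples:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_to_distance(round_trip_seconds):
    """A pulse travels to the target and back, so halve the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def to_point(distance_m, azimuth_deg, elevation_deg):
    """Convert one beam return (spherical coordinates in the sensor frame)
    to a Cartesian (x, y, z) point; many such returns form the point cloud."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# A return that took ~66.7 nanoseconds corresponds to a target ~10 m away.
d = tof_to_distance(66.7e-9)
point = to_point(d, azimuth_deg=45.0, elevation_deg=-10.0)
```

With 32 beams at fixed elevation angles and the head spinning in azimuth, repeating this conversion for every return is what builds the real-time map of the surroundings.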

mturmon
I forget the cost of that unit, but it's north of $10K, maybe $20K.
tmikaeld
Man, look how stable this new model is. You could almost place a coffee cup on top and it wouldn't spill.
q2
How much weight can these machines carry? If they can carry a human's weight, then we have "robotic horses", provided the height is right, just like a horse.

I read Google has a self-driving car project. Now it seems we can have self-driving bikes/two-wheelers in the future.

Also, we see/hear about vehicular/car accidents, and in the future I won't be surprised to hear/see about "robot accidents".

jpindar
The full-size military one, known as LS3 (Legged Squad Support System), which is now undergoing field tests, can carry several soldiers' packs, so it should be able to carry a person.

https://www.youtube.com/watch?v=cr-wBpYpSfE

higherpurpose
That's incredibly creepy. I felt uneasy throughout the whole video.
uptownJimmy
All of these robot videos make me feel very apprehensive.
ashark
I think it's a combination of video games and fake corporate videos from movies. This seems like something that would be playing on a TV in a near-future sci-fi movie. Something out of RoboCop, or those cut bits of Terminator 2 with the guy who invented the things. Hell, Short Circuit for a more lighthearted take on it. We've been bombarded with these sorts of videos being a precursor to bloodshed since at least the 80s.

Maybe the protagonists of a movie are in a research lab that's mysteriously gone silent, everyone seemingly having disappeared without a trace, and they watch this video, trying to figure out what's gone wrong. They flip through a couple more before finding the one where these things start killing everyone. Other 'bots drag off the bodies, and cleaning robots spin across the floor, wiping up the blood. While the characters' eyes are glued to the screen, the viewer sees the "dead" half-disassembled dog-robot on the workbench behind them start to silently shift, then slowly stand up.

See:

http://tvtropes.org/pmwiki/pmwiki.php/Main/ApocalypticLog

uptownJimmy
You might be confusing the chicken for the egg. ;)

I think the reason robots are so scary in movies is because they inherently are scary. Sharks are scary in movies because they are inherently scary.

I am convinced that robots hold the potential to completely devastate our collective ways of life, due to violence programmed into them, due to their disruption of job markets, or perhaps even due to their coming under the control of some future AI-type construct. These scenarios are not ludicrous to contemplate; they are, in fact, quite possible, the first two even likely. That's scary stuff, never mind Ally Sheedy's career-destroying performance in Short Circuit...

JoeAltmaier
But for a long time, it'll be pretty easy to fool programmed machines. They'll be suckers for any kind of fakeout, being poor judges of human behavior.
rbobby
Even though they're obviously robots they somehow have landed smack dab in the middle of the uncanny valley. Maybe because the legs bend the wrong way.
ekianjo
Impressive, but wouldn't such a robot be easily trapped in nets placed on the ground?
binarymax
For the folks having an emotional reaction to the kick and re-stabilization, I am reminded of the interesting way anime has intentionally provoked this response as an art form. The other day I was watching a Ghost in the Shell: ARISE episode, and the carrier Logicoma is a pink death machine with a cute voice. It makes me realize how good certain aspects of Japanese sci-fi are. I commented previously on adding a white shell to the body to make it more appealing. Maybe Boston Dynamics is keeping them like this on purpose? Maybe they are purposefully trying to reduce the anthropomorphic attachment level?
consti2k
Remember the sound of those machines; it may be the last thing you hear before you get killed by a future version of these robots. <scary/>
Fuzzwah
I watch these things move and wonder why a huge amount of effort is going into bipedal robots, i.e. the DRC: http://www.theroboticschallenge.org/

Surely an arm (or two) attached to the top of one of these would allow for almost the same level of interaction, while simplifying the whole "moving around" thing.

51Cards
I think it's an ergonomics thing. We have spent a lot of years building a world and tools modeled around the bipedal creatures that created them. One of these would have a hard time going through a revolving or spring-loaded door, or sitting in a vehicle. If we can eventually create beings that integrate with the existing world seamlessly, that's easier than adapting everything else.
binarymax
I know these are dev models and all - but the public would find them much more pleasing if they'd just put a white plastic shell over it.
imr_
I do not think this would be a good idea. Firstly, because of the uncanny valley: http://en.wikipedia.org/wiki/Uncanny_valley

Secondly, because if we see robots as living beings (either consciously or subconsciously), we will associate emotions with their existence. I fail to see any benefits that can come from this. Just imagine movements spawning all over the world to fight for robots' freedom and their right to vote.
TeMPOraL
I'm pretty sure that putting a white plastic shell over those robots would make them less uncanny - by the very virtue of looking like crap. Seriously, all those "androids" that are fashionable now, made of white plastic and with weird faces, just look ugly. I'm unable to connect emotionally to them the way I easily can with Big Dog or Spot.
Cthulhu_
TBF, if they can make better choices than humans, I wouldn't be that bothered if they could vote. :p
axefrog
Uncanny Valley doesn't apply. It would apply if they put a skin-and-fur-like shell over the body and tried to give it a life-like head, with eyes, nose, mouth and tongue, and it looked almost perfect, but not quite good enough to pass as the real thing; just enough to make it seem off the mark somehow. A white plastic shell hardly comes close; it would just provide a more attractive veneer.
cconcepts
I saw Big Dog in action way back when those videos came out, but for some reason I find the confidence and obvious dexterity of this thing way creepier. Not to mention that Big Dog had a noisy ol' two-stroke engine, so I could hear it coming - this bad boy could sneak up on me if I was sleeping soundly enough...
lispm
Interesting to see where they will be deployed first. Afghanistan or some inner city conflict in the US?
happyscrappy
Clearly they will be deployed by the EU against Greeks rioting because the retirement age was raised to 59.
desdiv
Didn't Google announce that they won't be taking any more DARPA contracts in the future? People were reading into that and suggesting that they were moving away from military hardware altogether.

That's what I was told anyways: https://news.ycombinator.com/item?id=8825795

EDIT: Better sources:

http://www.businessinsider.com/google-and-darpa-robotics-cha...

http://www.popsci.com/blog-network/zero-moment/google-rumore...

lispm
Does that mean Google will not SELL those robots to the army?

DARPA only FUNDED THE R&D for most of their robots. DARPA is a funding organization for military research projects: the Defense Advanced Research Projects Agency.

The real customers then will be military, police, homeland security and agencies active in special operations (CIA, ...).

From their homepage:

> Organizations worldwide, from DARPA, the US Army, Navy and Marine Corps to Sony Corporation turn to Boston Dynamics for advice and for help creating the most advanced robots on Earth.

So US Army, Navy and the Marine Corps are already giving money to Google for military robotics projects.

onewaystreet
Military use isn't as big a market as you think it is. iRobot is the largest supplier of robots to the US military but military sales only account for 10% of its revenue.
lispm
With automated cleaning systems.

Haven't seen any comparable offerings from Boston Dynamics, yet.

voxic11
He isn't talking about cleaning systems http://www.irobot.com/For-Defense-and-Security.aspx#PublicSa...
lispm
Yeah, that's defense and security. We know that they make money there.

iRobot makes most of their money right now with automated cleaning systems. Just look at their recent financial statement. But that's an offering which Google / Boston Dynamics does not have.

asdkl234890
> Didn't Google announce that they won't be taking any more DARPA contracts in the future?

Yes, but they will also finish the current contracts.

usaphp
I don't understand what advantage this brings over a small tank or something with caterpillar tracks, if they were used in a war. Tracks seem more effective than legs, especially in the dirty environments of a war. Can somebody explain that to me?
InclinedPlane
Even with treads a vehicle that size couldn't handle the terrain that thing was going over.
usaphp
Also, a simple mantrap would snap its leg, or at least make it impossible for it to walk further.
rasz_pl
This is a small tank in a bombed city block:

https://www.youtube.com/watch?v=r9hWEFhIxLg

It has firepower, but ultimately gets stuck.

ovulator
It can traverse very tight, rugged terrain, like a bombed-out building.
cygwin98
Wow, amazing! Though it does seem a bit creepy to me, as the stories in old sci-fi novels, where a big corporation X builds and operates a robot army that takes over governments and rules the world, tend to become reality at an increasingly fast pace.
Shivetya
Hit it with a car. Can it get up if knocked over? Fall down stairs and get up? Can it walk on three legs? Just how resilient is it? I am impressed by what it can do in the video, but I would love to see how it recovers from other than very tame issues.
anjc
> I would love to see how it recovers from other than very tame issues

You do know that these things don't pop out of the universe via magic, right? Do you understand the engineering issues involved here well enough to judge the tameness of the tests? Why would they hit their expensive creation with a car?

onion2k
Regardless of how bad it is, it's probably a lot more resilient than a human in those situations.
Cthulhu_
Kick a human like that and he'll be like "OH MY GAWD WHY DID YOU KICK ME YOU NERD!". Robot be like "eh I'll get up and keep going"
repsilat
Hah, read the beginning of this article: http://www.washingtonpost.com/wp-dyn/content/article/2007/05...

It starts by talking about a minefield clearing robot, but the whole article's really good.

lez
a bow knot around one leg is enough to paralyze it, I suppose
sjtrny
> Can it get up if knocked over?

Yes https://www.youtube.com/watch?feature=player_detailpage&v=R7....

> Just how resilient is it?

Watch it on ice https://www.youtube.com/watch?feature=player_detailpage&v=cN...

I assumed the HN crowd had already seen all these videos.

circuitslave
They are very "dog"-like. Maybe a new market will be small robotic pets you can upload your old pet's mind into, so you can have "fluffy" with you forever - or until the maintenance contract is up, anyway.
TeMPOraL
"Hi, my name is Blinky, and I just want to be your friend!"

http://vimeo.com/21216091

circuitslave
Thanks for the link!
95win
Can anyone point to refs that explain Boston Dynamics' approach to robot control and coordination? It seems to be a completely different method than something like ASIMO's. Any insight appreciated.
JoeAltmaier
This thing seems to walk like it's blind - is that the case? It responds to ground conditions AFTER taking a step and slipping, instead of choosing a path carefully.
metaphorm
It has optical sensors, but they don't point at the ground at its feet; they point ahead at the horizon. Similarly, humans don't watch their feet as they walk; they look ahead. Making real-time adjustments to footing is a necessary part of walking. The sensors on the feet provide the data for these adjustments, but they are touch sensors, not optics.
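That feel-then-correct style of footing can be sketched as a toy control rule. This is purely illustrative - the function name, gain, and numbers are invented for this sketch, not Boston Dynamics' actual controller:

```python
# Toy model of "blind" reactive footing: plan a nominal foothold, then
# fold the touch-down surprise (height error, slip) into the next step.
# All names and gains here are invented for illustration.

def correct_foothold(expected_height, contact_height, slip, gain=0.8):
    """Return (next height target, stride scale) after one touch-down.

    expected_height: where the controller assumed the ground was (m)
    contact_height:  where the foot actually made contact (m)
    slip:            lateral slip measured at contact (m)
    """
    error = contact_height - expected_height
    next_target = expected_height + gain * error
    # Shorten the stride when the foot slipped, but never below half.
    stride_scale = max(0.5, 1.0 - 2.0 * abs(slip))
    return next_target, stride_scale

# Ground turns out 5 cm lower than expected, foot slides 10 cm:
target, scale = correct_foothold(0.0, -0.05, 0.10)
```

Note that nothing here requires seeing the foothold: the correction uses only contact feedback, which matches the "step first, adjust after" behavior described above.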
aeturnum
Man, Boston Dynamics continues to impress. Their work has been fantastic to watch evolve. The endless applications of autonomous robots that can navigate almost any terrain should be obvious. Finding missing people, supply deliveries, searching for poachers, surveying, etc.

That said, I feel like I know what the next century's horseman of the apocalypse will look like. :)

raindrop777
Does anyone know what mapping software these guys are using? A SLAM implementation, perhaps?
akurilin
Can't have a real dog, but I'll sure go for one of these.
kenbellows
BD's 4-legged robots consistently fall just a little inside the uncanny valley[1] for me.

[1] http://en.wikipedia.org/wiki/Uncanny_valley

raldi
Actually, my reaction was, "Holy shit, robot quadrupeds have reached the uphill side of the uncanny valley."
Symmetry
It seems much less noisy than the Big Dog.
roadnottaken
How does it handle in the snow?
michaelt
As easily as you and I:

Snow: http://youtu.be/W1czBcnX1Ww?t=1m12s

Ice: http://youtu.be/W1czBcnX1Ww?t=1m24s

wiineeth
I feel scared looking at them
nakedrobot2
Ok, it's time for another (perhaps timely and relevant) debate about whether AI is going to destroy us or if The Robots are going to take over.

Let me point to a great, great essay and debate that lays out lots of these arguments, and points out the fundamental mistakes that are being made when people bring up the fear over AI. Here is the most relevant quote, for me:

"let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything. Here I want to suggest a simple thought experiment of my own. There are so many technologies I could use for this, but just for a random one, let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.

I'm going to give you two scenarios. In one scenario, there's suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There's so many of them that it's hard to find all of them to shut it down, and there keep on being more and more of them. That's one scenario; it's a pretty ugly scenario.

There's another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?

The truth is that the part that causes the problem is the actuator. It's the interface to physicality. It's the fact that there's this little killer drone thing that's coming around. It's not so much whether it's a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there's enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm that's good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it's a massive misdirection.

This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face."

http://edge.org/conversation/the-myth-of-ai#26019

imr_
What I really fear is not that AI will take control and start to kill humans. I fear that we will trust AI to the point that when it tells us to kill each other, we will follow.

Let AI decide whether to use solar energy or coal, and when to switch. Let humans decide about other humans.

AI should be our compass on a starless night, not the captain.

thatsjustcrazy
This is exactly what TLA systems are designed to do. They call them signature strikes.
j0e1
Robots might just take over the role of man's best friend!
wahsd
So how many of you think you or your children will survive the massacre once AI robots make human work unnecessary for the ruling class to live a life of exploitation?
JabavuAdams
Teach them how to build the robots and how to hack the robots, now.
bastih
Does anyone else feel creeped out by these robots? Put a flamethrower on Spot, add some intelligence, and have that intelligence decide that humans are bad.
akavel
Halfway through the video, when suddenly there are two of them - now that was creepy to me... as if the first one had suddenly cloned itself effortlessly... and now both are happily hopping along: we're just harmless, funny guys, you know, your best friends!... <shudder>
prawn
Climbing the hill with their eye crazily spinning around!
onion2k
No less scary than a human with a flamethrower deciding which humans are bad.

At least with the robot there's the potential for the excuse that it's gone wrong rather than wanting to kill people for whatever reason.

Shivetya
Which is more frightening?

Oh, the robot did it. Great, like that is going to make it all okay. Fuck, drones have already allowed those in charge to excuse themselves from bad choices; now who are we going to blame? The programmer or the robot? I guess whoever has the least protection under the law - the robot certainly won't care.

askmike
> Fuck, drones have already allowed those in charge to excuse themselves from bad choices

This has been happening for thousands of years, since long before drones came along. Humans have been used since the beginning of time...

1stop
We don't have any autonomous military drones... there is a pilot pulling the trigger... so I don't understand your point.
Cthulhu_
Less scary than an automated or remote-controlled drone shooting a missile from a few kilometers away, TBH. Easier to kill, too. Probably a lot cheaper to produce. And more effective.
hmottestad
Definitely creepy. I was thinking about the two-legged robots in Star Wars - walker something or other.
jp_sc
It reminds me of those enemies from Half-Life 2: Episode Two: https://www.youtube.com/watch?v=mT-tV7zOEPU
m-app
It's just a walking robot. Drones have been around for a lot longer and already carry deadly weapons. Following your logic, you should have been scared and running for years already.
blechx
To be fair, if you ask people in Yemen, 'scared and running' is probably exactly to the point.

http://www.theguardian.com/world/2015/feb/10/drones-dream-ye...

derefr
Indeed; further, if we actually wanted something that could climb into your house and shoot you remotely without expending human life, we could have just strapped a turret and a cellphone to a police dog.

Robots are scary only because they allow for ubiquitous drone warfare: millions of drones clogging the streets, or maybe small enough to hide in the shadows like stray pets. For targeted drone warfare, though, we've already had the technology for a long, long time.

jpindar
Police departments already use robots with cameras and guns. But because they're small and have tank treads rather than legs no one seems worried about them.
justinpaulson
The goal of a robot should never be to kill a human being. Robots are not human police officers in danger of death, and therefore should never be equipped with lethal rounds. The goal of a robotic soldier or police patrolman would only be to disarm and detain. A robot would not be making life-and-death judgment calls, because it is not also trying to protect its own life like a human soldier or policeman. There is no reason a robot could not simply detain a criminal until humans arrive to make the arrest; there is no need for a robot to engage in lethal combat. I think the fears people have about robots swarming war zones and murdering civilians should be quelled by laws which prevent the construction of robots with lethal ammunition, and by holding those who deploy robots accountable if a robot does in fact kill a human being.
jaegerpicker
That's not completely true, though: a large part of police work is also preventing violence against citizens. What would the robot do if it arrived at a domestic violence call and needed to save a woman or child from abuse? I could see a massive public outcry if the robot simply watched the crime take place.
justinpaulson
It would disarm and detain the assailant - the same thing a human police officer ought to do in that situation. Are you trying to argue that it should just kill the abuser?
jaegerpicker
Certainly not in every case, but there are a lot of times when disarming is not an option. I don't know if you have had any kind of defence training, but I have, and disarming an armed opponent is one of the hardest things to train for. More often than anyone would like, the police really have no choice.
netcan
If you watch this and your imagination doesn't run in the direction of an evil robot army... well, your imagination doesn't work the same way as mine does.

I see a drone fleet of 100 bulky trucks galloping along contested highways towards a city boiling over with the violence of several simultaneous wars. They're protected by speed and dozens of armed UAVs. 96 make it through. Acceptable losses. The first truck to unload its four legged robotic soldiers loses 30 units to the welcome party, a quarter. By the time the tenth one is unloading, they're not losing any. The last robo-dog is unloaded just 18 minutes after the first truck arrives. By that time, there are over 10,000 dogs in squad-packs seeking targets from a database of 7,000 known enemy combatants and seeking control of strategically important sites.

There's something uncanny and creepy about how robots move once they've been riddled with armor-piercing rounds. A leg stops working or a sensor gets damaged, and it's impossible not to imagine that it's an injured animal in excruciating pain. The single-minded resolve though, that's all machine. If you're shooting off legs, you need to shoot off all four before these things will stop.

Any chance Boston Dynamics will adopt a don't be evil policy?

swamp40
...and as the last human alive scrambles up a steep hill, with 100 robo-dogs close behind, he stops and drops to his knees as he finds himself overlooking a steep cliff, with nowhere left to run.

"Why?", he looks up and asks the first robot, as it skids to a stop next to him.

One by one, a dozen other robo-dogs surround the human.

An armored plate drops open on the chest of the first robo-dog, revealing a small lcd screen.

And the above YouTube video begins to play.

The video stops after showing the callous human kicking Spot, the proto-ancestor of all the robo-dogs present on this hill.

The kicking scene begins to repeat on the screen, like an old gif.

And silently (for the robo-dogs were never given a true voice box), the leader lifts one robotic leg and, with a single powerful hydraulic kick to his ribs, sends the last human alive flailing over the edge of the cliff.

catshirt
seriously, please stop kicking the robots
visarga
It was a balance test / demo, not just a "kick".
catshirt
tell that to the robots
bitwize
"Ng Security Industries Semi-Autonomous Guard Unit #A-367 lives in a pleasant black-and-white Metaverse where porterhouse steaks grow on trees, dangling at head level from low branches, and blood-drenched Frisbees fly through the crisp, cool air for no reason at all, until you catch them.

He has a little yard all to himself. It has a fence around it. He knows he can't jump over the fence. He's never actually tried to jump it, because he knows he can't. He doesn't go into the yard unless he has to. It's hot out there.

He has an important job: Protect the yard. Sometimes people come in and out of the yard. Most of the time, they are good people, and he doesn't bother them. He doesn't know why they are good people. He just knows it. Sometimes they are bad people, and he has to do bad things to them to make them go away. This is fitting and proper.

Out in the world beyond his yard, there are other yards with other doggies just like him. These aren't nasty dogs. They are all his friends.

The closest neighbor doggie is far away, farther than he can see. But he can hear this doggie bark sometimes, when a bad person approaches his yard. He can hear other neighbor doggies, too, a whole pack of them stretching off into the distance, in all directions. He belongs to a big pack of nice doggies.

He and the other nice doggies bark whenever a stranger comes into their yard, or even near it. The stranger doesn't hear him, but all the other doggies in the pack do. If they live nearby, they get excited. They wake up and get ready to do bad things to that stranger if he should try to come into their yard.

When a neighbor doggie barks at a stranger, pictures and sounds and smells come into his mind along with the bark. He suddenly knows what that stranger looks like. What he smells like. How he sounds. Then, if that stranger should come anywhere near his yard, he will recognize him. He will help spread the bark along to other nice doggies so that the entire pack can all be prepared to fight the stranger."

--Neal Stephenson, Snow Crash

toomuchtodo
This is the exact snippet I recalled when I watched the Spot video :)
tootie
Look at it another way. People involved in deadly combat have a kill or be killed mentality. Extreme caution for personal risk means you have to shoot first and ask questions later in a war zone. Not so for robots. Send them in to take prisoners. If they are destroyed, build another. Charge them until the enemy runs out of bullets. Send them close enough to use non-lethal rounds or wound legs and move on. And forget about looting and rape. Forget about collateral damage caused by bombing city blocks from the sky and hoping most inhabitants are bad. Imagine if we could shut down ISIS with $500M worth of material and no lives lost.
Lawtonfogle
Unless they are leaders or have some needed knowledge, the lives of 'the bad guys' will be valued at less than the cost of the robot. Ask Joe Taxpayer how much tax he or she is willing to spend to capture a low-level terrorist alive instead of killing them in combat, especially when we consider that any money spent on this could have been used to improve things at home.

$500M spent to take them down alive or $100M spent to take them down dead.

acadien
The other possibility is irresponsible overuse, as with drones. If they aren't too expensive, you could just strap a bomb on one of these and run it towards any possible threat. I agree with your assessment and look forward to the onset of more nonlethal warfare. I'm just saying the opposite effect is also possible.
icehawk219
We already have irresponsible overuse of drones, and so far all we have are ones that can fly around and drop bombs on people. And the government goes to great lengths to make sure people are totally emotionally uninvolved in the decision and its execution. Once these end up in the military, it'll only be a short while before they also end up in our police forces. You think the cops are bad now? Wait until they've got a small army of drones to do their bidding. It's entirely possible that that never comes to pass, but the US's current direction doesn't instil a ton of confidence.
fargolime
Nor does the US's past direction. Since the 1950s it has toppled two democracies and thwarted another. The (final) overthrow of its own democracy may become possible through such technology.
tootie
"You think the cops are bad now?"

I don't really. And the same benefits would apply. When cops stormed the room of Amadou Diallo and thought they saw a gun, their only option was to shoot to kill. Send in a robot, and now you don't care.

mortenjorck
A single AGM-114 Hellfire missile costs over $100k, on top of the cost of arming and launching an aircraft for a sortie. I have to think a Spot-derived, four-legged land missile that could be launched from a truck would be price-competitive.
nogridbag
I would imagine tons of R&D went into creating a robot like this, and I'm worried about how easily it could be reverse-engineered by well-funded enemies.
pj_mukh
Agreed. The real problem is not the existence of this technology (that was inevitable) but its misuse, as others have pointed out. In a lot of countries (esp USA), civilians are effectively not allowed to put any limitations or guidelines on how the military or intelligence organizations use new technology. This is the actually alarming story.

If a piece of technology allows the military to capture instead of kill a supposed terrorist, will they do so? What is legally binding them to?

joshuapants
> (esp USA), civilians are effectively not allowed to put any limitations or guidelines on how the military or intelligence organizations use new technology

You do, of course, realize that the President is a civilian? And as Commander-in-Chief he absolutely does have the ability to put limitations or guidelines on how those groups use their equipment. And let's not forget congress, which is comprised entirely of civilians and could financially neuter military and intelligence programs if desired.

There are plenty of countries where there is no civilian oversight for the military, but the US is not one of them.

> If a piece of technology allows the military to capture instead of kill a supposed terrorist, will they do so? What is legally binding them to?

That's a good question. I think the preference would always be to capture if there is a possibility of gaining intelligence from the captive. If you were interested in bargaining with adversaries, it would be wise to capture at least some of them (prisoner exchanges, that sort of thing). However, if the intelligence gain would be minimal and you already have a stable of bargaining chips, it might be worth more to have a guarantee that this particular terrorist won't be in the fight any longer.

As to what's legally binding, the 1907 Hague Convention says that it is forbidden "to declare that no quarter will be given." This would suggest that surrenders from any lawful combatant would have to be accepted. To take the current example, I do not think ISIL fighters would be considered "lawful combatants" primarily because they do not respect the international laws.

pj_mukh
I was referring to these kinda moves: http://benswann.com/us-moves-to-classify-afghan-military-ove...

As a creator of this kind of technology, handing it over to agencies that are constantly battling all levels of oversight seems sketchy to me. I would understand why some people would want to ban this technology outright as an overreaction. Instead, maybe we should try to enforce controlled civilian oversight.

But yes, I am no expert on the legalities of oversight or the treatment of captured terrorists.

joshuapants
I see what you mean.

I guess nobody's really an expert on that. There's some precedent going back to the golden age of piracy, but then again most of those policies would have predated many instances of international law. I wouldn't be surprised if there's a dozen JAGs working out exactly what the US's policies should be right this instant (if they haven't already).

kbenson
I think this is a really important discussion, and there are a lot of different aspects, making a simple solution hard. If we always attempt to capture terrorists instead of killing them, and then have to detain them, does that create more or fewer terrorists in the future? Or do we capture, put on trial, and possibly execute (and if so, what happens if we pass legislation to ban executions)? Or, as we generally do now, do we just kill known terrorists?

After you step beyond what's humane for the person, the question of what's humane for society looms (and at that point, we have to consider whose society we are talking about). It's obviously more humane for the individual if we capture instead of killing outright, but if that's noticeably worse for society through negative externalities (I'm not trying to assume, just posing the question), then is it better or worse?

pjc50
> database of 7,000 known enemy combatants

The database is a more worrying prospect than the robots, in a lot of ways. Who gets to designate "known" and "enemy"? What if it's the Tinder eigenfaces program? Automatable ethnic cleansing?

Then what of the occupation? Robots aren't great for political legitimacy. Do you have them return fire on the kids throwing rocks at them? Unlike human guards I suppose they can sustain IED losses forever. But they're an A1 prime target for hackers...

yellowapple
> What if it's the Tinder eigenfaces program?

On the other hand, dispatching BigDogs and Spots to go on dates as proxies would be an interesting prospect.

Florin_Andrei
Since WWI / WWII, the fate of wars has been decided by the size and complexity of industrial output rather than the strength and bravery of men.

This just goes further in the same direction.

adventured
Other than industrial output, politics is still (and will always be) one of the largest determining factors.

Politics can take a vastly superior force and render their outcomes impotent on a battlefield.

scotty79
What I'm missing in your vision is a dozen short-range microdrones, with needles covered in neurotoxin, taking off from each of the dogs' backs.
lolpep8
Those would be chemical weapons. Those can't be used, because they are inhumane.
scotty79
Not really. Chemical weapons are applied unselectively and have great potential for use against civilians. I don't think many people would mind guided poison arrows.

But I appreciate your irony. :-)

andybak
I find Churchill's words to be fascinating when read closely: https://en.wikipedia.org/wiki/Alleged_British_use_of_chemica...

The wording is callous and shocking to my modern ears but - if you start with the premise that you are already in a violent conflict (I'm not addressing the issue of colonialism here - just the debate about the comparative ethics of bullets and explosives vs chemicals) then this sentence stands out: "It is sheer affectation to lacerate a man with the poisonous fragment of a bursting shell and to boggle at making his eyes water by means of lachrymatory gas."

We have some ridiculous Hollywood-fed beliefs that bullets and explosions create clean deaths and painless injuries.

btbuildem
Yeah, there should be an equal amount of parallel effort put into developing technologies that would allow us to destroy these machines and permanently disable their subsystems. Today, politicians still have to convince people to commit industrialized murder; tomorrow, the robots will obey orders without delay.
joubert
Google owns Boston Dynamics
izak30
Not that I'm on the dystopian future train, but "Don't be Evil" isn't a Google policy.
Thrymr
Yes, it is: http://investor.google.com/corporate/code-of-conduct.html
Florin_Andrei
I don't know if this is such a huge game changer for war. I mean, we have wheels for solid ground, machines that can fly through the air, boats for water, and hovercraft for ambiguous terrain. These cute dogs are ultimately not that revolutionary.

I am much, much more worried about evil applications of multirotor drones.

Anyway, all of the above is an exercise in futility without decent AI. The robot dogs can run - so what? As long as they're pretty dumb they can't do much damage.

Now, an AI running a dog chassis, or flying a quadcopter, that's an "interesting" thought.

bitwize
Wheels are notoriously poor at navigating rough terrain; that's why we build roads. We could build a vehicle with wheels or treads so huge it effectively flattens anything in its way, and we do -- but for some missions (support for a squad of soldiers on foot, say) it makes sense and is much cheaper to build a small legged vehicle.
BatFastard
Is it possible that these could make roads obsolete? Now that is an interesting concept.
bitwize
Probably not because wheels are VERY efficient at transport across smooth level terrain, and a wheeled vehicle is less easy to destabilize than a legged vehicle. So when deciding between wheels vs. legs, it will be the usual sort of engineering tradeoff: which one offers the greatest advantage for your application?
riggins
Just reading Bing West's One Million Steps, so that is obviously affecting me, but the first thing I thought was that this would be great for saving soldiers from getting blown up by IEDs.
jimmytucson

> Any chance Boston Dynamics will adopt a don't be evil policy?
Actually, Boston Dynamics is a wholly owned subsidiary of Google Inc.
jobu
It is creepy as hell to watch them walk, and that makes me wonder what groups like ISIL would think about being hunted by a small pack of them.

Right now dropping bombs on houses kills some terrorists, but the collateral damage has a side-effect of bringing more to their cause. Would attack robots scare the shit out of them and make them stop, or just backfire and end up as another recruiting tool for fundamentalists?

Jack000
I imagine swarms of quadcopters would be better for that task. Less chance of being blocked by physical barriers, and lower cost means more units. They're so cheap you wouldn't even need rounds; just pack on some explosives and self-destruct when near.
superuser2
... for 3 minutes, until the batteries give out and they all need to be charged for 5-6 hours.

(But actually, what is the battery life on these things? You still have the problem of heavier battery = more drain on battery. I wonder where the break-even point is?)
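The break-even question has a neat answer in a toy model: by momentum theory, hover power grows roughly with total mass to the 3/2 power, while pack energy grows only linearly with battery mass, so endurance peaks and then falls as you add battery. A minimal sketch (all constants here are illustrative assumptions, not specs for any real aircraft):

```python
# Toy endurance model for a multirotor: hover power ~ total_mass^1.5,
# battery energy ~ battery_mass. All constants are assumed for
# illustration, not measured values.

FRAME_KG = 1.0          # airframe + motors + payload (assumed)
ENERGY_WH_PER_KG = 150  # assumed Li-ion pack energy density
POWER_W_PER_KG15 = 120  # assumed hover power coefficient (W per kg^1.5)

def endurance_min(battery_kg):
    """Minutes of hover for a given battery mass."""
    total_kg = FRAME_KG + battery_kg
    energy_wh = ENERGY_WH_PER_KG * battery_kg
    power_w = POWER_W_PER_KG15 * total_kg ** 1.5
    return 60 * energy_wh / power_w

# Scan battery masses to find the break-even point (peak endurance).
masses = [i / 100 for i in range(1, 1001)]
best = max(masses, key=endurance_min)
print(f"best battery mass: {best:.2f} kg "
      f"({endurance_min(best):.1f} min hover)")
```

Under this power law the optimum works out to a battery weighing twice the rest of the airframe; past that, every extra cell costs more hover power than it adds energy.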

toomuchtodo
They're hybrids. Powered by batteries recharged by small ICE/turbine engines.
geon
Whoa, that's dark.

I was just watching this https://www.youtube.com/watch?v=VXJZVZFRFJc for the nth time. It's hilarious.

snissn
http://www.newsweek.com/assange-google-not-what-it-seems-279...
lkbm
I'm generally not a fan of Newsweek, but now I definitely want to read that book. Thanks for posting this.
notatoad
yeah, i've always kind of scoffed at the people saying boston dynamics robots are scary. They're an amazing technical accomplishment and they make me really excited and happy to see them.

but as soon as i saw that shot of two robot dogs side-by-side, moving almost in synchrony, something turned in the pit of my stomach. There's something very dystopian sci-fi about that image.

legohead
Someone sets off a large EMP and all the dogs suddenly freeze, and fall over.
jjoonathan
...and the next version gets an extra $5 of shielding per dog* and so is made completely immune to even an H-bomb EMP.

* at a $5000 markup, but who's counting?

ChuckMcM
Or the dogs are encased in rectifying antennas and they say "yumm! recharge time!" The whole point of weaponizing something is to make it resistant to the more obvious attack strategies :-).
asdkl234890
Any chance Boston Dynamics will adopt a don't be evil policy?

They are owned by Google now. But let's not pretend Google's don't be evil policy makes a big difference these days.

kmfrk
They won't stalk you, if you check the opt-out field.
zachrose
Though of course it will be an opt-in field that's pre-checked and surrounded by opaque verbiage.
rasz_pl
'don't be evil to me, or else'
ChuckMcM
That is a compelling visual. Another one might be that the truck drops off its dogs, which carry first aid supplies and disperse throughout the city looking for victims of some disaster. Human instinct is to go with threat first, friend later; I expect that served us well during our cave years.

Humans have had a standoff attack capability for a while, and yet the number of deaths due to "war" has been going down for a while. (I'll admit that I'm not sure what to call the ISIL thing.) So where does that leave us? In a place where the goal is managing outbreaks with the lowest possible loss of life to non-combatants. And if you believe the narrative that the police are shooting people because they 'fear for officer safety', then you can certainly make the argument that an officer who is on site in a teleoperated way has no personal risk and should therefore not shoot anyone or anything except to save the life or lives of innocent civilians. But we all know that is impractical in a non-war situation.

alexqgb
The terror in the first scenario is not a product of the robots themselves, but the existence of a democratically unrestrained power in full command of the machines.

I hate to sound so single-minded, but this is just one more reason to oppose gerrymandering, closed primaries, restricted access to the polls, private campaign finance, and the revolving door between government regulators and their charges in private industry. Every one of these acts as a wedge between government power and accountability to the people. Individually, they're bad. When they all start working together, the rot really starts to accelerate.

By the time the RoboCops are announcing that you've got 20 seconds to comply, it's because we've lost any way to dislodge their operators.

angersock
Still, the problem with having occupation-style technology and drones is that it further spreads the power of those with capital at the expense of those without. The current class setup prevents truly depraved systemic evil (beyond a point) from occurring, because officers and soldiers are human and will flatly refuse to do things that are Evil.

There is no such guarantee with drones--in fact, to refuse orders is a design flaw.

I can't help but imagine some descendant of these things, in the mode of one of Bradbury's mechanical dogs, attacking a throng of protestors.

icehawk219
In my mind this sentiment can be summed up as: "The more humane we make war, the more of it we'll have". And the simple reason is ... why not? When _we_ no longer suffer casualties, why should we care about sending a pack of mechanical dogs into a village somewhere and just letting them run wild, without a care in the world for the casualties we cause to the innocent? And before saying that won't happen, just look at current drone warfare. There isn't a person in this country (US) who truly cares about all the collateral damage of our drones. It seems foolish to think these will be any different.
jjoonathan
> There isn't a person in this country (US) who truly cares about all the collateral damage of our drones.

I care. As do tens to hundreds of millions of other US citizens. Tell me, what were we supposed to do about it again (that we didn't do)?

jdietrich
March on Washington. Picket the manufacturers. Organise a boycott of related companies. Chain yourself to the gates of drone control bases. Something, anything.

There is no mass movement against drones. Look at protests against nuclear weapons during the Cold War. Large, well-organised groups led a groundswell of opposition. The Greenham Common peace camp was continually occupied for eighteen years until the nuclear weapons based there were removed; the Faslane peace camp is still occupied today. In 1982, a million people gathered in New York to oppose nuclear weapons.

You care, as do millions of Americans, but only to the extent that you don't actually have to do anything.

ChuckMcM
Except that this statement: "There isn't a person in this country (US) who truly cares about all the collateral damage of our drones" is demonstrably false.

I presume you were employing hyperbole, but it is important to note the difference between waging war in Afghanistan using drones versus waging war in Iraq using cluster bombs. The latter case has a much higher non-combatant casualty rate. Further, the more accurate such munitions become, the easier it is for non-combatants to avoid areas where they are likely to be killed or injured.

I have yet to meet anyone in the US military who considers warfare "humane", and while I would not be surprised if such people existed, it has not been my experience that they are in positions of authority.

astine
> Still, the problem with having occupation-style technology and drones is that it further spreads the power of those with capital at the expense of those without.

Are we certain that this is a bad thing? I personally like that it requires the resources of a nation-state to field a major military. The whole 'democratization of war' that happened during the twentieth century is arguably the reason why we see constant, brutal civil war in certain parts of the world. If we could undo the invention of the Kalashnikov, the world would be so much better.

BatFastard
What Kalashnikov designed was powered by inevitability. If he hadn't done it, someone else would have. Just like with the robots, someone somewhere WILL make them. I do fear the ability that these give leaders to wage war with no cost other than money.
soperj
If these had existed in the late 1700s, there's no way the US colonists could have declared independence.
darkmighty
The point stands that as you diminish the number of people needed to make aggressive decisions those scenarios can become more unstable / risky (as in risk of unethical orders). It's the same case as with nuclear weapons. But it's not an insurmountable problem: simply delegating the decision to more and/or better prepared individuals is the way to go.

The dangers and benefits are in fact strikingly analogous to nuclear weapons technology: the power will eventually overwhelm conventional warfare, and major nations would probably only fight proxy wars with robots, stopping immediately when the robotic resources were depleted to prevent assured destruction. It could create the sort of calm that nuclear weapons bring. And precautions must be assigned the same way (multi-person authorization, strict safety, etc.).

Kalium
I submit that the purpose and function of any/all forms of technology is to enhance the abilities of those with it and not those without. This is not a feature of any given flavor of technology, but rather of technology in general.

In any kind of competitive context, this will implicitly elevate those with technology over those without. Generally. Absent an Arthur C. Clarke "Superiority" sort of situation.

ChuckMcM
Radio collars for endangered species?
Kalium
Used to extend the power of people tracking them, usually to the detriment of people who want to do something with that land.
ChuckMcM
I do not disagree, my hope is that, as with other technologies, especially dual use ones, we will be able to keep them biased toward the positive uses.

Robotics, like genetic engineering, nanotechnology, and data mining, can do good things and bad things. It is important to focus on the good things, remain cognizant of the bad things, and maximize the value. I would be sad to see robotic quadrupeds "banned for civilian use" because they can be weaponized, because they can also get to people in need in dangerous and hard-to-reach places.

In a more current events sort of way, I am in favor of severe punishments for people who weaponize drones and fly them in public places, or people who manufacture and sell such drones, but I am not in favor of banning personal ownership of drones. I am willing to risk that someone will show up where I am with such a device, and the risks to my personal safety if they employ it in a deadly way, in exchange for the freedom to own and experiment with drones in a responsible way.

markdown
> we will be able to keep them biased toward the positive uses.

Iron Man II (2010). How you intend to use them is irrelevant if all it takes is one guy on the dev team (or a compromised janitor) to take control of the entire army by dropping in some code.

BrainInAJar
I'm less worried about a rogue staffer than I am about the systemic use by large corporations to further their interests (it wasn't that long ago that corporate-employed thugs were breaking the heads of strikers).
ethbro
> Robotics, like genetic engineering, nano-technology, and data mining can do good things and bad things.

How's that going with arguably the most advanced and widely available of those (data mining)?

To expand: I'd offer that capital tends towards amorality. Why? Because amorality is more profitable.

ChuckMcM
I don't buy that argument. Criminal capital (if such a term exists; it would be defined as capital that is amoral in origin) tends toward profit over morality. To the extent that technology is exploitable with a small amount of capital[1], the likelihood of it being so exploited increases.

We've seen drug lords building submarines to transport drugs up the coast, but the economic cost of really effective submarines is still too high relative to the profit such devices provide. And perhaps more importantly, there is the economic risk of 'losing' a submarine before its lifetime value exceeds its cost to build and deploy.

Things like DNA printers worry me for example much more than robot dogs.

[1] The same cost reductions that make fielding a web server $5/month enable large scale data mining for very little investment in cash.

samatman
Both parent comment and Chuck's reply appear to contain an important semantic error, using amoral to mean immoral. Corporations tend towards amorality, which is a superset of both morality and immorality, because this is more profitable. I believe Chuck is hoping we can keep morality more profitable, on the whole, than immorality. I am less sanguine.
ethbro
Fwiw, I specifically used amoral.

I don't consider most things capital does to be immoral. In an optimally run profit-generating business, the course that produces the most profit will be pursued. That's basically a tautology, and cares nothing about morality at all!

Feb 10, 2015 · 12 points, 0 comments · submitted by suprgeek
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.