HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Marc Raibert: Boston Dynamics | MIT Artificial Intelligence (AI)

Lex Fridman · Youtube · 222 HN points · 1 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Lex Fridman's video "Marc Raibert: Boston Dynamics | MIT Artificial Intelligence (AI)".
Youtube Summary
This is a talk by Marc Raibert for course 6.S099: Artificial General Intelligence. He is the CEO of Boston Dynamics. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.

Note: Due to technical difficulties, we don't have a screencast of the slides, and the video of the slides is low resolution. Despite this, I chose to include several parts of the talk that show slides, especially with videos. It's not optimal, but I hope you learn and enjoy anyway. Thanks for understanding. We're always learning and improving.

Note 2: Marc and I asked people not to release the videos shown around the 20-minute mark before Boston Dynamics officially released them. They have now been released; go check them out: https://www.youtube.com/user/BostonDynamics

OUTLINE:
0:00 - Introduction
1:06 - Slides
24:27 - Demo
33:40 - Q&A

INFO:
Course website: https://agi.mit.edu
Contact: [email protected]
Playlist: http://bit.ly/2EcbaKf

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- AI Podcast: https://lexfridman.com/ai/
- Show your support: https://www.patreon.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Twitter: https://twitter.com/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Slack: https://deep-mit-slack.herokuapp.com

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I am a huge fan of their spartan, no-bullshit, brutalist approach to marketing/PR:

  - No studio setup, just their lab
  - No artificial lighting
  - No fucks given about hiding the mess
  - Robots going through surgery in the background
  - Song is classy af IMO
  - Can see the camera man
  - Can see the engineers/employees
  - Framing/cinematography isn't as polished (this is a feature, not a bug)
  - Chipped paint, beaten up fairings
Generally, I love BD Branding/Marketing/PR:

  - Logo is on point. It's Da Vinci level stuff. Humanist,
    the total opposite of what they build; such an amazing contrast and deeply thought out [1]
  - One or two videos per year
  - No twitter drama like a lot of SV companies (Hey, Elon)
  - No excessive trying to get their word out. Their products speak for themselves
  - No purple/gradient bullshit and design trends
  - Their job descriptions actually make clear what they need from you
  - Their CEO, Marc Raibert, exudes wisdom, calm and has a persona of composure (seriously, watch his interviews) [2]
  - No AI hype
Literally the opposite of Silicon Valley. Maybe I've had too much coffee and am making lofty generalizations, but this is what I see! I also see the DNA of East Coast/MIT/Boston rigor vs Silicon Valley cowboys - thoughts? :-) I live and work in SV btw.

I can't get over how amazing this video is. I hope it takes over the top spot on Youtube. It's got everything - music/tech/memes. The people behind BD marketing are geniuses IMO, although building an explosively-cool tech that appeals to all humans helps a lot. Outstanding.

[1] https://www.bostondynamics.com/

[2] https://www.youtube.com/watch?v=LiNSPRKHyvo

lacker
It's a cool ad, but it doesn't seem to be "spartan / brutalist" marketing to me. The point of the ad is to show you how their technology makes robots dance. But, clearly their real product isn't just dancing robots. They put a lot of work into making this functionality, and all that work is just for marketing!

To me, a real spartan approach to marketing is spending very little money on marketing, and just focusing on your product. That seems like the opposite of what Boston Dynamics is doing, because they put so much work specifically into creating these PR/marketing videos. Many of their robot models don't seem to exist for actual products, they just exist for marketing.

cududa
I don’t think you properly understand “spartan”/brutalist. The video uses the materials in their most outrageous but still possible form to demonstrate their flexibility, executed in an imposing way. How’s that not brutalist?
lacker
But so much of the video is unnecessary flourishes, lots of excessive effort put in to make it a cool video, rather than focusing on the robot itself. It seems more baroque than brutalist. I'm sure it's a matter of personal aesthetic judgment. But I would consider robot videos like this one to be "spartan":

https://www.youtube.com/watch?v=dZMtKQ-R9II&feature=emb_logo

Go 15-30 seconds in. The video is obviously just showing an industrial robot at work. They didn't build something fancy just to make a viral video. It's just focusing on the components of the robot that do something useful.

avmich
> Many of their robot models don't seem to exist for actual products, they just exist for marketing.

Wow :) I was under the impression that, while Atlas is a flagship model with broad purposes, the other two are robots optimized for transport (and some other miscellaneous functions) in specific applications.

Why do you think they couldn't find use as products?

high_byte
I think most of their robots are more research than product, except maybe the most recent models like Spot. It's first and foremost amazing engineering for the sake of engineering.
2muchcoffeeman
BD is basic research. Figuring out what can be done and what the potential applications are. I’m surprised BD has gone through so many sales. Whoever owns them in the end is going to make a ton of money I reckon.
brmgb
In the case of BD, the question seems to be what exactly is the end. Making walking robots is an incredible feat of engineering, but one that doesn't seem to solve an actual industrial problem.

For a lot of uses, as a means of locomotion, they are fighting not only tracks but also flight, now that multicopters are ubiquitous. Is there still a market for them? Cool research anyway.

2muchcoffeeman
Robotics already has plenty of applications. Travelling over dangerous terrain for mapping, search and rescue, automation of manufacturing where dexterity is required, lifting and manoeuvring heavy loads. Those few applications alone cover a lot of industries.

But you might also learn about how to build prosthetics, augment human abilities with exoskeletons. Who knows.

These guys will figure out all the obvious stuff and perfect it. And then see how far they can push it.

neilpanchal
One thing that's different is that BD doesn't have the pressure to sell to the masses, and they've been traded from one giant to another (Google, SoftBank and now Hyundai). I think they can afford to be spartan and truthful; I suspect a lot of big-corp marketing is based on metrics and data, which drives you to do unspartan things. My guess anyway; perhaps others can shed light.
TaylorAlexander
It’s all very cool looking.

However when I watched this video my first thought was “my god they’re going to bring the drone wars to the ground.” I am a robotics engineer and have been focused on robotics my whole life. I worked at Google X on a robotics team with some great ex-Boston Dynamics people, and some of my friends have since gone to work for them.

So I do understand what they have achieved. It’s fantastic from a technological perspective. And I know they have non military funders now. There’s a great many labor applications for machines like this if they can sell them for under half a million dollars and get them to do real work. I understand all that.

But I fear that if the US military orders some, the company would happily oblige. Maybe they’ve committed to only peaceful, non-military use. But these extremely agile and capable machines seem to me like some military strategist’s dream come true. So when I watch all of this achievement from a company that for so long survived off of military funding, I am deeply saddened by the implications.

Delivery robots and warehouse workers would be great. But it seems like in ten or twenty years’ time these will be another extension of the US global killing machine. I dream of a world where we can use robotics to make the world better, and the potential military use here leaves me uneasy. I cannot celebrate what I fear will be the dawn of ever more inhuman military conflict.

pjc50
I was under the impression that the military were the main funders and intended clients; the robo mule in particular for offloading some of the increasingly heavy tech and supplies for infantry.

What happens in 20+ years when surplus ones are handed out like MRAPs to careless police departments?

TaylorAlexander
The military has been the main funder, but as of late they've been purchased by non-military groups and have suggested their products will have use in warehousing and package delivery. I am being optimistic in hoping they wouldn't see military use, but I think it's worth discussing in any case.

> What happens in 20+ years when surplus ones are handed out like MRAPs to careless police departments?

Good question indeed.

newsbinator
This was my first thought as well. Watching them dance, I was thinking, non-jokingly “if they can do this now, what happens when these robots are dancing faster than the human eye can see?”
EvanAnderson
Makes me think of scaling-up the techniques in this old video on a larger platform: https://www.youtube.com/watch?v=-KxjVlaLBmk
TeMPOraL
What happens if the robots can dance faster than human eye can see and can see your eyes in detail? Invisible robots - exploiting saccades to move around unnoticed.

See also: Blindsight, by Peter Watts.

nopzor
The Rat Thing has stopped. Which they never do. It's part of their mystery that you never get to see them, they move so fast. No one knows what they look like.
howlgarnish
(Snow Crash, by Neal Stephenson)
atmosx
I don’t have anything to do with robotics and this was my first thought: fear. I am afraid these will be deployed as war machines to kill people instead of being deployed to save people. I wish to be wrong.
fermienrico
It's weird to pick one point in the abstraction chain of human technological progress and create a threshold of unethical behavior.

Technological progress will guarantee this scenario of military use. If not BD, it will be another company. If not now, it will be later. If not the US, it will be someone else - China, Russia, etc.

I don't see any point in feeling sad about just some geeks making robots, any more than we became sad when we saw the transistor being invented. The future is in our hands, and tech has always been at the center of military warfare. The focus should be on government policy, electing good leaders and engaging in diplomacy over warfare.

kristopolous
ah the " If not us, it will be someone else " argument.

Let's replace that with other things to show how incorrect this logic is.

"If we don't enslave these people, someone else will"

"If we don't bomb these schools, by golly, someone else will!"

The potential hypothetical existence of others acting immorally simply isn't a valid ethical argument to act immorally.

In the same way the response is "how about if instead nobody bombs the hospital" the response here is "how about if instead nobody builds armies of killer robots"

"no death cyborgs" sounds like a pretty easy ask for humanity. This should be well within reach.

fermienrico
That’s not even a remotely close analogy, no offense meant. Robots can be used for many good purposes. Intel makes processors; they can be riding on a missile or used in a CT scanner. Bombing schools is purely horrific; how do you come up with this?

I kind of wish we would just enjoy this stupid video of robots dancing. It’s getting tiring to fend off the AI-doomsday crowd.

kristopolous
The statement I responded to specifically focused on military application

"Technological progress will gaurantee this scenario of >> military use << . If not BD, it will be another company. If not now, it will. >> If not the US, it will be someone else << "

This argument is

"Technological progress will (always) lead to military use. If not us, it will be someone else"

or:

"The future is guaranteed to be full of war because of decisions made by humans that they somehow have no agency over. Because of this false premise, we need to be eagerly building weapons as fast as possible"

It's a classic argument and it stands up to no scrutiny whatsoever.

So instead the standard response is to generalize it through a deflection: "this is just general forward motion progress" which is exactly what happened.

That's not the point. It's the idea that the fundamental inescapable nature of humans is to be as violent and brutal as possible - which is not true - it's a choice, an act of agency, a matter of policy, it's a decision that is freely made.

Just getting to that simple realization, that barbaric brutal self-destruction takes choices, planning and intentional action that we can simply just choose not to do, would be a groundbreaking epiphany to most.

I'm pretty confident I'm going to die without murdering anyone, just like almost every human ever. It's not natural in the slightest.

atmosx
“I am become death, the destroyer of worlds.“ - J. Robert Oppenheimer
TaylorAlexander
I think it’s important to talk about the concerns related to military robots well before they reach deployment so the public is primed to criticize leaders who would claim their use will be justified. When should we talk about it if not on the release of amazing new capabilities sure to catch the eye of military officials?
Kiro
> I hope it takes over the top spot on Youtube.

It won't because for non-techies it's just a random video without any significance. I tried showing it to my girlfriend twice and she was extremely unimpressed and uninterested.

skinkestek
Just forwarded it to my wife, my daughters, my brothers and brothers in law.

My guess is a few of them will like it.

pierrec
>It won't

Well, looks like it did.

high_byte
Sorry for your failed relationship and also I disagree because their backflip video was a hit.
Nicksil
> It won't because for non-techies it's just a random video without any significance.

What a bizarre thing to say. Surely "non-techies" are able to extract some value from the video.

> I tried showing it to my girlfriend twice and she was extremely unimpressed and uninterested.

Oh, well, if your girlfriend was unimpressed and uninterested, we'll just extrapolate from that and presume the same for everyone else (at least everyone else's girlfriend).

mkl
>> It won't because for non-techies it's just a random video without any significance.

> What a bizarre thing to say. Surely "non-techies" are able to extract some value from the video.

Movies and TV have stolen most of that value and wonder by showing fake robots moving like this for years. My thought watching the video was "this looks like CGI", not because I think it is CGI (or the lighting or background or anything), but because the only robots I have seen move that fluidly have been fake computer animated ones. Because I believe it's not CGI, I was amazed, but that takes a little domain knowledge.

I suspect a lot of people already think robots moving this well are fairly normal, and not impressive.

the_cat_kittles
This is maybe a mean-sounding comment, but it's what I would have appreciated hearing in response to my own thinking along your lines: you sound like a total mark. I have this single comment to go on, so obviously I'm extrapolating. But don't let a company's marketing ever fool you into thinking you know anything about them.
cma
They had pretty heavy studio lighting in this:

https://youtu.be/wlkCQXHEgjA

platelets2020
Heavy studio lighting? That's very clearly CGI.

I bet you believe all the vehicles racing around in automobile TV ads are real too.

gamblor956
They are real cars. You can watch them filming those commercials all the time in LA.
cma
Whether the Boston Dynamics video I linked is CG is irrelevant to my point (either way, it still isn't the raw production OP was praising), but there are plenty of car commercials using CG:

https://www.newsweek.com/2017/02/17/blackbird-can-be-any-car...

kortilla
All of this is trivially explained by the fact that they are not a consumer oriented company so they don’t have to sell hype. Their customers aren’t average people chasing trends and reading Twitter. Nearly every b2b business markets in a similar way.

When you’re competing in a vicious field for consumer mindshare it leads to the marketing and self aggrandizing because you’re literally trying to capture the attention of stupid people.

high_byte
And maybe even more because they didn't even have customers at all until recently. In fact nobody wanted to even buy their business. Who's laughing now? (probably the robots)
neilpanchal
100%. Metrics/data-driven marketing for the masses leads to a lot of unspartan marketing. I can think of a few examples of brutalist marketing/branding - Berkshire Hathaway (https://www.berkshirehathaway.com/) is one, although their product is abstract - financial services. Any others you can cite?

Edit: MIT branding is pretty cool too: https://web.mit.edu/ and always has been, although this is B2C.

uniqueid

  Framing/cinematography isn't as polished (this 
  is a feature, not a bug)
Cut to the person who slaved over this shoot, banging their head against their desk.
high_byte
Thank you very much, love this comment. Watching Marc right now!
kostarelo
It doesn’t look like they’re marketing their brand to me, more like they’re having fun.
platelets2020
Impressive rant, but you realize this ad is CGI right? The opposite of everything you're praising.

Or do you believe the cars in car ads are real too?

[0] https://www.businessinsider.com/cinematographer-reveals-one-...

gamblor956
You understand that CGI is only used for the very high end cars?

That comes from the article you cited...

neilpanchal
This is the best compliment for BD.
avmich
Yeah, flattering and also annoying and sad. Marvin (HHGG) definitely had a point; we might see that rather soon.
tdaltonc
You are the target audience for this marketing. It was made for you.

This is recruitment marketing. The goal isn't to sell the robots, it's to sell the experience of working on the robots, so that they have basically every nerd on earth lusting to work there.

devchix
> The goal isn't to sell the robots

Their website has a tab called "Shop". I clicked, thinking one could buy t-shirts, perhaps a Spot plushie, but for $75K and change you can get your own robot dog. The "Buy Now" button is kinda crazy, like some user could just happen on the site, add to cart, and in a week have this delivered in a box. It's only a smidge more than the Mac Pro.

kortex
My first thought was, "it sure would be neat to work for BD".

Even though I'm pretty sure engineers have been driven to tears trying to get a robot to do the mashed potato. Worth it.

Nerd lust: confirmed.

Apr 02, 2018 · 222 points, 56 comments · submitted by AlanTuring
suyash
Thanks for sharing this wonderful lecture. I would like to know why this guy's mission is to have Robotic Intelligence >= Human Intelligence. He did not divulge his reason behind that. Any guesses?
dwc
Pretty much every conception of AGI is of greater than human intelligence, at least in some aspects.

The reason? Otherwise why build it? Yes, there are cases where AGI with subhuman-level intelligence would have utility, but the remaining cases are far more numerous and have far more utility.

fossuser
Not sure why you're being downvoted for bringing up a valid point. Though I'd argue the reason is more that stopping at human level is a hard optimization to hit.

Once there's a way to train a general intelligence it's likely hard to contain the improvement - that's why the goal alignment problem is important. Even the current human architecture running at a billion operations per second instead of its current 100 or so would be a problem.

Human general intelligence is constrained by things like power consumption available via eating or needing a head that can fit out of a birth canal - AGI constraints are a lot less limiting and AGI can improve faster.

goatlover
Also by all the other human beings placing limits on what any individual, however smart, can do (i.e. taking over the world). A superhuman AI is going to come into a world full of existing intelligences, some of them machines which may be close to superintelligent already, along with the billions of humans and all their organizations.
fossuser
I think the existing intelligences and organizations won't matter, the super intelligence will be a step change like the difference between a human and an ant. If the goals are not aligned up front it'll be a problem.

To put it into perspective if you can run the human architecture on an AGI at a billion operations per second it's like compressing a historical human civilization of learning into a couple hours.

Maybe for some reason this learning process will be slow or maybe there's some reason why it can't scale up quickly, but based on the learning speed in narrow areas that seems unlikely.

There's also the fact that natural selection, which is a relatively brute-force sexual-selection mechanism, still results in general problem-solving brains being everywhere - it doesn't seem like something particularly rare.

goatlover
> run the human architecture on an AGI at a billion operations per second

Maybe a single human, but can you run an entire world civilization and its environment (basically Earth) in a sped-up manner? Because one super-fast human-level intelligence is still constrained in a way that all of civilization is not. What is one human sped up a billion times? Is that 1 billion humans? We already have several of those units operating around the clock. And those units have access to the world's resources, whereas one sped-up mind will not, unless it can gain control of the world.

fossuser
I said human architecture - so a human-like ability to solve general problems, sped up, without the other human constraints (like needing to eat) and focused on solving a problem. It isn't like a billion independent humans just doing random things - maybe if they could all focus and coordinate on a single goal, but the communication overhead from that would still make it different, in addition to the other biological constraints.

You may not need that much access to the world to learn/infer a lot about it [1] and if a superintelligent AI needed access to more in order to achieve its goal it'd probably be able to get it.

[1] https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

gowld
There are multiple dimensions to scaling. A large number of subhuman-intelligent agents can beat a small number of superhuman-intelligences (and vice versa, depending on specifics), and certainly beats not building anything.
goatlover
> but the remaining cases are far more numerous and have far more utility

But why do we want to replace ourselves? Do we just want the machines to do everything for us? Take care of us like we're pets?

None
None
Daemon_Eater
Different people have different priorities; some subset of people must desire to be taken care of like pets. That said, the relationship does not have to have a master/pet dynamic to it. There are alternatives where ASI takes specific roles away from humanity (like various governmental, administrative, or legal roles), especially those roles where humans consistently fall short and failure has very negative consequences. The driving motivations for ASIs will determine what kind of relationship we have with them. They may all have the potential to lead to mass subjugation, but that isn't set in stone; we could end up as partners in a shared future.

I for one hope they can fortify my deep solipsism with more detail, my own personal paradise Matrix. Or maybe they can just take care of me while I waste away on drugs, ensuring no negative externalities.

blhack
There is a value to human life beyond just "how smart you are".

>Otherwise why build it?

Because it would be better to send robots into dangerous situations than it is to send human beings.

dwc
This is why I mention that there are specific cases for subhuman levels of intelligence.
derefr
A target of exactly-human intelligence is potentially useful for automating All The Jobs and entering a post-scarcity era without also starting an AI singularity.

Though many approaches to “exactly-human intelligence” would just result in humans that happen to be robots, and we probably don’t want that, especially if the point is to make them do all our work for us.

I don’t know how to refer specifically to the alternative, though—an intelligence that can solve “human-hard” problems, without needing anything like sentience or self-determined goals/desires that would push it toward wanting things orthogonal to its designed purpose. Human-adjacent AGI? Human-competitive Tool AI?

dwc
Much of your response seems to orbit around the utility function, which is the devilishly tricky part from the start. Getting that part right is much more important than a specific level of intelligence.
kowdermeister
Possibly to make the world a better place.
goatlover
For the shareholders.
aaron695
This is actually a really good question.

It's a good flag of spin.

It seems absolutely unnecessary until way, way after robots have revolutionized society. Without an expert commenting, I'd say he doesn't get it or is just selling a product to naive consumers.

aje403
What's there to question? You're making assumptions by even asking about his motives. Are you implying he should not be doing this, ethically, or something? Von Neumann responded to a quote from Oppenheimer lamenting the invention of the bomb: "Some people confess to the sin to take credit for it"
rogeroga
That is a great quote. Indeed, that happens at all levels; it sounds like a cover-up for false modesty.
mhb
It's humblebrag's cousin - the shamebrag.
joe_the_user
I skimmed the lecture after the first ten minutes and, as far as I can tell, while it seemed like it would be interesting to someone interested in what Boston Dynamics was doing, it doesn't really say anything about AGI beyond "we eventually want to do AGI and we think doing robot stuff well will get us there." If there's something I missed, could someone say?

Since this is part of an MIT lecture series on AGI, I think the GP is sort-of justified in also asking where that part is.

lopmotr
It's excellent that they've got large scale practical commercial applications in mind, rather than the niche search and rescue which every university robot researcher seems to talk about as their imagined application.
emilga
Very cool lecture!

It would be interesting to hear him speak about the pros and cons of different body plans. Are three or six legged robots useful, compared to four legged ones? What about multi-armed humanoid robots?

zaroth
Someone asked that question and he kinda punted on the answer. He said humanoid robots are theoretically useful for operating in spaces designed for humans, and also get orders of magnitude better responses from the public, but spreading out the motors and balancing tech in a quadruped gives much more space to work in.

See: T=46min

mooneater
Interesting:

- mentions military application without further comment, then dwells on elder care

- 21:05 robotics platform like "the android of robotics"

- 38:50 struggling with safety

- 48:00 "Im sure we will use learning before too long" Im surprised they arent using a lot of machine learning here.

CamperBob2
The video also pairs well with the penultimate Black Mirror episode in the latest season (S4E5, "Metalhead").
dirtshell
For the vast majority of robotics and controls, traditional methods often have the best (implementation + research)/time value. Using any kind of machine learning usually means a lot of research and implementation time for widely varying results. Maybe if you had some ML specialists who could accelerate implementation and direct you on the right path, ML approaches could become more popular.

The simplest and most practical use of ML is in CV. I'd be surprised if they weren't doing some of that.

Iv
They aren't. You see QR codes everywhere. I feel they are at the point I was at before going into deep learning: they recognized the buzz pattern of an overhyped tech and won't believe it provides immediate gains until they try.
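(For context on the classical, non-learned approach mentioned here: fiducials like QR tags can be detected and decoded with a few lines of off-the-shelf OpenCV, with no training involved. A minimal sketch; the camera index and print-out are purely illustrative and say nothing about BD's actual stack:)

    # Minimal sketch of classical (non-ML) fiducial detection with OpenCV's QR detector.
    # Requires opencv-python; camera index 0 is an assumption for illustration.
    import cv2

    detector = cv2.QRCodeDetector()
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # detectAndDecode returns (payload, corner points, rectified code image).
        # The four corner points are enough to estimate the tag's pose in the frame.
        payload, points, _ = detector.detectAndDecode(frame)
        if points is not None and payload:
            print("tag:", payload, "corners:", points.reshape(-1, 2).tolist())

    cap.release()
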
rojobuffalo
When you specify that the robot should open the door, and then a human interferes, how do you ensure the robot doesn't yank the door abruptly to knock the human back or swing its arm into the human to clear that space?
None
None
fladrif
Asimov's three rules of robotics? Are those being implemented?
zaroth
Marc actually discusses how safety is extremely challenging, e.g. people assume you can just have the robot freeze, but often freezing can cause more harm.

So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

See: T=37:50

derefr
> people assume you can just have the robot freeze, but often freezing can cause more harm.

Is this true of humans as well, or is it something unique to robots with their hard shells and hydraulics?

In other words, would freezing be a safe fallback strategy for a soft robot?

candiodari
This is a concept in robotics. You have "static stable" robots, where you can just slam on the emergency brakes, cut power and expect that nothing will break or die or kill after that point. Things like gantries are good examples, or 3d printers, or most factory robots, elevators, ...

You also have "dynamic stable" systems. They're systems that only remain stable if they're under control, which means that the software can never abort, as that can cause a disaster. A good example is a plane autopilot, or a car autopilot. If the software encounters a critical error, shutting down probably results in more damage than just giving (hopefully slightly) invalid output, in some cases while switching to an extended emergency stop process (like in a car) or attempting to proceed normally despite failures in the control system (planes). Systems are expected to simply do the best they can when they fail.

There's a special kind of dynamic stable system: systems that, once stopped, can't be restarted at all, ever, and therefore there simply is no "good state" at all, and you can't even go through a lengthy shutdown process and restart. Essentially, systems that self-destruct when the software decides to abort. Rockets, some types of reactors and quite a few chemical robots fall under this category, as does of course any living being.

Of course all interesting systems are dynamic stable. Anything that has to balance, anything that moves faster than a certain function of its weight, anything that has active components independent of the control (like any reactor), anything that builds up momentum (like a crane, or most moving platforms), factory lines, ...

As a rule dynamic stable systems are much harder to control, as at every moment you must have a plan ready to get to a safe state, instead of just taking into account what you want to achieve.

Humans are dynamic stable systems. Inside a human there are many layers of control, none of which can be safely turned off, some of which are so critical that if they fail for just a few seconds the human dies. Also, humans tend to stand up; looking through PubMed you can easily find just how badly a human body can be damaged by simply collapsing on the spot. A human body, however, cannot deal with extended loss of control even when safely lying down.

Most of the control functions in the human body don't actually happen in the brain, but within the body proper ... In some cases it's a neural circuit directly attached to muscle tissue (famously the heart has a big one, but almost every muscle and many glands have something), sometimes it's portions of the neural system that can work independently of the rest, where the control is circuits at points where nerves meet (the lungs and the womb are examples. A human body can give birth successfully decapitated, probably even without a spinal cord, and a human with a severed vagus nerve will keep breathing for quite a while, although not indefinitely), and some of it is neural circuits connected to glands that work through chemical messages (the hypophysis, the adrenal glands). Beyond that there are many more layers: the spinal cord, the cortex and finally the neocortex (which itself is subdivided into nearly 60 parts that are more or less separate).
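
(A toy sketch of the static/dynamic distinction described above, as it shows up in an emergency-stop handler. All class and method names here are hypothetical, not any real robot's API: a statically stable machine can simply cut power, while a dynamically stable one has to stay under control until it reaches a safe posture.)

    # Toy sketch of "static stable" vs "dynamic stable" e-stop behaviour.
    # All classes and methods are hypothetical; no real robot API is implied.
    from enum import Enum, auto

    class Stability(Enum):
        STATIC = auto()    # gantry, 3D printer: cutting power leaves it in a safe state
        DYNAMIC = auto()   # legged robot, autopilot: only stable while under control

    def emergency_stop(robot):
        if robot.stability is Stability.STATIC:
            robot.cut_power()  # freezing in place is already safe
        else:
            # A dynamically stable system has no safe "off" state; the controller
            # must keep running and drive the machine to a safe posture first.
            while not robot.in_safe_posture():
                robot.step_controlled_stop()  # e.g. crouch, bleed off momentum
            robot.cut_power()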

namelost
It's true of everything. If you freeze when you are not in equilibrium, you will... fall over. Maintaining equilibrium is an active process.
tintor
That is why you build robots with the size and strength of a child first, to limit the amount of damage they can do, and to allow a human adult to overpower them if necessary.
sgc
Unfortunately that is not financially rewarding enough and will be skipped for higher risk applications. But I think you are spot on.
erikpukinskis
I bet it’s pretty doable to program a robot to detect a yelp and then pull back 20%.

Many human reflexes are pretty simple heuristics.

Let them play with other baby robots and human trainers and they will learn. Same as animal babies.

Dogs are quite dangerous, but they learn how to not hurt other creatures.
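
(For what it's worth, a reflex like the one suggested here really can be a few lines of control logic. A toy sketch follows; the microphone threshold and the 20% back-off are made-up numbers, purely to illustrate the comment's idea:)

    # Toy sketch of a "yelp reflex": ease off the commanded force when a loud cry is heard.
    # The threshold and the 20% back-off are illustrative assumptions, not real tuning.
    YELP_DB_THRESHOLD = 80.0  # sound level (dB) treated as a possible cry of pain

    def reflex_step(mic_level_db: float, commanded_force: float) -> float:
        """Return the force to actually apply on this control cycle."""
        if mic_level_db > YELP_DB_THRESHOLD:
            return commanded_force * 0.8  # pull back 20%, as the comment suggests
        return commanded_force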

candiodari
That depends.

If you say "at least as good as a human" then it's challenging. (by which I mean it's an ethics problem. Uber has ... ethical issues, but Tesla is much better already, and so far I'd say Google seems to have it covered)

If you say "perfectly safe, I don't care about humans" then it's impossible.

> So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

This is an example of the "absolute safety" standard. By that standard, if you were fair, you wouldn't let humans near each other. You certainly wouldn't let 2 humans lift a stretcher and run with it, and yet we do that all the time, live on public television.

https://www.youtube.com/watch?v=CPbdnafO93c

That's reality, not an abstract standard. That's what humans find acceptable behavior done to other humans who likely have a fractured bone: throwing them onto a field from half a meter high (and on occasion, onto concrete or the metal of an ambulance. With the broken bone first if you're truly out of luck. Ouch). As long as it's not done on purpose ... it's fine.

Over 1000 humans are killed yearly in the US just because other humans can't wait to sober up before driving. That's how much humans really care about safety. How much humans imagine/pretend they care about safety ...

arde
Stephen Wolfram says they're impossible: https://www.youtube.com/watch?v=P7kX7BuHSFI&t=4387
jlebrech
I'm sure Darwin's law also applies.
TeMPOraL
The whole point of Asimov's three rules of robotics was to discuss how they make no sense in the real world, as the problem domain is too hard to be reduced like that.
GenericsMotors
Yep.

If you follow his Robots series right through to the end of the Foundation series, the theme is how robots R. Giskard and R. Daneel "evolved" to not follow a strict and rigid interpretation of the 3 laws, instead taking actions that are of long-term benefit to mankind even if that might mean short-term harm to a few individuals.

You also have the Solarians, who'd simply ended up programming their robots to only recognize Solarians as human; so if you're an Earthling or Auroran a robot will have no qualms about murdering you. That was pointing out the biggest flaw to the whole concept of the 3 laws: even in the Robots/Foundation universe where they have been implemented with great deal of success, they're only as reliable as those who define them.

lainga
Give all the humans robot arms. Rules of nature, Jack!
None
None
soheil
> Unfortunately, the above is the best available resolution of slides. Thanks for understanding. We're always learning and improving.

Hmm, something doesn't sit right with me on this one. It's not just slides, it's videos too. It seems like they're apologizing for a mistake they might have made, as opposed to having intentionally downscaled the videos. Is it just a coincidence that the first time some of these videos are shown (not all of them), they happen to be extremely low-res?

None
None
mark_l_watson
I recommend the entire MIT course on AGI http://AGI.mit.edu

Great guest lecturers.

xchip
Nice tiled floor, I guess that helps a bit with SLAM (just being jealous, great job!! :) )
Areading314
1/10

was hoping for the speaker to reveal himself as a robot at the end

iandanforth
The best thing about Boston Dynamics is that their robots don't work. Maybe SpotMini will, but their track record is terrible and I'm thankful for it. I get the distinct impression that these self-described 'badboys' would happily hand over a fully functional android to the military, complete with gun mounts. There are a lot of great robotics companies to work for that are also deeply principled about doing good; this is not one of them.
richard___
Examples of such companies?
abrichr
Kindred.ai
lesss365
Weren't they getting funds from DARPA?
jlebrech
what's evil about the military?
GenericsMotors
There's definitely something evil (extremely unethical?) about handing the decision/burden to take someone's life to machines.

Also, I don't think your remark is a fair representation of the parent comment.

iandanforth
No no, they're right, the US military is evil. Your statement is also true though.
eli_gottlieb
Please see: past 15 years of nonstop war and corruption that hasn't done any good for anyone.
wmeredith
“Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some fifty miles of concrete pavement. We pay for a single fighter with a half-million bushels of wheat. We pay for a single destroyer with new homes that could have housed more than 8,000 people. . . . This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron”

-Dwight D. Eisenhower

https://en.m.wikipedia.org/wiki/Chance_for_Peace_speech

thematt
Why do you say they don't work? The videos seem impressive.
hitekker
The GP works at an indirect competitor to Boston Dynamics.

Since the invective lacks evidence, we can assume it is based upon resentment and/or jealousy, i.e., “how dare their unweaponized, prototype tech be light-years ahead of our own tech! They’re unprincipled!”

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.