HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Google Self-Driving Car Project | SXSW Interactive 2016

SXSW · Youtube · 7 HN points · 22 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention SXSW's video "Google Self-Driving Car Project | SXSW Interactive 2016".
Youtube Summary
Everyone's talking about self-driving cars these days, but how can you differentiate between hype and reality? In the six years of Google's project, its vehicles have self-driven over 1.3 million miles, racking up the equivalent of 90 years of human driving experience. Google says its cars can now handle the vast majority of everyday situations they find on the roads, but what does the path to a driverless future look like? How could self-driving cars be both three and 30 years away?

Urmson shares his stories from the front lines of building the world's first fully self-driving cars.

About SXSW:
Started in 1987, South by Southwest (SXSW) is a set of film, interactive, and music festivals and conferences that take place each year in mid-March in Austin, Texas. SXSW's original goal was to create an event that would act as a tool for creative people and the companies they work with to develop their careers, bringing together people from a wide area to meet and share ideas. That continues to be the goal today, whether in music, film, or interactive technologies.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
That's Chris Urmson's talk at SXSW in 2016 [1], which has been widely viewed. It's one of the best talks on the subject. He said "How quickly can we get this into people's hands? If you read the papers, you see maybe it's three years, maybe it's thirty years. And I am here to tell you that honestly, it's a bit of both." Then he showed video of Google cars dealing with various challenging conditions. This included the famous scene where a Google self-driving car encountered a woman in a powered wheelchair using a broom to chase a duck. The software recognized this as something to avoid, even though it didn't classify it further.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

GeekyBear
The challenges Google thinks will take decades to overcome are those posed by streets that aren't new, extra wide, well marked, and low-traffic, in an area that almost always has excellent weather.

Basically, they have a solution that they are willing to put into limited testing in certain parts of Arizona, just as Apple has decided to do limited testing with autonomous employee shuttle vans between their campuses.

So Google has moved from "this is our moonshot program that we will solve in a couple of years" to "this will take decades of incremental progress".

Given that Google and Apple have both come to the conclusion that this is a long-range incremental software problem, and given that you say that this represents an enormous failure on the part of Apple, how is this not also an enormous failure on the part of Google?

> Why not the other way around? You do the driving, and the computer is the backup in case you mess up?

Unfortunately, having the computer take over has the same issues. If it thinks you're following the wrong lane, it could choose to steer you into a barrier, for example.

Chris Urmson's team at Google ran user tests at the start of their autonomous vehicle program and found that users misbehave [1] when behind the wheel of semi-autonomous vehicles, despite having been given instruction to pay attention. We're seeing that scenario play out in slow motion in Teslas and perhaps other driver-assistance programs that get less press.

[1] https://youtu.be/Uj-rK8V-rik?t=14m7s

They did publicize videos of their cars handling abnormal events. See https://youtu.be/Uj-rK8V-rik?t=25m.
SlowRobotAhead
Their programming for "unexpected" situations is not quite what is being talked about, though.

The above scenarios are great, but I like the one Mercedes got in a little trouble for: "Is the car's priority your safety, or other people's? You bought the expensive car; it should protect you first." It gets into a trolley problem pretty quickly.

sah2ed
The trolley problem is an interesting dilemma, TIL.

https://en.wikipedia.org/wiki/Trolley_problem

Got any links on the Mercedes issue?

SlowRobotAhead
I don't. Basically someone high up in that division made the claim in some venue that of course your $80,000 car is going to put your security first, and it was a bit of an issue. I don't remember if it was an internal Daimler event or a media snafu.

There is a logical answer though. He’s right. Each car is going to look out for itself primarily. They have to. No car is going to have a “sacrifice” mode.

Mar 04, 2018 · 2 points, 0 comments · submitted by KKKKkkkk1
Link to actual report.[1] (When you're posting a link to something important, post the link to the real document, not some pundit blithering about it. Yes, you may have to do some digging to find it when the pundit outlet doesn't provide a link.)

There's not much new here if you've been following what Waymo is doing. Chris Urmson's SXSW talk from 2016 [2] covered most of this material in more detail. This is more of a PR-level version of that content. The new material covers mainly the limitations Waymo is imposing on their system. It works only where they have very detailed maps of the road system, with more information than even StreetView. This is conservative, but not a bad decision for initial release.

They have a button to call customer service. (That's so unlike Google.)

[1] https://storage.googleapis.com/sdc-prod/v1/safety-report/way... [2] https://youtu.be/Uj-rK8V-rik?t=6

whyenot
OP linked to a news article that presented a pretty good summary of the report along with some additional context. It was not "some pundit blithering about it."

I don't think there is anything wrong with linking to a news article like this that is published by a reputable news source such as Ars Technica. Not everyone is going to have the time to read a 43 page report, especially if it's not clear why they should be reading it.

cs2818
It would be nice if Ars Technica would provide a link to their source material (unless I'm just missing it).

I don't understand why so many news sources skip out on providing references.

Animats
To make web sites "sticky", so people don't go elsewhere.

That doesn't apply to YC, so when posting here, please give the link to the original source. Sometimes it's necessary to dig through three levels of blog to get to the source. (This seems to be especially true of articles about battery chemistry and surface chemistry ("nanotechnology").)

Can you give an actual example? Because not being able to identify something is not necessarily an issue, as long as the car notices that something is there and that it should not hit it.

EX: I am sure the car had no idea what this was: https://youtu.be/Uj-rK8V-rik?t=26m11s but as long as it can tell it's bigger than a bread box and so should not be hit, that's enough.

notahacker
The obvious problem scenario is when unpredictable evasive action puts an autonomous vehicle into the path of other drivers (with human response times). That's when very sharp responses to an uncategorised "obstacle" that's actually a drifting plastic bag or a reflection cause more problems than they solve.

Similarly, instantaneous harsh braking might help an AI save the small child it didn't anticipate might chase the football that flew past moments earlier, but a human capable of grasping that footballs are associated with pedestrians making rash decisions might have braked early and gently enough to not get bumped by the car behind. (If they didn't, they might find their late reaction blamed for the accident and possibly even get prosecuted for driving without due care and attention).

The UK requires every learner driver to sit an exam consisting of identifying CGI "developing hazards", scored on their ability to rapidly identify things that might happen, before they're allowed to take the full driving test. I'm sure a key focus of the teams the article discusses is teaching AI similar cases, like gently slowing when a football-shaped object moves near the road (which is likely far from the most difficult or obscure novelty to teach an AI to handle). But the problem space of novelties humans handle by understanding what things might be and how/if they are likely to move isn't small, and there's no good reason to believe it plays to AI's strengths.

(Meta: not sure why you're being downvoted, your contribution seems constructive and on topic to me)

Retric
A bag* or child running into a street is not an unusual event, and a car is not going to 'evade' into another car. Rare events are the things people don't see across multiple human lifetimes, not just something you don't see every month.

Which IMO is what's missing from the debate: unusual events show up in the ~10+ million miles of training data gathered before these things are in production. They are clearly out there, but I doubt people are going to react well to, say, someone falling from an overpass onto the road either. So it's that narrow band of events that are really odd, but that a person would respond to correctly, that's the 'problem'.

PS: Of course the bag might relate to a bug, which is likely. But IMO that's a completely different topic.

notahacker
> A bag* or child running into a street is not an usual event, also a car is not going to 'evade' into another car

Opting to drive around a (stationary, visible from a distance) bag in an unpredictable manner is literally how Waymo's first "at fault" accident occurred...

The point is that a human has a concept of a "ball" linked to the concept of "children play football" and an understanding that if one sees the former, one should be prepared for the latter to burst onto the road from behind the partially obscured roadside. Appropriate action probably involves easing off the accelerator and lightly tapping the brake so the car behind gets a hint that you might have to stop suddenly if a child emerges from behind a bush. An autonomous car which fails to anticipate, even though it's lightning fast at slamming the brakes on, is going to get rear-ended a lot more.

The neural network of a self-driving car might be able to classify small coloured spheres in the vicinity of the roadway as balls, and the AI will certainly have been taught the concept of a human-shaped obstacle moving across the roadway being a "need to stop" situation, but it is unlikely to "learn" the association between the two through a few tens of millions of miles of regular driving, because only a very small proportion of "need to stop" events involve balls (and only a very small number of sightings of spheres moving in the vicinity of the roadway result in "need to stop" events).

Of course, you can hard-code a machine to respond to ball-shaped objects moving near roads by slowing down, and you can construct a huge number of artificial test scenarios involving balls to teach the AI the association between balls and small children. But either of these options involves engineers envisaging the low-frequency hazard and teaching it enough permutations of the sensory input for that hazard for it to be able to anticipate it (and there's a balance to be struck, because nobody wants a paranoid AI which drives through the city braking every time it sees something its neural network identifies as a pedestrian, or the front of a parked car protruding from a driveway).

Suffice to say, we take for granted our ability to know how to react to things like children chasing balls, staggering 4am drunks, tiny puddles the car in front just drove through versus raging torrents of water through the usually safely navigable ford, sandbags versus shopping bags, vehicles laden down with loads which look like things which are not vehicles, and people frantically gesturing to stop.

Retric
Except Waymo is still in the R&D stage where this stuff is being worked out. The bag issue is exactly the kind of thing that will be fixed before production, because it's in their training data. It's really the 'unknown unknowns' that will be the problem, and they are going to be really odd.

As to the ball case, it does not need to stop, just slow, if it's possible for a kid to run out fast enough to be a problem; that is, on residential streets, where it should not be going fast in the first place, not on a highway where there are no major obstructions. So again, what you're describing is in the 'to be dealt with' pile.

I don't mean it's working right now, just that they are going to hold off on mass production until it's working.

EmployedRussian
> Opting to drive around a (stationary, visible from a distance) bag in an unpredictable manner is literally how Waymo's first "at fault" accident occurred...

There wasn't anything unpredictable about the behavior of the Google SDC. From the official statement: https://www.engadget.com/2016/02/29/google-self-driving-car-...

"Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it.

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day."

Watch Chris Urmson's self-driving car talk at SXSW 2016.[1] Especially the part where the Google car encounters someone in a powered wheelchair chasing a duck with a broom. Google can handle that.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

unityByFreedom
You know you can link to a specific time in a YouTube video, right? Pretty hard to find that 20-second section in a 1-hour video ;-). It is a good draw and worth watching that part first, in my opinion.

What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.

Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.

If anything, Tesla should have learned by now that you don't want to need to recognize objects to avoid them. The Mobileye system works that way, being very focused on identifying moving cars, pedestrians, and bicycles. It's led to at least four high speed crashes with stationary objects it didn't identify as obstacles. This is pathetic. We had avoidance of big stationary objects working in the DARPA Grand Challenge back in 2005.

With a good LIDAR, you get a point cloud. This tells you where there's something. Maybe you can identify some of the "somethings", but if there's an unidentified object out there, you know it's there. The planner can plot a course that stays on the road surface and doesn't hit anything. Object recognition is mostly for identifying other road users and trying to predict their behavior.
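
To make "plot a course that stays on the road surface and doesn't hit anything" concrete, here is a minimal sketch of the common occupancy-grid approach, in Python, with made-up resolution and thresholds; the point is that anything the LIDAR reports gets avoided without needing to be classified:

    # Rasterize a LIDAR point cloud into a 2D occupancy grid and reject any
    # candidate path that passes near an occupied cell. Illustrative only.
    import numpy as np

    CELL = 0.2   # grid resolution in meters (assumed)
    GRID = 200   # 40 m x 40 m window around the vehicle

    def occupancy_grid(points_xyz, min_height=0.15):
        """Mark every cell containing a return above the road surface."""
        grid = np.zeros((GRID, GRID), dtype=bool)
        for x, y, z in points_xyz:
            if z < min_height:            # ignore returns from the road itself
                continue
            i, j = int(x / CELL) + GRID // 2, int(y / CELL) + GRID // 2
            if 0 <= i < GRID and 0 <= j < GRID:
                grid[i, j] = True         # "something" is here; no label needed
        return grid

    def path_is_clear(grid, waypoints_xy, margin=0.5):
        """Reject a path if any waypoint comes within margin of an occupied cell."""
        r = int(margin / CELL)
        for x, y in waypoints_xy:
            i, j = int(x / CELL) + GRID // 2, int(y / CELL) + GRID // 2
            if grid[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1].any():
                return False
        return True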

Compare Chris Urmson's talk and videos at SXSW 2016 [1] with Tesla's demo videos from last month.[2] Notice how aware the Google/Waymo vehicle is of what other road users are doing, and how it has a comprehensive overview of the situation. See Urmson show how it handled unusual situations, such as someone in a powered wheelchair chasing a duck with a broom. Note Urmson's detailed analysis of how a Google car scraped the side of a bus at 2 MPH while maneuvering around sandbags placed in the parking lane.

Now watch Tesla's sped-up video, slowed down to normal speed. (1/4 speed is about right for viewing.) Tesla wouldn't even detect small sandbags; they don't even see traffic cones. Note how few roadside objects they mark. If it's outside the lines, they just don't care. There's not enough info to take evasive action in an emergency. Or even avoid a pothole.

Prediction: 2020 will be the year the big players have self-driving. It will use LIDAR, cameras, and radars. Continental will have a good low-cost LIDAR using the technology from Advanced Scientific Concepts.

Tesla will try to ship a self-driving system before that while trying to avoid financial responsibility for crashes. People will die because of this.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik [2] https://player.vimeo.com/video/192179727

_ph_
As far as I know, no car manufacturer ships cars equipped with LIDAR. Nor do they seem to have a camera setup as extensive as Tesla's. So I fail to see how Tesla has painted itself into a corner. The worst case is that they don't reach full autonomy with the current hardware setup. It would certainly be a big marketing blunder, but if necessary they can add LIDAR to production, if they choose to.
Matthias247
Shipping hardware that isn't needed is not how automotive engineering works today.

Most automotive projects look like this: You want to release a new car model in year X (which is typically around now() + 3-5 years). Then you start a development project exactly for that car, which involves creating roadmaps for the car, creating the architecture, sourcing the components, packaging everything together and testing everything. Most components (including infotainment systems, driver assistance systems, etc.) are contracted to sub-suppliers, which develop them especially for that car model (or maybe a range of models from one OEM). At the end of the development cycle you have exactly one car which (hopefully) has everything that was planned for that model and which will get sold. In parallel the development cycle for the next model begins, where there might be only minimal reuse from the last one. E.g. it might be decided that one critical driver assistance component is sourced from another supplier, now works completely differently, and also requires changes to the remaining components.

So if you do not intend to upgrade something or reuse it, it just doesn't make sense to include additional hardware for it. We will see the required hardware in cars that will actually make use of its functionality.

For Tesla it will be quite interesting to see whether they really deliver major autonomous functions on that hardware, or whether we will see a new-generation Model S (with overhauled hardware) first. I'm personally pretty sure that we will see new model generations before the software is at a "fully autonomous" level.

arkh
>where there might be only a minimal reuse from the last one

Not from what I saw at some manufacturers. Lots of software and parts are reused across multiple models.

freyir
They may have painted themselves into a corner by saying the current hardware is sufficient for "full self-driving capabilities." The worst case is they're putting half-baked solutions on the road.
TeMPOraL
Yeah. Theoretically the claim is true (at least with respect to the sensors; not sure about the computing power onboard). Tesla's hardware is already better than human hardware for this task.

The trick is, they'd have to advance the state of the art in software quite far, to derive "full self-driving capabilities" from this hardware.

Shivetya
The Chevrolet Bolt self-driving test fleet is coming this way. The difference is that Tesla has been talking loudly to get people not to see the shortcomings other companies are actively compensating for.

So what if they can add it to future production? Their talk promises, or at least implies, that they have all they need, yet a cursory review of random YouTube videos will show you how limited their system still is.

This may be another "war" they lead the charge on but falter in securing the win. You can play the car marketing game at times similarly to the technology market, but in the area of safety there is no compromise. Instead of acting like a tech company pushing a new tablet, they should have acted like SpaceX.

pinouchon
I think Nissan has bet that in a few years, LIDAR will be inexpensive: https://www.youtube.com/watch?v=cfRqNAhAe6c
greglindahl
I think most people think LIDAR is going to eventually be inexpensive -- even Tesla has a few test cars with LIDAR. However, it's a fair weather sensor, so you'll have to do a lot more for level 5.
RealityVoid
I wonder: if I want to put my money on this bet, how should I proceed?
bitJericho
Invest in the trucking industry?
fragmede
They're probably right, since it's been reported that Google's Waymo has achieved just that - press reports from January say $8,000 per sensor which is positively affordable when compared to $75,000 for a competing sensor.

https://arstechnica.com/cars/2017/01/googles-waymo-invests-i...

mandeepj
> no car manufacturer ships cars equipped with LIDAR

For the love of God, please do not comment in public if you do not understand the subject. Take a look at the Chrysler + Waymo minivans, the Volvo + Uber SUVs, and the Mercedes self-driving test cars. They all have LIDARs.

djrogers
None of which are production vehicles being shipped from manufacturers. They are test vehicles, nothing more.
coldtea
For the love of god, stop the hyperbole, and try to understand the comments you are answering better.

See how condescending this is?

stcredzero
For the love of god, stop the hyperbole, and try to understand the comments you are answering better.

I think that's good advice, very often applicable, even on HN.

Sometimes, s/even/especially/.

kayoone
Sorry, but you did not get the point OP was making. None of those you mentioned are production cars like the Tesla; they are test mules. OP's point is correct; he/she is not talking about modded research cars.
edvinasbartkus
But no other company is selling the promise of an autopilot car. For example, Audi is expected to launch the renewed Audi A8 this year with Level 3 autonomous driving, but it's called "Traffic Jam Pilot". Level 2 was called "Traffic Jam Assist".

https://media.audiusa.com/models/piloted-driving

RealityVoid
> As far as I know, no car manufacturer ships cars equipped with LIDAR.

I can confirm that, at least in a sense, this is false. There are plenty of production cars with LIDARs, just not the scanning kind you are thinking of, but a simpler kind of LIDAR tech[1]. I know that is not what you were talking about, but I thought it worth pointing out other, existing, alternative approaches.

[1]http://www.conti-online.com/www/industrial_sensors_de_en/the...

nojvek
I think Tesla not waiting for LIDARs to get cheaper is the right decision. They can always integrate LIDAR into new models once it gets cheaper. Right now, no production car ships with a scanning LIDAR that I know of.

Waymo cars are really expensive because of that, and they can't scale because Velodyne can't make LIDARs that quickly.

Also consider the fact that LIDARs are literally beams of light being emitted and reflections measured. If every car has a LIDAR, you get interference, and it's not the gold standard of measurement anymore.

Tesla conquering the problem with algorithms is the right approach. Remember, our brains use algorithms and two cameras to drive too. So it's technically possible.
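
For what it's worth, the "two cameras" point is the classic stereo-depth relation: for a calibrated camera pair, depth = focal length x baseline / disparity. A toy sketch with made-up camera parameters (not any actual production setup):

    # Stereo depth from two cameras: depth = f * B / d.
    # focal_px and baseline_m below are illustrative assumptions.
    def stereo_depth(disparity_px, focal_px=1400.0, baseline_m=0.30):
        """Depth in meters from the pixel disparity between left/right images."""
        if disparity_px <= 0:
            raise ValueError("no disparity: object at infinity or bad match")
        return focal_px * baseline_m / disparity_px

    print(stereo_depth(10.0))  # 42.0 m; small disparities make far range noisy

The catch, and part of the pro-LIDAR argument, is that stereo depth error grows roughly quadratically with range, while LIDAR measures range directly.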

Nvidia keeps pumping out faster GPUs, cameras keep getting better, and Tesla is getting more and more data. They just need better algorithms while they wait for cheaper sensors that scale.

That's a very wise business move while everyone else waits for a magic bullet.

RealityVoid
> If you have every car with a Lidar, you get interference and its not the gold standard of measurement anymore.

I've had people who work on these things tell me that interference is not an issue. Take this with a grain of salt, but before dismissing LIDAR as susceptible to interference, I would love to see some technical analysis of the possible impact of interference, and of how large it is relative to the errors in positioning and spatial estimation.

I've found this: http://sci-hub.io/10.1109/ivs.2015.7225724 and it seems that while interference exists, from what I'm reading in the paper, it is not critical, in the sense that it can be worked around.

And if that is so, and you believe LIDAR is going to dominate the field, it's even more important to work with LIDAR early on so you have the know-how to fix the issues that might arise.

pkulak
Sorry, I kinda tuned out when you mentioned lidar like it's some requirement for autonomous driving.

Lidar is AMAZING for giving press demos on sunny days. For the real world with rain, snow, leaves, plastic bags, etc? Useless.

The future is radar + cameras + a LOT of software blood, sweat and tears.

ckastner
> The future is radar + cameras + a LOT of software

... all of which Waymo's solution also has, in addition to LIDAR.

kamaal
I will take your self-driving car seriously if you can drive it in all conditions in India.

Until then it's just an attempt to make something that breaks at the next unanticipated exception.

coldtea
Well, most non-Indian drivers also can't make it in "all conditions in India" so that's a moot point.
bjpirt
Made me laugh, but it's a very true statement. When I was driving whilst holidaying in India I realised that the main rule of the road was "largest vehicle wins"! This actually made for quite an easy-to-understand system, with few questions about who had right of way.

bike < car < van < truck

which makes sense because if you're the one who's going to come off worse in an impact then you really want to give way - especially if you have a massive painted tipper truck hurtling towards you!

It will be very interesting to see how self driving systems can cope with these local unwritten bylaws.

coldtea
>This actually made for quite an easy-to-understand system, with few questions about who had right of way. bike < car < van < truck

That doesn't make much sense, because the main (and most common) question would still be between vehicles of the same class: car vs car, and this doesn't solve it.

kamaal
>>It will be very interesting to see how self driving systems can cope with these local unwritten bylaws.

This is why self driving AI will require Hard AI.

India is a perfect test bed for these people to test their algorithms. And for heaven's sake, why would you test it in some place like the US? Cars in the US pretty much run like trains on the road anyway.

jamespo
Because that's where the (easy) money is
kamaal
I get it. They would want to release their cars in the US or Europe as their primary markets.

But even in those cases it makes sense to test in India. Why? Sooner or later you will have some situation in the US which resembles daily traffic conditions in India. Imagine a law-and-order breakdown where people are running around without regard to traffic laws. Or some other situation where traffic is being rerouted the wrong way; in the US, say, traffic being routed through the left lane as an exception (it being a right-lane-drive country), etc.

For all these situations you will very soon need a test environment that provides you with all situations to test.

coldtea
>For all these situations you will very soon need a test environment that provides you with all situations to test.

Which won't be India.

Because you obviously don't want to be testing unprecedented conditions with live subjects in actual traffic.

If anything, that will only happen after tons of simulations of such conditions in fake environments.

danielbln
It will most definitely be interesting, but it's silly to say that self-driving cars can't be taken seriously if they don't master these conditions (yet). Hell, I couldn't master those conditions myself, nor do I need to, because I live in the inner city of a Western European metropolis, so what I need my self-driving car to do varies massively from what people in other regions of the globe may need it to do. But that doesn't make it any less useful for me.
nl
I wouldn't drive in India. And I'm human.
geodel
You are only human. It's superhumans who can drive there.
jfoster
I imagine a self-driving car will drive there similarly to a human: slowly, because of the extreme risk. Humans potentially tend to take more risks than they should, so a self-driving car in India might seem to drive differently than a human would. But the algorithm stays the same: drive at a speed matching the obstacle risk, and avoid hitting things. Perhaps the parking aspect would be the most different.
tacomonstrous
IMO, India has the simplest driving algorithm: Keep going where you're going and don't hit anything. Also, honk if you think you might be in someone's blindspot.
saiya-jin
That is important only if you live in India, or any other third-world country with similar disregard for anything resembling a traffic rule from a mile away.

The Western world will be perfectly happy with cars that can only drive on our roads, and eventually manufacturers will pick up on other places as well.

Look at the potential bright side: this might bring some order to the mess one can see daily on the roads of bigger cities.

fooker
Human beings can not drive in all conditions in India.

Just to get our car out of the garage, I had to plead and negotiate with N vegetable vendors with makeshift stores on the road.

Also: bicycles, motorbikes, rickshaws (in human-pulled, CNG, and electric varieties!), and pedestrians mixed in traffic everywhere.

kamaal
>>I had to plead and negotiate with N vegetable vendors with makeshift stores on the road.

This is a very practical test case for a car on a road. Not just in India but anywhere in the world.

Instead of N vegetable vendors you could have N traffic cops. How do you manage the human interaction part in the self driving car?

owenversteeg
Yep, I have a feeling that for driving in India/Iran/Pakistan/Bangladesh you're going to need a strong AI to negotiate with the street vendors. In Iran I even had a particularly... enthusiastic flower seller actually maneuver himself to make it even harder to drive away.

Not to take away from anyone's work in this area, but I have no idea how long it'll take to go from "works in America" to "works in India". In many countries the safest option (to evade disaster) can occasionally be "floor it and break the speed limit" to get away from x dangerous thing. I'm not sure if that's something that Google is willing to write into an AI.

manmal
This will require something like Rick's (from Rick and Morty) car/spaceship.
PeterisP
An autonomous driving feature that you can use on most days but not all of them is still very useful and would have a good market.

It's not going to rain or snow today, and if it does, then I can take the wheel myself.

LMYahooTFY
Isn't Tesla partnered with Nvidia to solve the computation side of things?

I thought I remembered Nvidia presenting some additional stuff about it in their Tech demo recently.

cr0sh
I know Karpathy wasn't involved in the "end-to-end" research paper Nvidia published, but I wouldn't be surprised if they were involved somehow, and if anyone could push the CNN tech in that paper further, it just might be Karpathy.
freyir
That's like saying our eyes are useless because we can't see in the dark.
kamaal
Correct, eyes in fact are useless if you can't see in the dark.

Now imagine walking on top of a skyscraper in pitch darkness. Yes, your eyes work in light, but in this case you will likely fall to your death.

loa_in_
You can't really drive in the dark, can you? What if it gets dark for 1 second on a cliffside turn?
mschuster91
There's always the option to enable the headlights in such a case. LEDs switch on way faster than incandescent bulbs. And that's if they're needed at all: a camera has way more flexibility in brightness input range than a human eye.

A car like a Tesla also has highest-quality maps and GPS sensors; these alone are way better than what you get in your smartphone, and are enough to keep the car from going over the cliff.

Capt-RogerOver
It's pretty obvious that camera-based self-driving software would take advantage of the headlights on the car, just like a human does, so it never has to drive in full darkness.
kamaal
Actually, you will need more than camera input, maybe even sound input.

In darkness it helps if you can hear the vehicles coming close.

cr0sh
> may be even a sound input

Bingo. When we drive a vehicle, we use so much more than just our eyes to sense the environment, and hearing plays a very large part.

I believe that it is something that warrants research for self-driving vehicle usage; I don't know if anyone has done such research, but I haven't seen any papers on it yet. If not, it seems like an underappreciated sensor aspect that could potentially greatly augment self-driving vehicle capabilities, and would be a very simple and cheap sensor to add to a vehicle as well.

EDIT: Found this recent article...

https://www.technologyreview.com/s/604272/a-sense-of-hearing...

While it seems to be focused mainly on diagnosing issues with vehicles before they become larger problems, there are hints about it being used for self-driving tasks as well.

Diederich
I think you're implying that deaf drivers are fundamentally less safe than hearing drivers.

Studies have been highly mixed: http://www.lifeprint.com/asl101/topics/deaf-drivers.htm

I would be comfortable saying that the advantages of a deaf, many-eyed, always-alert self-driving system would far outweigh the safety of a hearing, two-eyed, and sometimes-alert human driver.

PKop
You missed the analogy:

Darkness is to eyesight as inclement weather/obstacles is to LIDAR

JshWright
What's the point of that analogy? Darkness can be fixed with lights on the vehicle itself. Inclement weather is entirely outside the control of the vehicle.
vsl
LIDAR doesn't work in inclement weather, that's the point.
sqeaky
Our eyes also come with our brains packed full of fancy algorithms to extract value from minimal information.

I am not sure what software does with noise from a LIDAR sensor, but I have seen data from other noisy sensors, and it is often useless.

cm2187
Actually in the dark you often drive as a leap of faith in the state of the road. I.e. with very little visibility on what can come from the side of the road (no light) or after a turn. We shouldn't. But we do.
jacquesm
Yes, but it will cause you to slow down. Or at least, it should. As soon as you have less than your stopping distance of space in front of the car you are going too fast.
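
That rule is plain kinematics: total stopping distance is the distance covered during the reaction time plus the braking distance v^2/(2a). A quick sketch, with illustrative (not measured) values for reaction time and deceleration:

    # Stopping distance = reaction distance + braking distance.
    # reaction_s and decel_ms2 are illustrative assumptions.
    def stopping_distance(speed_ms, reaction_s=1.5, decel_ms2=7.0):
        """Meters needed to stop from speed_ms (m/s)."""
        return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

    # At 100 km/h (27.8 m/s): 27.8*1.5 + 27.8^2/14 = 41.7 + 55.2, about 97 m.
    print(stopping_distance(27.8))

A computer's faster reaction mostly shrinks the first term; the braking term is physics and doesn't care who is driving.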
cm2187
Again we probably should but we don't always.

In fact even in daylight you drive a lot as a leap of faith. When the traffic light is green for you at a crossing and you see a car arriving on the side, you assume that you have the priority, that the car will stop and you go ahead without adjusting your speed to the coming car. This is a leap of faith in the fact that all other cars will follow the rules.

coldtea
Not really. Self driving cars should be able to drive in all of those conditions (and one condition might even quickly turn into another, e.g. sunny into rain).

If a technology only helps with some of the cases (e.g. fair weather) and does not work for the others, then there are two cases:

(a) A single replacement technology will be found that works in 100% of cases.

or:

(b) The technology will only be used on the cases it works well, and the other cases will be handled by some alternative technology equally only suited to them.

In the case of (a), Lidar is indeed useless (or at best, only used as a supplementary technology in favourable conditions).

And I fail to see how (b) can be the case -- that is, how there can be another technology that will solve the rain/snow/night driving problem, but which cannot also outperform/replace Lidar for fair weather driving.

freyir
> then there are two cases:

Isn't it interesting that we have five senses, when we could just have one that works in 100% of the cases? A third option is a system based on multi-sensory inputs. Several inputs that are just marginal on their own can provide good performance when combined.
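
One way to make that concrete: fusing independent, unbiased, noisy estimates with inverse-variance weighting always yields a smaller variance than the best single sensor. A toy sketch with made-up numbers:

    # Inverse-variance fusion of independent estimates of the same quantity.
    def fuse(estimates):
        """estimates: list of (value, variance) pairs from independent sensors."""
        weights = [1.0 / var for _, var in estimates]
        value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
        variance = 1.0 / sum(weights)
        return value, variance

    # Say camera, radar, and LIDAR each estimate the range to an obstacle (m):
    readings = [(24.0, 4.0), (25.5, 1.0), (24.8, 0.25)]
    print(fuse(readings))  # fused variance ~0.19, better than the best (0.25)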

espadrine
Our eyes are not LIDAR, though, and we can drive pretty well.

In fact, the reason we have crashes is NOT our eyes’ lack of distance detection through laser return timing — having two eyes is enough for distance appreciation. We have crashes because of attention deficit instead.

At this point, there is no reason to believe that a machine can't achieve and outperform a human on a driving task given the same inputs. Sure, human eyes have 5 million cone cells and 1080p feeds only have about 2 million pixels, but 4K has over 8 million, and more importantly, that level of precision is unnecessary for regular driving.

And Tesla doesn’t even bet just on the visible spectrum; it also relies on radar.

TeMPOraL
The trick is in our wetware. What the brain does with visual input is not just trivial object recognition. It relies on a complex internal model of the world to both augment the object recognition and to sometimes discard the visual data as invalid.

So sure, theoretically cameras would be enough. But we're not there yet with software; we can't use the camera input well enough. So if you can side-step the need for not-yet-invented ML methods by simply adding a LIDAR to the sensor suite, then it's an obvious way to go.

Compare with powered flight: we didn't get very far by trying to copy the way birds do it. The trick is in the super-light materials birds are made of, and the energy efficiency of their organisms. We only succeeded at powered flight when we brute-forced it by strapping a gasoline engine onto a bunch of wooden planks.

espadrine
> It relies on a complex internal model of the world to both augment the object recognition and to sometimes discard the visual data as invalid.

That in particular is what makes the hiring fascinating. This problem is Andrej Karpathy’s expertise[0]. His CNN/RNN designs have reached comfortable results, in particular showcasing the ability to identify elements of a source image, and the relationship between different parts of the image.

The speed at which those techniques improve is also stunning. I didn’t expect CNNs to solve Go and image captioning so fast, but here we are!

I think the principles are already there; a few tweaks and a careful design are all it takes to beat the average driver.

[0]: http://cs.stanford.edu/people/karpathy/main.pdf

ebalit
Image captioning is not solved (yet), even if a lot of progress has been made in recent years.
cr0sh
I think LIDAR will have a place, if they can get the per-sensor cost down to something reasonable (under $500 per sensor, for 3D 180-degree coverage with at least 32 vertical beams, or the equivalent).

But I think first we'll see cars utilizing tech as described in this paper:

https://arxiv.org/abs/1604.07316

...and variations of it to handle other modeling and vision tasks.

Self-driving vehicle systems are amazingly complex; it won't ultimately be any single system or sensor, or piece of software or algorithm, that solves the problem. It's going to be a complex mesh of all of them working in concert.

And even then, there will be mistakes, injuries, and deaths unfortunately.

sametmax
When Musk said they were doing the first decent electric sports car, everybody became an expert and said it wouldn't work. Then he decided to do SolarCity and SpaceX. And the crowd said again "naaaa".

Now the guy is tackling another hard problem and everybody knows better.

sixQuarks
My thoughts exactly. Elon Musk is not known for vaporware or BSing. If he says something is doable, there's a good reason for it. Musk has a combination of vision, an understanding of physics, and execution ability unmatched by any other person (in my opinion). To say that he has painted himself into a corner is highly misinformed.
unityByFreedom
He doesn't have any educational background in AI or its underpinnings. Running OpenAI doesn't instantly make him an expert. The above comment is from an expert with experience in the self-driving space.
sixQuarks
His educational background also didn't include rocket science, and yet SpaceX has succeeded.

If you'd like to put your money where your mouth is, I'd happily do a wager with you. Tesla will be the first to have a fully autonomous vehicle; care to bet against that?

stcredzero
I made the same type of wagers with Thunderf00t with regard to SpaceX. No takers!
unityByFreedom
> Tesla will be the first to have a fully autonomous vehicle - care to bet against that?

Yes, I would. Google is way ahead in the tech. Never mind that they don't have a product. If we're talking strictly about the tech, Tesla is and always has been far behind.

sixQuarks
$1,000 wager?
unityByFreedom
Eh, if I knew you in person I would, but this is a loosely defined term and will likely occur iteratively over the years. Also I'd only take the bet while Musk is still in charge and while he is anti-lidar.

Feel free to message me if your side of the bet comes true. But, I really doubt it will. If Waymo forms a partnership with any major manufacturer they'll pretty much have it.

sametmax
And people from NASA and CNES said SpaceX was moot.

Can we just let the man try without being annoying the whole way?

Or do something that helps?

Or do something at all, to understand that it's hard enough and that you really don't need a thousand voices enumerating all the reasons you might fail?

unityByFreedom
I think it helps to give constructive criticism.

Musk is adamant that lidar isn't necessary. Many disagree and are voicing their opinions on that.

I also think his view that strong AI is around the corner is detrimental to the industry. He is overhyping things, potentially creating expectations, and investments, that reality won't support.

So when I spend time pointing out that he is not an expert in the AI space, it is to soften his outlandish predictions and to bring a more realistic perspective on who is talking and who knows what they're talking about.

I think Musk will contribute something to the self-driving and AI space, just not in the way he claims.

sametmax
This is not formulated as constructive criticism. This is formulated as prophecy: "It will fail because...", "It can't work because...". Or formulated in the form of "I know better".

Constructive criticism would weigh pros and cons, and explain a point of view without condemnation and without making grandiose prognostications.

unityByFreedom
Odd that you would put quotes around things you made up and neither I nor the first comment in this comment-chain said.

Animats gave plenty of detailed constructive criticism, as did I. Neither comment could be simplified as you allege without leaving out important context.

babyrainbow
>Elon Musk is not known for vaporware or BSing...

Hyperloop, though...

I mean, I think bullshit can be sold even to 'techies', aka the HN crowd, if it is wrapped just the right way...

Robotbeat
Huh? Musk gave away the idea, never said he was himself going to do it.

But in spite of that, technically, Musk has already built a subscale Hyperloop nearly a mile long at the SpaceX campus in Hawthorne including an electric sled used for student competitions: (epilepsy warning) https://www.youtube.com/watch?v=moeI8DxQR5Q

chc
That is neither vaporware nor BSing because Musk never suggested he was going to build the Hyperloop. He explicitly said the opposite, that it was an idea he thought was neat and he hoped somebody else would work on it.
ethanhunt_
The difference between Tesla and Google is that Tesla actually has to ship these cars to customers right now. Wouldn't LIDAR double the cost of a Tesla right now?
sbierwagen
Classic engineering ethics problem. Management says they have to ship "right now", but you know that if you do, 1 in 1,000 customers will die. If you wait a couple of years, that'll go down to 1 in 1,000,000, but your company might go bankrupt.

Engineers at Takata and in GM's ignition key department made one choice, Waymo seems to be making the other.

manmal
Waymo has the benefit of not going bankrupt if they wait another couple of years...
sbierwagen
Oh? http://www.cbsnews.com/news/google-alphabet-moonshot-project...

The whole reason the car project got spun out into Waymo was to fast-track commercialization. They do not in fact have an infinite amount of money.

gordon_freeman
Cost should not be an excuse at the expense of safety. Plain and simple.
pkulak
What's unsafe about radar cruise plus lane keep? People act like Tesla is shipping cars that fling themselves into pedestrians at every opportunity. Somehow we all manage to absolve the auto maker when someone with cruise control set on their 1998 Mazda rear ends someone on the freeway. Let's judge Tesla for autonomous safety when they produce an autonomous car.
Animats
What's unsafe about radar cruise plus lane keep?

It takes longer for a driver to react to a problem in that mode than to react without it.[1][2] There have been full-motion car simulator and test track studies on this. Even with test subjects who are expecting an obstacle to appear, it takes about 3 seconds to react and start to take over vehicle control from lane keeping automation. Full recovery into manual, where control quality is comparable to normal driving, takes 15 to 40 seconds.

There are now many studies on this, but too many of them are paywalled.

[1] http://www.sciencedirect.com/science/article/pii/S1369847814... [2] http://www.iea.cc/congress/2015/252.pdf
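
To put those numbers in perspective, here is the distance covered in the meantime (simple arithmetic; the highway speed is illustrative):

    # Distance traveled during the takeover window.
    def distance_covered(speed_kmh, seconds):
        return speed_kmh / 3.6 * seconds  # meters

    print(distance_covered(110, 3))   # ~92 m before the driver even reacts
    print(distance_covered(110, 15))  # ~458 m until control quality recovers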

babyrainbow
I am not sure why this tech is acceptable, at all.

People fall asleep even while actively driving the car. How can they be expected to maintain vigilance with something like this?

But I guess Tesla is content with ending their responsibility at "informing the user that they should be vigilant at all times, even when the car is driving itself", without considering how feasible that is.

Another funny thing about it is that, earlier, with regular cars, you only had to watch for errors from the other drivers on the road. Now you have to watch other cars and also mistakes made by your own car's AI...

What could go wrong?

simonh
If that was true, nobody would ever ship anything to do with safety for less than a million dollars. We make trade-offs between cost and safety all the time. Doctors walk that line day by day, it's a big part of their job. In particular, car safety regulations walk a very fine line between safety and cost. No car regulations require every car to have all and every one of the best and most advanced safety features. If they did, no cars could be sold for less than hundreds of thousands of dollars and they'd all look and perform like blocky vans with great huge crumple zones.
Animats
Tesla didn't have to ship a car with unusable self-driving hardware. They'll probably have to eat the cost of a retrofit package on some vehicles to make that work. Like the Roadster transmission problem, where they had to replace all the early drivetrains with the two-speed transmission.

Nobody has built automotive LIDAR units in volume yet. That's why they're so expensive. It's not an inherently expensive technology once someone is ready to order a million units. It does take custom silicon. Tesla, at 25K units per quarter, may not be big enough to start that market.

Continental, which is a very large auto parts maker in Germany, has demo units of their flash LIDAR. They plan to ship in quantity in 2020. Custom ASICs have to be designed and fabbed to get the price down.[1]

[1] http://www.jobs.net/jobs/continental-corporation-careers/en-...

koja86
Isn't it possible that, by the time LIDAR is technically and economically ready for general deployment, current Tesla models will have enough mileage on them that Tesla avoids retrofitting completely?
unityByFreedom
How could Tesla avoid retrofitting if they are unable to solve full self driving with cameras alone? Their new vehicles would have lidar, and old customers who were promised FSD in their models would seem to be legally entitled to it, given that Tesla is currently selling that feature as a product, even though it isn't functional.
taneq
I know it's popular to bash on Tesla's current challenges with their Autopilot software, but I think it's a bit unfair to expect them to be back in front of the pack just yet. They had their big breakup with Mobileye in, what, September last year? Nine months is pretty quick to go from total reliance on a vendor package to a reasonably functional fully in-house system.
CardenB
Tesla had been working on a Mobileye replacement for some time before they parted ways.
sobani
Can you explain the 'back' part of "expect them to be back in front of the pack"?

As far as I know they've never been in the front of the pack.

jasonlingx
Actually, they are and have always been in front of the pack.
taneq
With AP1 they were the only company that had anything approaching level 2/3 in a publicly available production vehicle. Google might have been ahead of them but it's hard to tell since all we ever saw were carefully staged demos.
BoorishBears
Was it? Weren't things like Distronic+ w/ Steering Assist (and Audi's and BMW's equivalents) almost the same thing?
taneq
Similar, but from what I've read, Tesla did it better. I think this was the comparison I was thinking of in particular:

http://www.caranddriver.com/features/semi-autonomous-cars-co...

BoorishBears
I chalked some of the differences up to those systems being more defensive than AP.

For example, I'd be surprised if you took one of those competing systems to "fail road" and it started to veer the way Tesla's system does instead of disengaging.

tim333
>Tesla will try to ship a self-driving system before [...] People will die because of this.

On the other hand pushing the envelope on self driving technology using cheap sensors will probably help reduce the world's 1m annual auto deaths earlier than otherwise. Thousands of people will not die because of this.

make3
They really only hired a well-known, cool guy for a post with a lot of exposure. Your comment feels like pretty intense overanalysis to me.
Animats
Their previous guy quit. That indicates a problem.
evaneykelen
Human drivers are also not equipped with a LIDAR. We rely on stereo vision combined with a couple of low-tech instruments (rearview mirror, left and right side mirrors, and looking over the shoulder to achieve an approx 250-300 degree field of view) to navigate the road in a vehicle. If you extrapolate the current state of AI and treat 10-20 in-car cameras + radar as the equivalent of what a human brings to the table, then I fail to see why Tesla has painted itself into a corner.
jgalt212
True, but bird wings flap and airplane wings don't.
sqeaky
Do submarines swim?

Doesn't matter, only the results matter. Planes work and work well, they crush birds in every performance metric and sometimes literally. Can a self driving car be made safe without lidar? I suspect so, but I am not certain, but I am no expert.

localhost
Agreed. Raquel Urtasun's research is the cutting edge in this area: https://www.cs.toronto.edu/~urtasun/. And she was recently hired/retained by Uber to lead their robot car efforts in Canada. Here's a recent video of her research from the National Academy of Sciences: https://www.youtube.com/watch?v=sW4M7-xcseI
slurple
Have there been advances in using LIDAR in rain/fog/snow? For all the autonomous car demos this seems like too large of a use case to gloss over...
samstave
I find it funny that people can so flippantly state that Tesla (or any of these cars) has "weak sensor systems", as if it is such a trivial problem.

"Well, there's your problem right there, let's just slap on some strong sensors and you should be good to go!"

You know what has a weak sensor system? Any car without any sensors.

mnarayan01
Yeah, this seems like the key question. I don't know if Level 5 in two years is feasible, but if it's Level 5 or bust, then LIDAR won't fly (AIUI; would certainly welcome corrections).

On one hand, not pulling in potential safety improvements because they only work in good weather seems wrong, but on the other hand...that might be what needs to happen from a cost/marketing/legal perspective.

Animats
Yes, but they haven't made it down to the automotive level yet.

Most automotive LIDARs just report the time of the first return, but it's possible to do more processing. Airborne LIDAR surveys often record "first and last"; the first return is the canopy of trees or plants; the last is from ground level.

It's also possible to use range gating in fog, smoke, and dust conditions.[1][2] Returns from outside the range gate are ignored. You can move through depth ranges in slices until something interesting shows up. This seems to be in use for military purposes, but hasn't reached the civilian market yet.

Range gated LIDAR imagers have been around for at least 15 years. By now, it should be possible to obtain a full list of returns for each pixel for several frames in succession, crunch on that, and automatically filter out noise such as rain, snow, and dust. It's a lot of data per frame, but not more than GPUs already handle. Some recent work in China seems to be working to make range-gated imaging more automatic in bad conditions.[3]

[1] http://www.sensorsinc.com/applications/military/laser-range-... [2] http://www.obzerv.com/en/videos [3] http://proceedings.spiedigitallibrary.org/proceeding.aspx?ar...
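
As a rough illustration of the range-gating idea (a toy sketch only; real gated imagers do this in hardware at the sensor, and the slice sizes here are made up):

    # Keep only returns whose round-trip time falls inside a depth window,
    # stepping the window through slices. Rain or fog near the sensor falls
    # outside most gates; a solid obstacle keeps reappearing in one slice.
    C = 299_792_458.0  # speed of light, m/s

    def gate_returns(returns_ns, near_m, far_m):
        """Keep returns whose time of flight lands between near_m and far_m."""
        t_near = 2 * near_m / C * 1e9  # round trip, in nanoseconds
        t_far = 2 * far_m / C * 1e9
        return [t for t in returns_ns if t_near <= t <= t_far]

    def sweep(returns_ns, max_m=120, slice_m=10):
        """Scan depth slices and report how many returns land in each."""
        for near in range(0, max_m, slice_m):
            hits = gate_returns(returns_ns, near, near + slice_m)
            if hits:
                yield near, len(hits)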

Diederich
> Tesla will try to ship a self-driving system before that while trying to avoid financial responsibility for crashes. People will die because of this.

This is a pretty strong statement. Would you sign up for a slightly more specific version of your claim?

"I believe the Tesla self-driving system that ships by the end of 2020 will be statistically less safe than unassisted human drivers."

lianqiao9
Full autonomy conflicts with Tesla's business model: selling cars. I think Tesla's real goal is to build the easiest-to-drive mass-production car. Mass production means the car always needs a person to babysit it. For full autonomy, they don't need mass production (>10M units) to build the service network.
jacquesm
> People will die because of this.

And not just drivers of Teslas.

bluthru
>Or even avoid a pothole.

Humans can avoid potholes with one eye. I don't know why you assume LiDAR is a requirement for this.

narak
One eye backed by millions of years of training in depth perception and object recognition... The argument that humans are able to drive with just two cameras and software (brain) is deeply flawed because the brain is highly advanced in areas necessary for driving and Tesla's claim it will replicate that any time soon is absurd. This is why you need additional sensors like LIDAR to alleviate the computational load.
happosai
If only the brain were smart enough to focus on driving instead of distractions...
kamaal
The human brain is a hard-AI entity that can think through problems in any generic situation.

In self-driving AI you are programming the car to do a specific thing. Sooner or later you will run into a situation in which the algorithm will panic and can't do much.

fmihaila
Humans cannot and do not "think through problems in any generic situation", and specifically when it comes to driving, they fail with fatal consequences about 100 times every single day in the U.S. alone. Applying such unreasonably high standards (i.e., algorithms that never fail) before we allow self-driving technology to be deployed actually kills people.
sqeaky
I am not convinced Kamaal was arguing against self driving cars. He might have been arguing that they could be simpler and more achievable. It could be just fine as long as when the software panics it does something sane like slow to a controlled stop and turn on the hazard lights.
nnq
Yeah, Tesla may fail to ship a safe LIDAR-less car.

But the problem of making a self-driving car without LIDAR or something equivalent is awesomely challenging! And I bet Andrej Karpathy will really enjoy working on it.

And the tech resulting from this line of work will surely find its way into other things. (I guess the military has wet dreams about this stuff... I mean, even "unsafeness" can become a "feature" here: "uhm, look, that school we blew up by, uhm, mistake... was an AI error... like... this stuff happens, you know, even Tesla's cars have an accident from time to time, that's life". Well, those dreams could also be nightmares: basically any "self-driving thingie" is a potential guided missile, and dirt-cheap-because-LIDAR-less stuff has the potential of becoming ubiquitous, and unmaintained/unupdated/unsecured/hackable, leading to nightmarish urban warfare scenarios...)

And: "People will die because of this."... Uhm, yeah, they will, but if people ain't dying it means research is not moving fast enough, and competition will overtake you. I'd be more worried about when this stuff will be deployed on buses with tens of people, but hopefully public transport would stay a safe decade behind bleeding-edge stuff :)

And about Tesla: however this plays out, Elon Musk has made quite a few of what would technically be considered "bad business decisions" and things have turned out OK so far... so I wouldn't feel sorry for them or short their stock ;)

denzil_correa
> but if people ain't dying it means research is not moving fast enough

Could you help me understand this further? It feels quite insensitive to me.

tintor
If companies wait for the tech to be perfectly safe, it will never be released.
Matthias247
> Prediction: 2020 will be the year the big players have self-driving.

I think that depends on what you mean by self-driving. My prediction is that in 2020 we will have slightly better driver-assistance systems. Maybe a lane-assist system that won't kill me if I don't babysit it on a road with ongoing construction and multiple sets of lane markings, or when corners get too tight. Maybe some limited self-driving, e.g. only available in dedicated areas like the highway/autobahn.

Keep in mind that 2020 is 3 years away, which is less than the development cycle of a car. Or in other words: if something is to be released in 2020, you would already see it driving around on the roads for first tests.

My personal prediction is that we will see reliable and advanced self-driving technology in mass-production cars maybe two full car generations from now - which is 2030.

Animats
2020 is the year major manufacturers say they will have self-driving cars on the market.

- Volvo [1]
- Chrysler [2]
- Ford [3] (2021)
- GM [4] (2018, first live test fleet)
- Toyota [5] (although they just started with self-driving)

[1] https://www.digitaltrends.com/cars/volvo-predicts-fully-autn... [2] https://www.usatoday.com/story/tech/news/2017/04/25/google-s... [3] http://money.cnn.com/2016/08/16/technology/ford-self-driving... [4] http://fortune.com/2017/02/17/gm-lyft-chevy-bolt-fleet/ [5] https://www.wsj.com/articles/toyota-aims-to-make-self-drivin...

jacquesm
> Maybe a lane assist system which won't kill me if I don't babysit it on a road with construction ongoing and multiple lane markings or if corners get too tight.

Or if someone else makes a mistake.

That's another scenario that can have very unpredictable and immediate consequences ranging from 'nothing' to 'two car accident with fatalities'. Even in relatively placid (when it comes to driving) NL I see this kind of situation at least once per year.

Then there are blow-outs and other instant changes of the situation. I do believe that especially in those cases it should not take long before computers are better than humans because of their superior reaction speed.

notheguyouthink
Yup, these are the reasons I'm not interested in those "hands-free but still assisted" autodriving deals. I'd love cruise control assist and lane assist in my car, but that's about all I want until I can be 100% sure cars can handle the situations by themselves.
daveguy
I believe driver-assist technology will get more and more advanced, to the point where the car is essentially driving and you are giving suggested feedback. In other words -- the bad situations will be handled for you. Following distance might be enforced and come with a warning; if you really insist on getting dangerously close, the car will "help".

Your viewpoint is how self-driving cars will come to be accepted into the mainstream. It will essentially sneak up on the average driver.

I would be extremely nervous to let a fully autonomous vehicle drive me given the current state of the art.

Animats
Either the human or the computer has to drive. Throwing control back to the human on an automation failure and expecting a correct response from the human takes too long. This has been well-studied. It takes at least 3 seconds for the driver to take control at all, and 15-40 seconds before human control has stabilized. I linked to some studies in an earlier post.

The aviation people hit this problem in the 1990s, as autopilots got more comprehensive. Watch "Children of the magenta", a talk by American Airlines' chief pilot on this.[1]

[1] https://www.youtube.com/watch?v=pN41LvuSz10
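
To put those takeover times in distance terms (my arithmetic, not from the studies):

    MPH_TO_MS = 0.44704

    for t in (3, 15, 40):            # seconds, per the studies cited above
        v = 65 * MPH_TO_MS           # highway speed in m/s
        print(f"at 65 mph, {t} s of takeover = {v * t:.0f} m traveled")
    # roughly 87 m, 436 m, and 1162 m of roadway before control stabilizes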

daveguy
Right. The driver would be in control in the scenario I'm describing, just as they are with lane assist, cruise control, or, closest to stealth self-driving, automatic braking and swerving. I guess it would be better described as: the driver would always have the illusion of control.

The automated parts would kick in when necessary and they would be increasingly intrusive in their warnings and modification of driving until the human is only needed for identifying source and destination.
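
One way to picture that escalation; the thresholds and labels are entirely made up:

    def intervention_level(risk):
        # risk: a 0..1 score from the car's own situation assessment
        if risk < 0.3:
            return "none"        # driver feels fully in control
        if risk < 0.6:
            return "warn"        # chime, dashboard flash, seat buzz
        if risk < 0.85:
            return "assist"      # blend in braking or steering corrections
        return "override"        # the car just takes the action itself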

brianpan
You set your destination for: gym

Are you sure? Your calendar says that Bobby needs to be picked up from practice.

Why don't I go ahead and set the destination for you?

kamaal
>>I think that depends on what you mean with self-driving.

It means a car that doesn't have a steering wheel, brake, or accelerator. That kind of car is quite far off.

oblio
That's at least 4-5 automotive cycles away, IMO (so 25-30 years away).

And that's for the high end. The low-end cars are probably 50-60 years away, IMO.

justin66
The notion that the low end cars will take decades longer is very, very ridiculous.
oblio
Ummm... how many cheap cars have adaptive cruise control or lane assist? Both technologies have been available on expensive cars for, what now, 15 years?

Really cheap cars don't even have automatic gearboxes (or they are bought without them 90% of the time, outside of the US).

By "cheap car" I mean something cheaper than $20000 and "really cheap" would be below $12-15000.

cr0sh
Here in the US I don't even think you can buy a "really cheap" car; my current vehicle is a used Jeep TJ and it set me back $18k (great condition though - still, I overpaid a bit, but I consider it the price for getting the options I wanted on the vehicle).

While I'm sure there are probably new (or relatively new) vehicles out there sub-$15k, they probably aren't much to write home about. Certainly nothing I want (that's just my personal wants - for instance, if it ain't 4wd and/or it can't be lifted, I don't want it).

As far as the transmission is concerned, here in the US most vehicles can't be bought without an automatic. Manual transmissions are becoming the exception; most car models don't even have them as an option. The few that do (mostly trucks and sports cars) have seen declining sales of the option (and I wouldn't be surprised if it actually costs more to get it!).

justin66
I imagine that if you waited for the right manufacturer's incentives and so on, you could get a Toyota Yaris here for $13k, new, with financing. It's a very fine car for someone who can comfortably fit in it.
justin66
Ok, with "really cheap" you're talking about a species of cars that I'm unfamiliar with.

Here in the US, the sensors and fly-by-wire controls that Lexus used a few years ago have largely trickled down to the Corolla and the like as standard features. The difference between making autonomy work on low-end and high-end cars will be purely a software problem. That's not the kind of additional work that takes thirty years.

oblio
The difference is not purely a software problem.

It's a price problem since you need to install a bazillion sensors, motors and other thingies which are basically used only for one purpose.

And regarding really cheap: you're on Hacker News, ergo less likely to ever encounter those. It's usually the cheap models from cheaper brands such as Dacia, Hyundai, Kia, Tata, and several of the Chinese brands. I doubt most people you know actually own one :)

justin66
> It's a price problem since you need to install a bazillion sensors, motors and other thingies which are basically used only for one purpose.

That's an overstatement. You need to install sensors, many of which you'd install anyway for the modern lane-keeping and crash avoidance features. The sensors are not ruinously expensive and economy of scale incentivizes an automaker that makes both cheap and expensive cars to get these features out to the low end very quickly. As Animats pointed out, you really want to order this stuff by the million, not the thousand.

I guess I wasn't arguing with the crux of what you're saying - having a feature is generally going to be more expensive than not having it. But the overstatement was very strong, especially in terms of the timeline for reducing the costs of this functionality.

coldtea
Well, prototypes have already driven millions of miles like that.

(Whether they also had a steering wheel is of course irrelevant; what the parent means is that they can drive without anybody needing to use the steering wheel.)

Tepix
All of the systems I've heard about need human intervention every couple of miles.
Animats
I expect to see little 25MPH self-driving cars running around senior communities quite soon. At 25MPH and below, sensor range required is low, data quality is good, and most problems can be dealt with by slamming on the brakes. That seemed to be the target market for Google's little bubble car. The problem there is price, not capability.
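
The arithmetic behind "sensor range required is low" (my numbers: 1 s of perception-plus-decision latency and roughly 0.7 g of braking):

    def stopping_distance_m(speed_mph, latency_s=1.0, decel=6.9):
        # reaction distance + braking distance; decel of ~0.7 g on dry road
        v = speed_mph * 0.44704
        return v * latency_s + v ** 2 / (2 * decel)

    print(stopping_distance_m(25))   # ~20 m: modest sensor range suffices
    print(stopping_distance_m(65))   # ~90 m: a much harder sensing problem
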
dheera
I am curious about what percentage of accidents are caused by the driver falling asleep, and if a lot of lives could be saved by a driver assistance system that self-drives with even the crudest of sensors until the driver is successfully woken up, which would probably be several seconds later.
samstave
At a minimum, a self-stopping car should be the basic requirement: any vehicle that senses a lack of input from the driver, or erratic input that contradicts what its sensors perceive, should turn on a hazard alarm for all surrounding vehicles and safely stop itself on the nearest shoulder.

If all sensors indicate that there is no emergency obstacle requiring sudden swerving or braking, the car should be able to safely decelerate and stay in its lane, etc... but this might be too complex a problem. (I was thinking of a driver having a seizure or some other episode - though if a driver is under duress, intervening could also be a bad thing.)
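
The trigger half of that scheme fits in a few lines; every threshold here is invented, and a real system would be far more careful:

    def driver_incapacitated(idle_time_s, steer_deg, expected_steer_deg):
        # "lack of input", or input that contradicts what the sensors see
        no_input = idle_time_s > 5.0
        crazy_input = abs(steer_deg - expected_steer_deg) > 30.0
        return no_input or crazy_input

    # if True: hazards on, then the controlled-stop behavior sketched
    # earlier in the thread (hold the lane, bleed off speed, pull over)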

gjem97
I don't know why you're getting downvoted. I think watching highway death numbers is an excellent way to judge the success of self-driving cars.
Jun 17, 2017 · Animats on Tesla Autopilot
Compare Chris Urmson's SXSW talk from 2016 about Google's system.[1] Look carefully at the graphics which show what the sensors are detecting, and compare with Tesla's. Watch Urmson discuss hard cases, problems, and failures. Not seeing that with Tesla. Or Uber. Or Cruise. Or Otto.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

NicoJuicy
Otto is Uber now ;)
jaimex2
Cool, when is it being released?
Animats
Waymo's Early Rider Program is now running in Phoenix AZ.[1] They have soccer moms and their families in self-driving minivans.

[1] https://www.youtube.com/watch?v=6hbd8N3j7bg

espes
self-driving minivans with human safety drivers
resf
Tesla want a system that works all the time, even if that means it kills some pedestrians once in a while.

Waymo want a system that has a zero accident rate, even if that means it can't operate in the rain.

It's in his SXSW talk.[1] It's the Google/Waymo and Volvo position, those being the two companies closest to a product.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

unityByFreedom
2016 seems a bit late. Also, hindsight is 20/20, and I don't work on self-driving cars myself.
Except that Google was able to handle construction sites, traffic cones, and people directing traffic last year. Watch Urmson's video from SXSW.[1] Google has put a lot of effort into handling unusual situations. Those cars driving on residential streets in Mountain View and Austin have accumulated data on the hard cases.

The California DMV continues to publish autonomous vehicle crash reports. The last one was in October.[2] A Google vehicle was rear-ended at 3mph when it was entering an intersection with poor visibility, detected cross-traffic, and stopped.

I'm looking forward to the 2016 self-driving disengagement reports appearing on the California DMV site.[3] Those were due January 1, covering the year ending 1 December 2016. Then we'll have comparative info on how all the players are doing.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik [2] https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/auton... [3] https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disen...

jayjay71
I'm curious: when do you think a stage 3, 4, and 5 vehicle will be readily available on the market (specifically the United States)?
ghaff
My personal expectation is that Level 3 could be relatively soon given that companies like Volvo are doing legitimate pilots. Maybe 10 years? But the question would be how many places they would (legally) be allowed to drive without an attentive driver and under what conditions.

It's not clear to me how much L4 buys you as a consumer product. "No driver during defined use case." If you can't go door to door in a wide range of weather conditions, what's the difference between having a non-attentive driver and no driver?

ADDED: Yes, as was written elsewhere, low-speed campus use cases make sense. But that's not what most people consider self-driving.

Level 5. Many decades.

Animats
Level 3 on major freeways in about two years. (Like Tesla claimed to have, but without the "slams into cars parked at the left edge of the road" bug.) Level 4, in the form of self-driving low-speed vans in campus-type areas, has already been deployed in a few locations.

Volvo loaned a family a Level 3 car yesterday.[1]

[1] https://www.engadget.com/2017/01/09/volvo-tests-self-driving...

Dec 26, 2016 · 1 points, 0 comments · submitted by rfreytag
Dec 26, 2016 · Animats on How Self-Driving Cars Work
This article seems to be a summary of Chris Urmson's talk at SXSW.[1] That's worth a watch. It gives a good overview of how Google does it.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

Yeah, me too; I was in the 2005 Grand Challenge. Vision processing has made enormous progress since then. Here's Mobileye's guy explaining what they do and what needs to be done.[1] Here's the NVidia guy.[2] Here's Chris Urmson from Google.[3] All three use their vision system to draw boxes around things that look like obstacles. Then the planner uses those boxes to construct a path. The vision system just gives you a rectangular box in image space; the LIDAR based systems turn point clouds into simple 3D solids.

These vision systems do OK on things that look like typical road obstacles - cars and people. Other stuff, not so much. Only Urmson from Google talks about that much. The videos from the vision systems don't seem to be doing much with road edge clutter. They're not marking every post, fireplug, and low-hanging branch. Train a deep learning system on cars, trucks, pedestrians, and bicycles, and your system sees only cars, trucks, pedestrians, and bicycles.

These vision systems do a lot less than many people think they do. Watch the videos.

[1] https://www.youtube.com/watch?v=GZa9SlMHhQc [2] https://www.youtube.com/watch?v=2NGnvGS0AtQ [3] https://www.youtube.com/watch?v=Uj-rK8V-rik
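
A stripped-down sketch of that boxes-into-planner handoff (the types and margin are illustrative). The failure mode described above falls straight out of it: anything the vision system never boxed simply doesn't exist for the path check.

    from dataclasses import dataclass

    @dataclass
    class Box:
        x: float          # obstacle footprint in road coordinates, meters
        y: float
        w: float
        h: float
        label: str        # "car", "pedestrian", "bicycle", ...

    def path_is_clear(path_points, boxes, margin=0.5):
        # The planner only knows about what the detector boxed.
        for px, py in path_points:
            for b in boxes:
                if (b.x - margin <= px <= b.x + b.w + margin and
                        b.y - margin <= py <= b.y + b.h + margin):
                    return False
        return True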

kchoudhu
Aww yeah -- another Challenger! What team were you on?
Animats
I ran Team Overbot.[1] We were way overdesigned for off-road and underdesigned for going fast.

[1] http://www.overbot.com

kchoudhu
Same for us -- speed was not a consideration at all. We figured that if we could keep going and finish, we'd be one of the top 3 teams.

As it turned out, our mechanical engineering was great. What got us was software: we failed to free memory for passed obstacles, ran into memory exhaustion issues as a result and crashed out at mile 9. When we fixed the leak and reran the course, the truck finished in just over 7 hours.

It was an object lesson that garbage collection won't save you from memory-related issues. In retrospect, it was also the first time I ever encountered a million-dollar bug. D'oh.
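
The failure mode in miniature (paraphrasing the bug, not their actual code): a garbage-collected language still leaks if you keep references to things you no longer need.

    import math

    passed_obstacles = []                     # grows for the whole run

    def on_obstacle(obstacle):
        passed_obstacles.append(obstacle)     # bug: never pruned -> OOM

    def on_obstacle_fixed(obstacle, vehicle_pos, keep_radius=100.0):
        passed_obstacles.append(obstacle)
        # drop anything far behind the vehicle so the GC can reclaim it
        passed_obstacles[:] = [
            o for o in passed_obstacles
            if math.dist(o.pos, vehicle_pos) < keep_radius
        ]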

dzhiurgis
What was the second time?
kchoudhu
I ran across quite a few of them while working tech at an investment bank. Never anything that resulted in direct losses, but implied missed revenue -- definitely.
Google is definitely level 3. Their presentation at SXSW this year shows it:

https://youtu.be/Uj-rK8V-rik?t=17m54s

ocdtrekkie
Nobody's saying their cool little 3D renders aren't awesome-looking, but that doesn't really mean much. Google has PR down to an art form, but if you asked Google to drive one of their cars to Chicago, they couldn't do it, no matter what the weather. The cars only work in a small, nearly closed-course environment.
studentrob
> if you asked Google to drive one of their cars to Chicago, they couldn't do it

Nobody here is making that claim.

If Google operated a taxi service within Austin, we would say the car operates at level 3. SAE levels say nothing about where the car is operating:

"At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests"

Jul 17, 2016 · Animats on Misfortune
Google does a lot of driving in simulation, using data captured by real vehicles. They log all the raw sensor data on the real vehicles, so they can do a full playback. They do far more miles in simulation than they do in the real world. That's how they debug, and how they regression-test.

For near-misses, disconnects, poor decisions, and accidents, they replay the event in the simulator using the live data, and work on the software until that won't happen again. Analyzing problems that didn't rise to the level of an accident gives them far more situations to learn from; they're not limited to just accidents.

See Chris Urmson's talk at SXSW, which has lots of playbacks of situations Google cars have encountered.[1]

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik
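
A minimal sketch of that log-replay regression loop; all the names are illustrative, and the real harness is obviously far more elaborate:

    def regression_suite(logged_drives, build_stack):
        failures = []
        for drive in logged_drives:
            stack = build_stack()              # fresh copy of the new build
            for frame in drive.sensor_frames:  # replay raw recorded data
                decision = stack.step(frame)
                if decision.risk > drive.acceptable_risk:
                    failures.append((drive.drive_id, frame.timestamp))
                    break                      # this scenario regressed
        return failures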

On a related note, here's Chris Urmson's long talk at SXSW on how Google does automatic driving.[1] This is worth watching. He shows what Google cars sense and gives an overview of how they analyze the data. This is the first time Google has released this level of detail. If you're going to comment on automatic driving, you need to see this.

Urmson goes into great detail on Google's 2mph collision with a bus. The sensor data from the Google car is shown. The video from the bus (it had cameras, too) is shown. Exactly what the software was trying to do is discussed. The assumptions the software made about what the bus driver would do are discussed. What they did to prevent this from happening again is mentioned.

Most of the talk is about the hard cases. In the beginning, Google developed a highway driving system, but that was years ago. Now they're working hard on dealing with everything that can happen on a road, including someone in an electric wheelchair chasing ducks with a broom.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik

studentrob
Cool! Thank you for sharing this.

Here's the part where he starts talking about the 2mph bus accident [1]. He says the self-driving car should not have moved. He does not say why. He says they ran a lot of tests to make sure it will not happen again.

In plain English, I'd say: when the SDC is stopped on the side of the road, give traffic coming from behind sufficient time to clear out before the SDC moves forward. Also, if an accident is imminent, staying put is probably safer and less likely to create liability for the SDC than moving. Plus, rolling over some sandbags is probably preferable to being hit by a bus.

[1] https://youtu.be/Uj-rK8V-rik?t=21m45s
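
That plain-English rule translates almost directly into a gap-acceptance check (the 4-second gap and the track fields are my inventions):

    def safe_to_pull_out(rear_tracks, min_gap_s=4.0):
        # Stay put until everything approaching from behind is either
        # past us or far enough away in time.
        for track in rear_tracks:
            if track.closing_speed_ms > 0:
                time_to_reach_s = track.distance_m / track.closing_speed_ms
                if time_to_reach_s < min_gap_s:
                    return False
        return True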

This was a good presentation by the Google team: http://www.youtube.com/watch?v=Uj-rK8V-rik

I believe they say in there that they got highway driving down really early on. They have been focusing on urban driving for the last few years.

When you are barreling down a highway at 65 miles per hour and are not paying attention (and you wouldn't be, because the car drives itself just fine), giving you control is much more dangerous (for you and others around you) than not.

Urmson talks about it here: https://youtu.be/Uj-rK8V-rik?t=14m3s

Mar 29, 2016 · 1 points, 0 comments · submitted by msoad
Mar 29, 2016 · 3 points, 2 comments · submitted by msoad
visarga
Good to see this update. Sometimes it seems that very little is known about their progress.
sigmar
Really interesting talk. The accident with the bus is explained at 22 minutes. Also: hilarious wheelchair duck chase at 26:15.
They do well in rain, but snow remains a problem. It's not icy roads (at SXSW Chris Urmson said they do well on slick roads); it's that the "better than GPS" LIDAR-based location tracking stops working when snow is on the ground.

The car simply cannot figure out where it is when there's a significant layer of snow on the ground, because the reference maps stop looking like what the car is seeing.

https://youtu.be/Uj-rK8V-rik?t=46m22s
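
The "lost" condition is easy to picture as a map-matching score collapsing. Occupancy-grid overlap is a crude stand-in for what a real localizer computes, but the failure shape is the same:

    import numpy as np

    def localization_confidence(live_scan, map_patch):
        # Both inputs: boolean occupancy grids over the same ground area.
        # Snow cover changes what the LIDAR sees, so overlap with the
        # prior map collapses and the car stops trusting its position.
        overlap = np.logical_and(live_scan, map_patch).sum()
        union = np.logical_or(live_scan, map_patch).sum()
        return overlap / union if union else 0.0

    # e.g. if confidence < 0.4: declare localization lost and hand off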
