HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Tesla Live Stream – Autonomy Day

livestream.tesla.com · 360 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention livestream.tesla.com's video "Tesla Live Stream – Autonomy Day".
Watch on livestream.tesla.com
livestream.tesla.com Summary
Tesla is accelerating the world's transition to sustainable energy, offering the safest, quickest electric cars on the road and integrated energy solutions. Tesla products work together to power your home and charge your electric car with clean energy, day and night.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Apr 22, 2019 · 360 points, 524 comments · submitted by kiddz
Traster
I'm sorry but I'm finding it really difficult to watch this and match up what the engineer is saying to what Elon Musk is saying.

For example, the engineer says the custom ASIC does 144 TOps for 2 chips vs the NVidia Drive Xavier's 21 TOps. Okay, well yeah, I expect your custom ASIC does have a nice performance advantage over the equivalent GPU. A 3.5x advantage probably seems reasonable. Cue Elon Musk:

"At first it seems improbable: how could it be that Tesla, who has never designed a chip before, would design the best chip in the world? But that is objectively what has occurred. Not the best by a small margin, the best by a huge margin."

Mate, it's a dot product with some memory attached, and not a single detail your half hour deep dive has gone into suggests anything other than a bog standard ASIC.

"All the cars being produced right now have all the hardware necessary for full self-driving"

And this is where I'm totally lost. I want to believe! But he's lied so many times now. This man is sucking the credibility out of every engineer in the room. Don't repeat the same lie twice.

mrtron
I think the part that Elon, and the engineer, are trying to express, and that you are looking past, is that "a dot product with some memory attached" is exactly what they need to do low-latency image inference on the camera data.

They don't need a multipurpose CPU/GPU, they have a single well defined task that dominates their compute needs. They built a chip to do this that is cost and power effective, and redundant for safety.

Traster
Right, but Nokia's 5G backhaul infrastructure is far more capable of doing mobile backhaul than a GPU; you don't get the head of Nokia standing up in a press conference telling everyone his team has designed the best chip in the world.
dragontamer
> For example, the engineer says the custom ASIC does 144 TOps for 2 chips vs the NVidia drive Xavier - does 21 TOps. Okay, well yeah I expect your custom ASIC does have a nice performance advantage over the equivalent GPU. at 3.5x advantage probably seems reasonable. Cue Elon Musk:

Here's my issue with that. The on-chip SRAM is only 32MB, and the RAM is LPDDR4 rated at only 68GB/s.

Assuming a dot-product (multiply + add) over INT8 data, that's 2 operations per byte the RAM can move, or 136 GIOPS (giga-INT8 operations per second) at 68GB/s. You're limited by RAM, based on what I've seen in the presentation.

Unless their neural net is 32MB and fits entirely in on-chip SRAM. That seems unlikely to me...
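The back-of-envelope math above, as a sketch (assuming zero weight reuse, which is the pessimistic case):

```python
# Back-of-envelope: if every INT8 weight is fetched from DRAM exactly once
# and used for a single multiply-accumulate (2 ops), throughput is bound
# by memory bandwidth, not by the multipliers.
DRAM_BW = 68e9        # bytes/s, the LPDDR4 figure from the presentation
OPS_PER_BYTE = 2      # one multiply + one add per INT8 weight fetched

bandwidth_bound_ops = DRAM_BW * OPS_PER_BYTE
print(f"{bandwidth_bound_ops / 1e9:.0f} GOPS")  # 136 GOPS, vs the 144 TOPS peak
```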

grandmczeb
Convolutions have a much higher arithmetic intensity than dot products.
cromwellian
But in terms of CNNs, it's basically a dot product, is it not? Can you give an example of a CNN where it isn't?
dragontamer
He has a point. There's a fair amount of data-reuse in CNNs.

Hmm... it will depend on the CNN. There's probably a good neural network design that would take advantage of this architecture, i.e. a well-recycled convolutional layer that fits within the 32MB (load those weights once, use them across the whole picture).

So the whole NN doesn't necessarily have to fit inside of 32MB to be useful. But at least, large portions have to fit. (say, a 128x128 tile with 20 hidden-layers is only 300kB). Recycling that portion across the 1080 x 1920 input would be relatively cheap.

I herp-derped early on; there seem to be CNNs that would make good use of the architecture. Still, the memory bandwidth of that chip is very low. I'd expect GDDR6 or HBM2 to be clearly superior to the 68GB/s LPDDR4 they put in there.
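A rough check of that ~300kB figure (assuming INT8 activations and a single feature map per layer, which is a simplification):

```python
# Size of a 128x128 tile carried through 20 layers, assuming INT8
# activations and one feature map per layer (a simplification).
tile_h = tile_w = 128
layers = 20
bytes_per_activation = 1  # INT8

tile_bytes = tile_h * tile_w * layers * bytes_per_activation
print(f"{tile_bytes / 1024:.0f} KiB")  # 320 KiB, comfortably inside 32MB SRAM
```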

grandmczeb
In a conv layer, weights are shared across many input features. E.g. assume a 1x1 conv layer with a 28x28x3 input. You only need to load 3 weights even though there are effectively 28x28=784 different dot products. In practice, the input and output activations can be stored on chip as well (except for the first layer), which means the ratio of operations to DRAM accesses can be incredibly high. For some real-world examples, take a look at the classic Eyeriss paper[1], which finds ratios of 345 and 285 for AlexNet and VGG-16 respectively. You can also check out the TPU paper[2], which places the ratio at >1000 for some unnamed CNNs. Compare that to your analysis, which yields a ratio of 2.

[1] https://people.csail.mit.edu/emer/papers/2017.01.jssc.eyeris...

[2] https://arxiv.org/pdf/1704.04760.pdf
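The 1x1 conv example, worked through (assuming a single output channel and INT8 weights):

```python
# 1x1 conv over a 28x28x3 input with 1 output channel: the 3 weights are
# fetched from DRAM once; activations stay on chip.
h, w, c_in = 28, 28, 3
weight_bytes = c_in          # 3 INT8 weights
macs = h * w * c_in          # one multiply-accumulate per input element
ops = 2 * macs               # multiply + add counted separately

print(h * w)                 # 784 dot-product positions
print(ops / weight_bytes)    # 1568.0 ops per DRAM byte, vs 2 with no reuse
```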

Joky
I think the key thing is that you assume a single dot-product per RAM load. In general you get higher compute performance by doing more than this. You load weights for a layer in the SRAM, then stream from RAM tiles of data that you can compute multiple operations on.

In the same way, to reach peak FLOPs on a GPU you better use the local/shared memory as much as possible.
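A toy sketch of that weight-stationary pattern (the function name and tile size are illustrative, not Tesla's actual scheme):

```python
# Weights stay resident (as if in SRAM); input pixels stream in tile by
# tile, and each fetched tile is reused for many multiply-accumulates.
def conv1x1_stream(pixels, weights, tile=64):
    """pixels: list of per-pixel channel tuples; weights: channel weights."""
    out = []
    for start in range(0, len(pixels), tile):      # one "DRAM" burst
        for px in pixels[start:start + tile]:      # many ops per burst
            out.append(sum(p * w for p, w in zip(px, weights)))
    return out

print(conv1x1_stream([(1, 2, 3), (4, 5, 6)], (1, 1, 1)))  # [6, 15]
```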

shaklee3
Nvidia already responded:

"Tesla was inaccurate in comparing its Full Self Driving computer at 144 TOPS of processing with Nvidia Drive Xavier at 21 TOPS," a spokesman said in an email. "The correct comparison would have been against Nvidia's full self-driving computer, Nvidia Drive AGX Pegasus, which delivers 320 TOPS for AI perception, localization and path planning." The statement also contends that "while Xavier delivers 30 TOPS of processing, Tesla erroneously stated that it delivers 21 TOPS."

Robotbeat
From what I understand, Pegasus consumes about 500 watts, compared to under 100 watts for Tesla's FSD computer. Elon in particular emphasized the performance per watt (as it's always possible to cram in more chips to increase performance if you ignore cost and power consumption).

The comparison made in the video: 500 watts for an hour consumes about 2-3 miles of range. In a city in slow traffic, going 12mph, that's a significant range reduction. So you might have a ~10% improvement in range for the Tesla ASIC in low-speed conditions.
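Roughly, assuming ~250 Wh/mi driving consumption (my assumption, not a figure from the talk):

```python
# Share of the energy budget eaten by the compute hardware at city speed,
# assuming ~250 Wh/mi driving consumption (an assumption, not a talk figure).
WH_PER_MILE = 250
SPEED_MPH = 12

def range_penalty(compute_watts):
    driving_wh_per_hour = SPEED_MPH * WH_PER_MILE   # 3000 Wh per hour at 12mph
    return compute_watts / (driving_wh_per_hour + compute_watts)

print(f"{range_penalty(500):.1%}")  # ~14.3% for a 500W computer
print(f"{range_penalty(72):.1%}")   # ~2.3% for a 72W computer
```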

dragontamer
It's incomparable, though. The Pegasus has far more compute power.

Just because the Pegasus has 500W worst-case TDP doesn't mean that its average case would be 500W constant. If you scale back your code and idle parts of the GPU, you can drop the energy cost arbitrarily.

At least, that's how GPUs on desktops work. They only use a ton of power if you give them a ton of work. Write your code in an energy-efficient manner, and the 2080 Ti will drop down to 20W, or scale all the way up to 300W.

https://www.tomshardware.com/reviews/nvidia-geforce-rtx-2080...

Modern chips idle very well. With the right code, the Pegasus could be tuned to only use 100W (assuming good enough programmers). But Tesla's chip will NEVER be able to scale above 100W.

Robotbeat
Pegasus's chip is more general purpose and doubtless has far more general purpose compute power. But that's irrelevant. What's relevant is Tesla's chip is optimized specifically for their NN pipeline whereas Pegasus is based off of general purpose GPU architecture, thus Tesla's chip achieves a better TOPS per Watt than Pegasus. And it'd be strange if it didn't.

"Tesla's chip will NEVER be able to scale above 100W" okay, based on what? Tesla has a higher performance chip in the pipeline right now, and they could've used more silicon to achieve more TOPS if they needed it.

EDIT: Pegasus has 500W at 320 TOPS. Tesla's has 72 watts at 144 TOPS. Thus Tesla's chip, because it focuses specifically on Tesla's NN pipeline, gets almost 4 times the performance per watt of the Pegasus and is much cheaper. Tesla's NN chip wouldn't help your video game, and Tesla isn't intending to compete in all the markets Nvidia operates in.
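Taking both vendors' headline numbers at face value (Nvidia disputes the comparison elsewhere in this thread):

```python
# TOPS per watt from the headline numbers quoted above.
pegasus_tops_per_w = 320 / 500        # 0.64 TOPS/W
tesla_tops_per_w = 144 / 72           # 2.0 TOPS/W
advantage = (144 * 500) / (72 * 320)  # exact arithmetic: 3.125x

print(advantage)  # 3.125 -- "almost 4 times" rounds this up a bit
```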

dragontamer
> Pegasus has 500W at 320 TOPS. Tesla's has 72 Watts at 144TOPS.

Theoretical TOPS which can only ever execute within the 32MB SRAM that Tesla has created. Otherwise, Tesla's compute chip is stuck at 68 GBps LPDDR4 RAM. Pretty slow.

Pegasus uses HBM2 chips at 500GBps. Pegasus will be able to efficiently compute neural networks that are larger than 32MB in size.

Tesla is making big bets about this tiny 32MB SRAM. Bits and pieces of the CNN can fit in there, but almost certainly not the entire neural network.

You're right that this is a specialized chip. But even for NN / deep learning inference, it seems a bit underpowered to me from a RAM perspective.

Robotbeat
Specmanship doesn't matter. What matters is how fast it's able to execute on the task at hand and for what cost in terms of purchase price and energy.

Nvidia's offering can be really good, and so can Tesla's.

Fuzzwah
>> how could it be that Tesla

> Mate, it's a dot product with some memory attached, and not a single detail your half hour deep dive has gone into suggests anything other than a bog standard ASIC.

There you go, you answered the question Musk put forward. Grats.

tigershark
Apparently it's actually less than half the performance of Nvidia's solution: https://www.marketwatch.com/story/nvidia-says-tesla-inaccura...

https://www.nvidia.com/en-gb/self-driving-cars/drive-platfor...

tomComb
I'm a fan of Tesla, but not Musk. You really can't believe much that comes out of his mouth. If you look at his past claims it becomes clear that he should just be ignored.
davej
> If you look at his past claims it becomes clear that he should just be ignored.

What's an example of this? His timelines are often much too aggressive for reality but he repeatedly delivers (eventually).

tomComb
The main example, of course, is in self-driving where he has most certainly not delivered. And he wasn't just a little off.

But also his claims about being able to automate the manufacture of Model 3 turned out to be very wrong and to do terrific damage to the company - they are/were way ahead with their technology and should have taken a less risky approach to getting that tech onto the market and taking advantage of that lead. And how did he not learn from the model X experience?

bobsil1
He's more credible on physics and hardware than software.
dwighttk
hasn't delivered full self driving (yet)
kurtisc
>he repeatedly delivers (eventually)

This is essentially unfalsifiable. If claims about FSD or a person on Mars turn out to be wrong, this statement can only be proven wrong when Elon Musk chooses to admit defeat (suffering a significant personal financial loss as a result) or when he retires.

willio58
Being a fan of a company while completely ignoring the CEO of said company is questionable to me.
tomComb
Tesla has great products, created and supported by thousands of people who appear to go above and beyond. Kudos to them. And kudos to Musk for the vision and the early execution, but that doesn't mean I have to like him or believe that he is the right person for this phase of the company's growth.
cma
Is it really an ASIC? I thought they said over and over it is full custom.
jakobegger
Definition from wikipedia:

> An application-specific integrated circuit (ASIC /ˈeɪsɪk/) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use.

Is there anything more "custom" than an ASIC?

cma
I suppose both are still called an ASIC. What I'm wondering is if it is gate-array/semi custom

https://en.wikipedia.org/wiki/Application-specific_integrate...

or full custom:

https://en.wikipedia.org/wiki/Application-specific_integrate...

ehmish
This doesn't seem wildly inconsistent with my understanding of Elon Musk, in that he has a higher level of confidence in his understanding of the world than others would, and in a "you don't know what you don't know" way it seems he didn't know how much more efficient custom ASICs are at specific tasks than a general-purpose computer. That doesn't make him a bad person, and in fact someone like that is exactly who you want leading a company trying to change the rules of the industry they're in; it's the job of the boots on the ground to exercise due diligence here.
unityByFreedom
Sorry, what? Is this an auto-generated comment?
unityByFreedom
I don't understand this response to the above comment. You write,

> it seems he didn't know about custom how custom ASICs are much more efficient at specific tasks than a general purpose computer

Yet, Musk is clearly stating his new chip is state-of-the-art. He is not underselling it.

> something like that is exactly who you want leading a company trying to change the rules of the industry they're in

You want someone in charge who does not understand the hardware he's touting?

> it's the job of the boots on the ground to exercise due diligence here

Who are you referring to? Employees and investors?

What, then, do you consider are the responsibilities of the CEO, if due diligence is not part of the job?

RivieraKid
I watch Tesla fairly closely from both the bull and bear sides. In short, I don't believe Tesla is anywhere close to Waymo. They won't achieve FSD in 2020.

It's important to see this event in the context of their significant demand and cash problems. Even enthusiastic Tesla investors like Galileo Russell suggested that Tesla is in a cash crunch and should raise money. Which hints at the main mystery about Tesla: why haven't they raised money yet?

Take a moment and think about why they are doing this event now. Elon is setting the stage for a capital raise; he's pitching the autonomy narrative after the Model 3 cash-cow narrative failed. They're trying to convince investors (and customers) to give them money because a money-printing autonomous taxi service is coming next year.

Also, think about why basically everyone except Tesla uses Lidars. Is it because they're stupid or because Tesla cannot use Lidars even if they wanted to?

P.S.: Nvidia issued a statement saying that Tesla's claims about their chip are incorrect: https://www.marketwatch.com/story/nvidia-says-tesla-inaccura...

Edit: In 2012, Waymo reached the milestone of handling 8 100-mile routes, specifically chosen to capture the full complexity of driving. I doubt Tesla is currently at that level. Source: https://events.technologyreview.com/video/watch/dmitri-dolgo...

hi5eyes
https://www.latimes.com/business/la-fi-hy-tesla-musk-panason...

the fact that suppliers don't like them has been a red flag for a while now

bulls and dreamers don't want to pay any attention to the very imminent problems with tesla's financials and furthermore tesla's executive record

there's a very good reason bears refer to him as fraudboy

but watching him scam taxpayers/unfortunately gullible people/govt (my fav ex. battery swap zev credits) while people fan over him and his company is pretty amazing

also the list of lawsuits...

lnanek2
Tesla needs to use lidars too and frequently crashes into things lidars would save them from (white trucks that look like the sky, etc.). Unfortunately, Tesla promised that the cars they were shipping had all the equipment needed for self-driving, so they can't say that publicly. Lidars would also bump up their cost by thousands, resulting in fewer sales.
Gys
You clearly did not watch the video stream
dblotsky
Aside from the timing of this event, why else are you convinced that Tesla is nowhere close to Waymo?
eanzenberg
2020? Try 2040.
antpls
If I were a billionaire and I wanted to change the world, I would put pressure on existing companies by creating a competing company and claiming extraordinary achievements.

Truth or not, failure or not, Musk moves forward the hopes and expectations, and I'm thankful just for that.

I don't see how we can blame him for taking a position.

computerex
Wow. This sums up the incredible attitude of some Musk followers. And I say follower because it is literally like a cult. To this individual, Musk can do no wrong.
Traster
What you're describing is securities fraud.
Robotbeat
Saying what your future goals are, while clearly and repeatedly reminding people of your past history of being late on those goals, is the opposite of fraud.
2bitencryption
...out of all the things I expected to see from a Tesla stream, I definitely did not expect a 25 minute discussion on the cost of 32bit additions, dot products, sram bandwidth, and chip design, at the level of a third year college hardware course.
DeonPenny
I keep saying this: he knows what he's doing. You can tell he knows what's happening. It's rare for a CEO to know this level of detail.
navigatesol
Funny, I get the opposite impression; the more I hear Musk talk about technical specifications, the less I think he actually understands.
penny45
Can you elaborate on this and give us some specific examples?
selectodude
Which might explain why he's a crummy CEO. If your job is to strategize for the entire company and you're busy learning the minutiae that you spend a lot of money for other people to worry about, maybe you need to consider using your time a little better. If you want to build neural nets, keep your stock, fire yourself and go build neural nets.
DeonPenny
But then you have the major car company problem, and the phone companies before Steve Jobs: people who don't know what's possible, don't know who the best people to hire are, and can't guide a vision. If you know capacitive touch screens are possible even though no one has ever tried them, it's easier to implement. Same with Google and how their CEOs are all engineers, so pitching the CEO on an AI project becomes easy.
imtringued
Actually being aware of the technical details of your core business is a very useful skill to have as a CEO. But it doesn't seem to actually result in any benefits at Tesla. He probably knows the limitations of neural networks, but that doesn't stop him from using the tried-and-true startup strategy of promising the world and delivering very late. Building a car that can stop at red traffic lights is something they should have had by 2016 already. But it's 2019, and the self-driving technology they demonstrated in their demo blatantly ignores the speed limits on the highway by roughly 10mph.
outworlder
> If your job is to strategize for the entire company and you're busy learning the minutiae that you spend a lot of money for other people to worry about, maybe you need to consider using your time a little better

This is what everyone that is not Tesla or SpaceX is doing. And has been doing for a long time. If the CEO is not an engineer at heart, what are they?

I seriously doubt Elon Musk has more engineering knowledge than the people he hires in their specific fields. However, he can make pretty well-informed strategic decisions if he knows WTF the engineers are talking about, without having to take their word for it – not even that, as explanations have to be dumbed down.

This is not a new thing. Bill Gates was like that (1). Steve Jobs was no dummy and had an engineering background, but not at the same level – he did partner with a genius engineer, however.

I think Musk is doing the right thing.

> Which might explain why he's a crummy CEO

That's quite debatable, I'd say. Isn't he getting results?

(1) https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

village-idiot
> This is what everyone that is not Tesla or SpaceX are doing.

I think it's very important to point out here that Elon does not run SpaceX. It's well known that he's a front man for that company, and that all day to day operations are run by an actual executive who does the job of being an executive. This is why SpaceX is doing much better than Tesla.

selectodude
>That's quite debatable, I'd say. Isn't he getting results?

Sure, if you call burning billions of dollars a result.

cjhopman
> This is what everyone that is not Tesla or SpaceX are doing.

If your goal is to make money, I'd follow anyone but Tesla or SpaceX's example.

breakyerself
How is SpaceX a bad example? They've managed to undercut everyone in the space launch sector while bringing in a larger profit per launch.
cjhopman
SpaceX isn't public or open about their financials, but there's a lot of evidence that they are simply losing money. E.g., they were raising a lot of money last year, and profitable companies don't do that. Also read https://www.theverge.com/2018/12/7/18129539/spacex-falcon-9-...
breakyerself
Yeah, that article seems like a lot of tea-leaf reading to me. All evidence points to SpaceX building orbital rockets cheaper than just about anyone else even before reuse; and even if not, reuse should drop their costs well below anyone else's until Blue Origin comes online or until ULA decides to shake off the crust of old age. It's not at all surprising that investors are told not to expect a profit for 15 years, because SpaceX is focused on growth and development. I wouldn't expect them to reinvest any less than 100% of their profits at this time. It makes just as much sense to assume they raised money to keep development moving at the pace they want, which wasn't fast enough using profits alone.

Despite the fact that I own a Model 3 and own shares in Tesla I often find the critics make good points and just hope for the best anyways because I want electric cars to take over and I want self driving cars to develop sooner than later because driving is dangerous and tedious.

The SpaceX hate seems much less justified to me. SpaceX has been building launching, landing, and reusing rockets for years now. They've ushered in a new lower cost space age.

toomuchtodo
Capital markets are the target audience. You must build the foundation to properly communicate the value proposition to investors, thereby justifying the company's valuation.
mattrp
If you’ve listened to their earnings calls this is par for the course. Say what you will about his tweets, but I find these discussions fantastically informative and transparent.
isoprophlex
absolutely glorious, they're taking questions. first question: can you use anything else besides ReLU activations in your neural net engine?

this is wonderful

2bitencryption
what a funny question.

"you say you can put bananas in your smoothie. just a thought, but perhaps, might I ask, do you have the capability to put strawberries as well, or are we not there yet?"

pixelpp
haha
chrisa
That was my first instinct to that question as well, but I think he was just asking if ReLU was required by the hardware design, or if it was possible to use other activation functions as well. If the ReLU was part of the hardware itself somehow, then it wouldn't be possible to use tanh or sigmoid (which may be better in certain situations); so I think he was just asking if ReLU was required, or if there was flexibility in the activation function.
2bitencryption
ah, makes sense. guess I was the fool :P
akhilcacharya
...is it?

I don't know the context here but it feels like they're trying to ML-splain Karpathy?

cr0sh
I'm not a NN expert, but based on what I have found, the point of using ReLU instead of other activation functions is to avoid what is called the problem of "vanishing gradients".

Basically (IIRC), during backprop the error difference gets ever smaller the further back in layers you go, ultimately getting "lost in the noise", making learning in the earlier layers more difficult to impossible.

I'm not saying ReLU is the only option to make this work, or that it's the only activation function that provides a "fix" for the issue; I'm sure there are other ways to deal with vanishing gradients that I don't know about.

I also lack the mathematical knowledge as to why ReLU helps in this manner, but I suspect it has something to do with the lack of "asymptotic structure" approaching the extremes (I don't know what the proper term would be). Or maybe it allows for some form of "forgetting", by preventing the multiplication of very small numbers (such values just go to zero ultimately)?

Maybe someone else here with the knowledge can explain it better, and we can both learn...?
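A quick numerical illustration of the vanishing-gradient point (a sketch, not how backprop is actually implemented):

```python
import math

# Backprop multiplies each layer's activation derivative into the gradient.
# Sigmoid's derivative peaks at 0.25, so the product shrinks geometrically
# with depth; ReLU's derivative is exactly 1 for positive pre-activations.
def sigmoid_grad(x):
    s = 1 / (1 + math.exp(-x))
    return s * (1 - s)

layers = 20
sigmoid_signal = sigmoid_grad(0.0) ** layers  # best case: 0.25**20 ~ 9.1e-13
relu_signal = 1.0 ** layers                   # 1.0 for positive inputs

print(sigmoid_signal, relu_signal)
```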

Qub3d
ReLU is useful, but the question as asked was more about other (also essential) functions for other steps in RNNs -- in particular, sigmoid "squash" functions, as well as max pooling.
grecy
Holy cow, Elon said a lot of things that will drastically change the world in the next 2-3-5 years. I think it's very, very clear they've been doing a lot of stuff behind the scenes that nobody has given them credit for over these last 5 years. Everyone thought they were just throwing everything at the Model 3 ramp and doing nothing else, but it turns out the opposite is true.

At the end Elon said autonomy is basically their entire expense sheet!

Whether or not Tesla can pull it off is obviously going to be an enormous topic of debate with haters and lovers on each side.

This is super, super exciting. I'm going to grab the popcorn and enjoy watching Tesla try. Whether they succeed or fail I admire them aiming so high, and planning so far ahead.

RivieraKid
It's so frustrating to see that people eat Elon's BS. Even after:

- the Unsworth lie

- the "zero concern and I mean zero about 10k Model 3s per week in Dec 2018" lie

- the "funding secured" and "only uncertainty is shareholder approval" lies

- the "short burn of the century" lies

I mean, they were even wrong about Nvidia's chip today, Nvidia had to issue a statement: https://www.marketwatch.com/story/nvidia-says-tesla-inaccura...

mlindner
There was no "Unsworth lie". There was Elon saying some very unkind things after Unsworth personally attacked him. Neither thing is appropriate, but let's not pretend this wasn't a tit-for-tat.
RivieraKid
He purposefully framed it as a seriously meant allegation, not as an insult. And then even doubled down on it.
reitzensteinm
When plotted on a logarithmic scale.
justapassenger
There's huuuge difference between telling someone to stick something up their butt VS repeating, multiple times, to millions of people, that someone is a pedophile and you have proof of that.
lonnyk
The tech specs of AGX Pegasus say they're 130 TOPS[1]. Or am I misunderstanding something?

[1] https://developer.nvidia.com/drive/drive-agx

mdorazio
Where are you getting 130 from that page? I don't see it. On this other page [1], it clearly says 320. "NVIDIA DRIVE AGX Pegasus™ achieves an unprecedented 320 TOPS"

[1] https://www.nvidia.com/en-us/self-driving-cars/drive-platfor...

lonnyk
If you click “tech specs (click to expand)” then it says it on there.

Thanks for your link though. I couldn’t find 320 anywhere

sharadov
And I think they are far ahead of the game compared to anyone else w.r.t. autonomous driving; they have a fleet sending them data back in real time.
rootusrootus
If the current state of autopilot is even tangentially related to their progress on full self-driving, then it's a lot more than a few years away.
jfoster
Which car available for purchase would you recommend looking at for something closer to full self-driving?
danhak
How recent is your experience? I’ve owned a Model 3 for over a year and the pace at which autopilot has been improving is staggering.
rootusrootus
I did a road trip this weekend and used autopilot for more than 400 miles on a brand new P3D (with the FSD option, limited though it is right now). Mostly freeway, some rural roads. Here are my observations, I assume it's pretty typical but obviously all I have is anecdata.

Lane following works effectively, but it wanders a bit. Mostly not enough to be bothersome, though sometimes it will hug one side or the other close enough to make me think about the big truck next to me. Sometimes a little late reacting to a curve, though not to the point of going across the line. Planned lane changes work pretty nicely. Identification of vehicles around the car is good towards the front and noticeably spotty to the rear quadrants. It reliably misidentifies the merging traffic lane as one big lane as soon as the line ends, and promptly drives to the right to straddle the 'middle' and then corrects as the line converges to the normal width.

Adaptive cruise control is flat out disappointing. Granted, I've never driven anyone else's automation, but it actually made me motion sick at times. I thought it did a really admirable job of handling people merging right in front of the nose of my car without jamming the brakes. Just smoothly slipped back until the right following distance was achieved. But in open traffic, when I came up on another car, it would abruptly slow down a bit too aggressively, and then fiddle with the speed a bit until it matched the leading car. If the lead car switched lanes, it waited until several seconds after the car was in the next lane and then abruptly sped up to the programmed max speed. There were times I could tell how frustrating this style was to people behind us who were not using a similar system.

Twice it braked suddenly, though not any kind of panic stop, and then immediately sped back up. Like it saw something concerning and then changed its mind a split second later. Made my wife a little nervous after the second time, but it didn't happen again the rest of our trip.

Overall I'd classify it as a moderately drunk driver :). I let it drive us into a couple moderately uncomfortable merge situations just to see how it would deal with traffic entering the freeway, and while it didn't get us in a wreck or anything, it didn't make me feel like AP is on the verge of becoming viable FSD.

For what it's worth, this was on I-5 in between Portland and Seattle, in mostly light traffic. We have what are, in my experience, pretty easy relaxed freeway configurations. Long ramps, no ramp lights, etc.

I borrowed the P3D so I could have myself an extended test drive to decide if I really wanted a Model 3 or not. Wicked fast (which is still an understatement); I love the power but would probably choose to steer it myself for the foreseeable future. I wish they'd put in nicer seats (like, my legs were getting numb at one point!), just standard Recaros like everyone else would be perfectly nice. I think I've decided that I really like it, but I'm going to wait for the Model Y before pulling the trigger. I will appreciate the additional head space and seat height, and I'm hoping that in the meantime they do v3 or v4 of the seats.

zaroth
Thank you for taking the time to write up such a nice trip report!

Some of your experiences are similar to mine. I think you have seen the improved merge handling, in prior versions zipper merging in heavy traffic had me disengaging due to getting too close to the merging car coming into my lane.

I have also experienced the unexpected brake pulse during adaptive cruise on a couple of occasions and would really like to have seen some indication on the guidance display of what caused it. The delay when a car moves out of lane (or turns right onto a side street ahead of you) is definitely too long. The stop-and-go following, however, I think is extremely smooth, and I commute in heavy highway traffic 3-4 days a week and love it. I think the number of wheel-touch requests from AP has decreased significantly on the highway when speed is low / in traffic; I had maybe 2 or 3 touch requests in 20 minutes of heavy traffic this afternoon, although I may have adjusted the volume and skipped tracks in between a couple of times (which counts).

I wonder if the seats have changed? I find the drivers seat extremely comfortable. I know the back seats were revised since TM3 started shipping. I love the lack of a model year, they are always refining and enhancing the base hardware.

Marsymars
> I love the lack of a model year, they are always refining and enhancing the base hardware.

If I have a list of a half-dozen refinements I want which weren't included in the car at launch, how can I tell if a given used car for sale has them? Is there any type of master list of features/changes/dates?

inerte
I actually took a ride on Waymo in Tempe, AZ a few weeks ago. Opened an app, summoned a car, driver never touched the wheel, got to my destination in 15 minutes. What makes you say Tesla is far ahead of that?
justapassenger
A quick look at their earnings shows it's not true. With the number of cars they have, even sending a tiny % of the data back home would require Tesla to spend billions on datacenters.
gamblor956
This is definitely not true. The engineer explicitly stated that Tesla only gets data when trigger conditions are met (and that this was in fact always the case), so most of the miles that Tesla drivers have been driving were never even sent back to the mothership for use in ML training.

IOW, despite Elon claiming for years that they're ahead because of the "millions of miles" of free data collection by Tesla drivers, it turns out once the engineers on the ground actually speak...they're actually pretty far behind on the data front, and most of the data they are getting is the same safe routes over and over again, since they're not trying to put Autopilot into novel situations to acquire new data or test the ML algorithm's capabilities to handle unique/novel situations (like trucks turning across the highway, or freeway dividers).

identity_zero
Isn't the "millions of miles of free data" the edge cases? Yes, they're not literally streaming back everything but they have millions of miles with people driving auto-pilot and sending back a significant amount of edge case scenarios, which is the truly valuable information. Sending back data of "successful auto-pilot" can be sending back 10 hours of driving on a straight two-lane road; which seems quite useless.

They also have cars all over the planet while Waymo seems to only be tested in a few cities. I can't find the podcast, but Elon mentioned how one of the main difficulties is the varying types of intersections. There are only so many intersections in Mountain View. But Tesla is getting data for all kinds of intersections.

andrewtbham
I have wondered about this myself... and I am working on a CV ML project myself... but data that is correctly predicted is not as useful. What you want are edge cases: images with low confidence, and errors. So when users take over and disengage, that is useful data. When the system beeps and forces a disengagement, this is the data you need. That has been my experience and I'm glad to hear them validate it. Your training data needs diversity.
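The selection heuristic described above (keep low-confidence frames and disengagements, drop confidently-handled ones) can be sketched roughly like this. The field names, threshold, and sample values are all hypothetical, just for illustration:

```python
# Sketch of an edge-case selection heuristic: keep frames where the
# model was uncertain or the driver took over, discard confident
# correct predictions. All names and values are invented.

def select_for_training(frames, conf_threshold=0.6):
    """Return only the frames worth uploading/labeling."""
    selected = []
    for frame in frames:
        uncertain = frame["confidence"] < conf_threshold
        if uncertain or frame["disengaged"]:
            selected.append(frame)
    return selected

frames = [
    {"id": 1, "confidence": 0.98, "disengaged": False},  # easy case, discard
    {"id": 2, "confidence": 0.41, "disengaged": False},  # uncertain, keep
    {"id": 3, "confidence": 0.90, "disengaged": True},   # driver took over, keep
]
print([f["id"] for f in select_for_training(frames)])  # [2, 3]
```

The point of filtering like this is exactly the diversity argument: a dataset dominated by confident correct predictions teaches the model almost nothing new.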
dreaminvm
They cherry-pick rare cases and use their fleet to get more examples of these situations. This seems like the right approach given more miles following the same car in a straight line is pretty useless.

My takeaway from the presentation is that Tesla will perform better than other companies in this space (although I don't know enough about Waymo to comment) due to the following:

-You want a large dataset (Tesla and many companies have this and can simulate)

-You want a varied/diverse dataset (Tesla and many companies have this and can simulate)--the point here is simulations work for simple cases (you can only simulate what you know), but for complex ones they are close to the difficulty of actual FSD

-You want a real dataset (Tesla is the only company who can say this and can say they have data on how X00Ks of drivers will handle these situations)

gamblor956
My point was that it appears that Tesla doesn't have a large and varied dataset. It has a small and pre-selected data set, since the cars only transmit data when pre-determined triggers are fired. Thus, it doesn't matter how many 1000s of drivers Tesla "has" or how many "situations" they're in, since it's not actually collecting data from most of these situations.

And Autopilot's performance (including its numerous regressions) suggests very strongly either that it doesn't have a very large data set, or else that it has a large data set of everyone doing roughly the same thing almost all of the time. These are the two most logical explanations for Autopilot's tendency to veer toward freeway dividers even (especially?) after updates.

millettjon
Shadow mode would also transmit data no?
Joky
> My point was that it appears that Tesla doesn't have a large and varied dataset. It has a small and pre-selected data set,

I don't think that is a fair characterization: first, the notion of "large" is fairly subjective. But more importantly, the fact that they collect data on pre-determined triggers is precisely what guarantees the dataset is not over-fitted (say, to the 280 and 101 in the Bay Area and to Elon's commute in LA) and instead has good coverage of the world.

Their capability of triggering on situations allows them to grow the dataset quickly in a supervised way. It is all about the granularity of these triggers: imagine that you can express "collect situations in tunnels with jerk higher than X m/s3" or "collect all lane change aborts in snow conditions", ... In the next 24h you get data from all over the world, and this data is automatically tagged and classified by the neural net.
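A fleet-side trigger like the ones described above can be sketched as a simple predicate evaluated against telemetry. The field names and the jerk threshold here are invented for illustration, not Tesla's actual schema:

```python
# Hedged sketch of a pre-registered collection trigger, e.g.
# "collect situations in tunnels with jerk higher than X m/s^3".
# Field names and threshold are hypothetical.

JERK_LIMIT = 2.0  # m/s^3, hypothetical threshold

def should_upload(snapshot):
    """Fire when this snapshot matches the registered trigger condition."""
    return snapshot["in_tunnel"] and abs(snapshot["jerk"]) > JERK_LIMIT

telemetry = [
    {"in_tunnel": True,  "jerk": 3.1},   # matches trigger -> upload
    {"in_tunnel": True,  "jerk": 0.4},   # smooth ride, ignore
    {"in_tunnel": False, "jerk": 5.0},   # not in a tunnel, ignore
]
print(sum(should_upload(s) for s in telemetry))  # 1
```

The granularity argument is that pushing a new predicate like this to the fleet lets you harvest exactly one class of rare situation worldwide, rather than streaming everything home.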

toomuchtodo
It makes a lot of sense. Everyone looked at Tesla and thought, "How are you going to scale up like every automaker that builds and sells millions of vehicles a year?"

"We're replacing miles with electric miles, not vehicles with electric vehicles".

opportune
or he's setting overly optimistic timelines as usual. But I would love to get excited this easily
gamblor956
None of what Elon said will change the world in the next 5 years...

It's also pretty clear based on the selective presentation of data that they're even further behind Waymo than everyone thought. This was the time to actually prove that they're ahead, and all they could give was more of Elon's hot air.

At the end Elon said autonomy is basically their entire expense sheet!

Must have missed that part, but if he actually said that it would have been a material misrepresentation of Tesla's financials, since they supposedly made a chunk of cars in Q1...OTOH, if the statement is true and Tesla basically made no cars in Q1, it would explain why Panasonic is refusing to make additional investments in the Gigafactory.

Whether they succeed or fail I admire them aiming so high, and planning so far ahead.

Other companies are capable of aiming high, planning, and succeeding. Tesla has yet to demonstrate the capacity to do items 2 and 3 on this list. Case in point: pretty much everything about the Model 3 launch, from building to distribution, the solar roof panel (and really, the entire SolarCity acquisition). I'm even going to throw BoringCo in there since it's based out of the Tesla lot and uses Tesla vehicles and engineers.

dougmwne
I took it as weasel words. The robotaxi business will be the vertically integrated capstone of everything Tesla does, therefore 100% of their expenditures are building towards the robotaxi.
vkou
And I was told for years by HN that actually, Tesla is a battery company.

Now it's a robotaxi company? What is it going to be next year, a debt restructuring company?

grecy
> None of what Elon said will change the world in the next 5 years...

Oh come on! If even half of what he says comes true in the next five years the world will not be the same.

Entire car rated to 1 million miles (including the battery pack) with minimal servicing

Level five autonomy at the end of 2020. Remove steering wheels and pedals from cars a year after that.

After robo-fleet goes live Tesla will be "massively profitable"

Cars that today cost $38k will make $300k + in the next 11 years for their owners in the fleet

etc.

> Other companies are both capable of aiming high, planning, and succeeding

He mentioned that all their cars since 2016 have had redundant power steering, redundant wiring and even redundant power. Even if it loses the main power pack it can still steer and brake safely.

Give me ONE example of another auto manufacturer that has thought far enough ahead to design that kind of redundancy into their vehicles planning for self driving? It simply does not exist in the industry.

The man really is seeing the future and trying his best to make it happen.

Like I said there will be a huge debate about if he can actually do it.... I'm just saying that if he can it's going to be a very exciting time to be alive

dwighttk
>Oh come on! If even half of what he says comes true in the next five years the world will not be the same.

I could say 20 things that if even 1 comes true in the next five years the world will not be the same!

grecy
You could, but you're not a billionaire running multiple companies that have already done a ton of things that many, many experts said could not be done.

Also note, I'm not saying Elon will do all or even half of them. I'm saying it's going to be damn exciting to watch, and I'm keeping my fingers crossed that he does, because I'd like the world to get better, and I'm happy to be an optimist.

justapassenger
> You could, but you're not a billionaire running multiple companies that have already done a ton of things that many, many experts said could not be done.

Like what? I often hear people saying that experts were saying it's impossible to land a rocket or make a good EV car. But I didn't hear experts saying that.

What I did hear is experts saying it's impossible to do it in a cost-effective way. And for that - the economics for SpaceX are still to be proven. Tesla is still bleeding money (apart from occasional financially engineered quarters).

That being said - while I think most of his companies are doomed to fail (due to gross mismanagement), I'm happy Elon is around. He lies a lot, and always overestimates by an order of magnitude what he can deliver. But he does inspire people to go into science and engineering - SpaceX's landing rockets are the Saturn V or Space Shuttle of this generation.

gamblor956
I think SpaceX will stick around as long as Shotwell remains COO. She's done a great job of running it. All of SpaceX's disasters can be traced to the pre-Shotwell period, or to Elon interfering in her management (such as this past weekend's rushed crew capsule test).
gamblor956
He mentioned that all their cars since 2016 have had redundant power steering, redundant wiring and even redundant power. Even if it loses the main power pack it can still steer and brake safely.

Given that so many Tesla owners have complained about the power steering and wiring going bad (and the delays in getting these issues fixed), the most likely explanation is that Tesla is attempting to forestall maintenance issues by simply adding built-in backups rather than by improving quality control and making the primaries work better.

IOW: having built-in redundant parts for some of the most important parts of the car--which aren't known for breaking down in cars from other manufacturers--is actually a sign of failure on Tesla's part, not a sign of forward-thinking.

Entire car rated to 1 million miles (including the battery pack) with minimal servicing

That's nice. Tesla can't manage to get car panels aligned properly, a number of cars are still awaiting parts a year or more after they went in for service, and Tesla hasn't managed to fix its own billion-dollar factory line to work properly...but in 5 years they're promising better-made cars than any other car maker? Toyota has earned the right to make that claim with their decades of consistency. Tesla hasn't, especially not after the Model 3 fiascos.

Level five autonomy at the end of 2020. Remove steering wheels and pedals from cars a year after that.

Call me when they manage to actually stop their cars from veering toward freeway dividers. It was still happening as of November 2018... Also, there's the pesky little problem of Tesla not yet having demonstrated a door-to-door trip yet in uncontrolled conditions despite promising that back in 2016.

After robo-fleet goes live Tesla will be "massively profitable"

See above. Need to hit Level 5 before then. But based on all the lots filled with unpurchased Model 3s, they'll probably have the fleet for it if they ever achieve Level 5...

Animats
So far, half an hour from the head of IC design. Nice special purpose IC and board. Dual everything for redundancy. Wide special purpose neural net evaluation. 100 watts for the compute system. Code signing. Shipping in new Model 3 cars since last 10 days.

"All we need to do is improve the software" - Musk [12:07 PDT]

LIDAR "unnecessary" - Musk [12:13 PDT]

Computer vision guy is now speaking.

Recognizes "driveable space", not just obstacles. Video shown, but just for a freeway. This is crucial to safety. Need to see this in a cluttered environment.

isoprophlex
"LIDAR is a fool's errand. Everyone relying on LIDAR is doomed." edit: beat me to it
vasilipupkin
it snowed a few weekends ago in Chicago, my autopilot turned off because snow covered up the cameras. So I am not buying all this "self driving with no lidar" brouhaha
mhb
OK but snow blocks lidar too.
penny45
snow might ruin it for everyone. Winter is coming, and we're all screwed...
buildbot
Snow completely disables lidar as well if I recall correctly. Only radar will work in a snowstorm.
slg
Doesn't LIDAR have similar problems with precipitation? Regardless of what solution proves to be the best, there is going to be a decent amount of time between when a self driving car can handle most scenarios and when it can handle all possible scenarios.
Obi_Juan_Kenobi
Water affects certain wavelengths, and there's been good progress on LIDAR that uses wavelengths that aren't affected.

Most approaches want to use a wavelength that has as little ambient light as possible. That light is absent precisely because it is absorbed by water vapor in the atmosphere. The alternative approach must deal with lower signal:noise, but is not substantially affected by weather.

semi-extrinsic
Here's a snow cover map of North America on a random day in January:

https://www.natice.noaa.gov/pub/ims/ims_v3/ims_gif/ARCHIVE/U...

Here is Europe and Asia on the same day:

https://www.natice.noaa.gov/pub/ims/ims_v3/ims_gif/ARCHIVE/E...

For someone who lives in or close to that white part, "most scenarios" includes snow. Arguably for Level 4 autonomous cars in those areas, you either need to fully disable autonomy in September and enable it again in late April, or you'll need to handle snow.

slg
You are comparing two different things. Old snow on the ground (your images) is not the same thing as active or recent snow accumulation on the vehicle (the original comment). It is completely reasonable for the car to disable autonomy due to snow accumulation until someone clears it off the vehicle. That says nothing about the autonomy of the vehicle with snow cover on the ground.
semi-extrinsic
I'm assuming Level 4 is what Musk means by Full Self Driving. The bar to be passed is then

"No driver attention is ever required for safety (...) self-driving is only supported in limited circumstances (e.g. geofencing), and when these circumstances are no longer met the vehicle must be able to safely abort the trip, e.g. park the car, if the driver does not retake control."

How would that work if you're out driving on the highway, and it starts snowing hard so the car can't see anything? Just park on the highway?

If your car can't safely handle such a scenario, the autonomous feature would have to be "season-fenced" in addition to geofenced.

slg
It isn’t like it all cameras simultaneously go from 0% obstruction to 100% obstruction instantaneously. The car should pull off the highway and park or at least pull to the shoulder and park once it has identified decreased visibility to an extent that might impact safety.
nradov
Snowstorms can last for days.

Freeway shoulders are not safe places to stop. You're likely to get hit by a drunk or distracted driver. Ask any police officer or tow truck driver. That's why stopping on the shoulder is only for emergencies.

semi-extrinsic
It kinda is though. Snow deposition is a function of surface temperature, which is uniform across the sensor. Hit the wrong initial temp when you enter a blizzard, and deposition takes your vision in seconds. It can be hard enough to see out the windshield which has heating, wipers and anti-freeze wiper fluid fighting for it.
slg
Do you realize how far you have moved the goalposts during this conversation? You started out with the suggestion that they "fully disable autonomy in September and enable it again in late April" and now you are talking about situations with "the wrong initial temp when you enter a blizzard".

I will simply refer back to my initial comment in this thread: "there is going to be a decent amount of time between when a self driving car can handle most scenarios and when it can handle all possible scenarios". Blizzards are not in the "most scenarios" group. Barring emergencies, no one should be driving in a blizzard let alone an autonomous car.

derpherp
I don't think OP has moved the goalposts at all. LIDAR can work in snow but cameras get covered. And yes you are right, there will be a decent time between testing in some basic scenarios and testing in all scenarios. But I do not understand why Mr. Musk is trying to reinvent the wheel and rely only on vision when LIDAR has already shown that it works better.

https://www.cambridge.org/core/journals/journal-of-glaciolog...

dragonwriter
> But I do not understand why Mr. Musk is trying to reinvent the wheel and rely only on vision when LIDAR has already shown that it works better.

Because he's sold a bunch of cars (and a bunch of stock in the company selling the cars) without LIDAR with the explicit claim they are hardware-ready for full self-driving.

tigershark
Lidar is useless when it's snowing
ubercow13
Can snow cover lidars?
asteli
Depends on the unit. On some designs (e.g. Velodyne HDL-64) the unit is split into two sections. The whole upper section, along with all the optics and some of the electronics, spins at 10-15 Hz, which naturally tends to shed precipitation and other crud.

Sensors with non-moving optical windows can use other strategies, such as air knives or wipers.

mrtron
Not cleaning snow off of your vehicle before driving is crazy.
oska
And would you drive if snow covered your windscreen? Or would you clean it? Which is obviously possible for cameras too.
jsharf
The windscreen has wipers; on the Tesla Model 3, I don't think all the sensors have wipers... so that's an issue.
oska
Yes, we implemented simple physical solutions such as wipers and window heaters in cars to maintain vision for drivers. What we didn't do was implement lidar to aid human drivers.

Simple physical solutions are available if necessary for the cameras too. I don't have a tesla so I don't know what measures they use to keep the cameras clean but I'm sure they've thought about it and designed for it.

pwinnski
Great-grandparent comment suggests that they did not, in fact, design for it. Assuming the best is nice when you have no information, but we have information.
Animats
It's not that hard.

In our 2005 DARPA Grand Challenge vehicle, we had a system to clean the camera and LIDAR. It used a spray nozzle, and alternated spraying windshield cleaner and air. This is a commercial truck accessory used on mining trucks.

oska
I'm not assuming the best, I'm assuming basics. We've just watched a presentation where Tesla put immense resources and talent into designing a state of the art vision processing chip. Do you think they're going to let it be rendered useless by not implementing simple physical measures to keep vision coming in from the cameras? (What measures are already there plus easy opportunity for improvement). Of course not. Such a line of argumentation is transparently bogus. I don't know why the top level commenter's autopilot turned off in the snow but trying to argue that that shows that snow renders cameras useless and makes lidar necessary is stupid. Sorry for the strong language but really, I don't know how else to put it.
ClassyJacket
I have seen recent Teslas in person in the showrooms, and there doesn't appear to be any wiper for the cameras. Especially with a firsthand owner account just a few comments up I think it's actually only logical at this point to assume they didn't put wipers on the cameras.
oska
I would think that heating the glass (and surrounding casing) would be a more effective measure for keeping a camera clear of snow.
cjhopman
Well they didn't do that either. They didn't do anything for it. But don't worry, FSD is just around the corner and the cars have all the hardware necessary, it's just a software problem now. And this time it's different than in 2016 when they said they had all the hardware they needed. And it's different from 2015 when they said the FSD was coming in 2016.
greedo
The basics. Like having a trunk lid design that sheds water properly?

https://teslamotorsclub.com/tmc/threads/water-dripping-into-...

s_y_n_t_a_x
Shut down the company, the trunk is slightly off.

Tesla is new to the game, give them a few more cars to catch small stuff like that.

greedo
Tesla has been around for 16 years, so I don't think it's too much to expect them to be able to sweat the small stuff.
s_y_n_t_a_x
16 years is a tiny fraction compared to the other car companies you're comparing it to.

But ask yourself, what's more likely, the other car companies catch up to their AI and battery tech, or Tesla fixes the trunk in the next model?

stefan_
This talk of a "cut-in" detector is scaring me. It's like they lack any sort of higher level planning and decision making (which notoriously is not a neural network of any sort).
Animats
Musk says there will be a driving demo later today. Up next, the neural nets person.
dwighttk
how did the demo go? Is there separate video?
foobiekr
the narrative in this conference seems, minus the exaggeration, to follow exactly the flow for the presentations given by the MobilEye CTO for the last six years.
6d6b73
Having two GPUs on one board is not redundancy.
est31
> "All we need to do is improve the software" - Musk [12:07 PDT]

That's an old claim, see this 2016 press release [1]:

> as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.

At this point I believe the claim once I see the self-driving functionality having been rolled out to the public and the accident number been reduced.

[1]: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

jacquesm
The thing I don't get about Tesla is why they keep heaping more on their plate rather than just (for large values of just) re-doing the motive power bit. That alone is a massive challenge, and it is not something they've licked at the economies of scale that would allow a $10K vehicle to be brought to market, which is what it will take to really make this a success.

Every extra they tack on to the base product is one they will be expected to deliver on as well diverting attention from the main problem they are dealing with, which may in the longer term leave open enough room for the competition to wiggle through.

Sure, autonomy is a big deal, but it is also something that will once cracked be an instant commodity and there is a lot of money and talent focused on that particular problem, which is surprisingly hard to do well.

If Tesla ends up going under because of one of the side shows (Solar City, autonomy, Power Wall etc) that would be a serious loss.

_ph_
First of all, a lot of these things are orthogonal. By that I mean that they are independent from each other. The self-driving effort interacted only minimally with getting the Model 3 production up and running. As long as the money does not run out, it makes sense to do things in parallel which are parallelizable. On the other side, Musk has a big picture in mind, which stretches quite a bit into the future. So these "side shows" actually fit together. You use solar cells to produce electricity, which you store in your power wall, which can power your electric car.

If Musk's ideas about self-driving actually come true, he holds the holy grail of the car industry. On one side, self-driving abilities become crucial in future car sales. On the other side, if he can make the Tesla car sharing network happen, the revenue from that could be a magnitude or more larger than that of car selling. This would justify some of the crazier valuations of Tesla stock.

the8472
> Sure, autonomy is a big deal, but it is also something that will once cracked be an instant commodity

Not if the company that cracks it eats its competitors or, assuming it's like waymo, commoditizes car manufacturers into suppliers for autonomous taxi fleets.

WhompingWindows
How would power wall lead to them going under? They would just be selling excess batteries, which seems to be one of their advantages over competitors. Autonomy, I'm ambivalent on it, they can charge customers for it right now and many will pay.

If anything, the solar city/roof shingles side-show has seemed the worst to me personally, alongside the falcon-wing doors and the self-infliction of twitter harm by the CEO.

dominotw
> once cracked be an instant commodity

Can't they patent their breakthroughs? How is this different than pharma breakthroughs?

dforrestwilson
“diverting attention from the main problem they are dealing with, which may in the longer term leave open enough room for the competition to wiggle through.”

Perhaps this is the intent rather than a side-effect.

joshlegs
Indeed, Musk has encouraged competitors from the start. I believe they open sourced a lot of their patents in an effort to encourage competition in the electric vehicle space.
dforrestwilson
I was actually referring to the first part of the quote.

Intentional misdirection to distract investors.

leesec
>but it is also something that will once cracked be an instant commodity

Definitely not and I'm not sure why you think this.

The data pipeline advantage is a big part of this talk.

bhauer
My thinking is that they already have shown product maturity with the basics of electric mobility. Sure, they can incrementally improve that and it appears they are doing so. But I'm not seeing that as an area with clear and obvious major hurdles still to clear.

I for one am glad they're going after other innovations to broaden their total offering.

hebrox
I wouldn't have bought my Model S if it wasn't for Auto Pilot and the promise of Full Self Drive
danhak
They're fundamentally a Silicon Valley company. It's in their DNA to try and devour the world through technology, not race to the bottom on cost. And fundamentally, developing a product that is objectively and unmistakably more advanced than the competition helps the core mission of accelerating the transition away from conventional automobiles.
JustSomeNobody
> The thing I don't get about Tesla is why they keep heaping more on their plate...

Musk's ego plays a lot into this for better or worse.

xedeon
Two words: Vertical Integration.

Apple's work on developing their own SoC (A13) has paid dividends for them. The same legendary chip designer (Jim Keller) designed the new Tesla chip after working for Apple.

justapassenger
For vertical integration to work you need to have funds to support it long term. Apple has it. Tesla - not that much (at least not in the current state)
xedeon
> For vertical integration to work you need to have funds to support it long term.

Elaborate.

justapassenger
Building a one-off solution that will work well for your special use-case is hard, but in doable territory. But cutting off fat that you don't need for your use-case is a one-off gain.

Building a whole process that will keep on producing solutions that are better than what suppliers can offer, especially in a highly competitive area, is way, way harder. That's where the real money is spent.

xedeon
From the video presentation, it did not seem to be a "one-off" effort. They mentioned that V2 was already 50% complete and will continue to be developed over time to avoid complacency and stay competitive while also keeping the cost at a manageable level.

I get what you're saying though. But if they manage to pull off the Robotaxi fleet, then $$$ won't be much of an issue.

grecy
I think it's related to who Musk is, and the kinds of things he wants to achieve.

SpaceX isn't trying to just do what has always been done and make it 5 or 10% cheaper. That's boring. SpaceX is completely turning the launch industry on its head.

Tesla is the same. Elon isn't interested in making "just" an electric car. He wants to change the world, and electric cars that drive themselves will do that.

Can he do it soon? I'm not an expert, I don't know. But damn it's exciting to watch.

lutorm
The purpose of Tesla is to save the world from global warming, though. I'm not sure self-driving is that relevant compared to the electric propulsion part.
ginko
If anything it'd be detrimental since it would allow more people to drive longer and more often.
fossuser
Interesting pieces:

- First principles hardware design of focused self driving computer (many times better than any competing existing hardware). Already shipping in all newly produced cars. Currently working on next gen that will be 3x better to ship in a couple years.

- Lidar is an unnecessary mistake that competitors are making that won't succeed (too expensive, need too many, unnecessary).

- Real world fleet testing is critical to success, simulations are not good enough since there are too many unknown unknowns in the real world. Tesla uses simulations too, but nobody else comes close on real world fleet testing.

navigatesol
>many times better than any competing existing hardware

Except that it isn't. NVIDIA's Drive AGX Pegasus delivers 320 TOPS:

https://www.marketwatch.com/story/nvidia-says-tesla-inaccura...

Of course, nobody inside the Musk Reality Distortion Field actually cares about this.

invisible
Not that I'm confident about either side of this, but the TOPS isn't the only important factor here. It's also the efficiency. If Tesla's CPU can do more ops/watt, then it's more ideal for the conditions that Tesla needs the CPU to fulfill. On top of that, I have no idea what "320 TOPS" means in the context of the specific workload Tesla has. Will they _actually_ get 320 TOPS?
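The efficiency point is easy to put in back-of-envelope terms. Taking the 144 TOPS / ~100 W figures relayed elsewhere in this thread for Tesla's board, and a rough publicly cited ~500 W for the Pegasus configuration (an assumption that may vary by configuration):

```python
# Back-of-envelope TOPS/W comparison. Tesla's 144 TOPS and ~100 W
# come from the presentation as relayed in this thread; Pegasus's
# ~500 W is a rough public figure and is an assumption here.

tesla_tops, tesla_watts = 144, 100
pegasus_tops, pegasus_watts = 320, 500

print(f"Tesla FSD computer: {tesla_tops / tesla_watts:.2f} TOPS/W")  # 1.44
print(f"Drive AGX Pegasus:  {pegasus_tops / pegasus_watts:.2f} TOPS/W")  # 0.64
```

On those (rough) numbers the raw-TOPS comparison and the per-watt comparison point in opposite directions, which is exactly why headline TOPS alone doesn't settle the argument.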
grey-area
This software talk is fascinating and a great introduction to neural nets.
mhermher
Next iteration of FSD is going to be 3 times as full. Can't wait.
gamblor956
First principles hardware design of focused self driving computer

Musk has a long history of "first principles hardware design" that was useless in practice (see, e.g., Hyperloop, the Alien Dreadnought, the West LA tunnel). I would prefer "second principles" hardware design, where you go past the first principles they teach in the entry-level subject matter courses and address the actual real-world requirements...something that Musk is incredibly poor at handling.

Lidar is an unnecessary mistake that competitors are making that won't succeed

Number of drivers killed by a LIDAR car missing a truck right in front of the car, or a freeway divider in front of the car, or a lake in front of the car: 0. Number of drivers killed by Autopilot: already into the double digits. So far, the evidence is strongly against LIDAR being the mistake.

Real world fleet testing is critical to success, simulations are not good enough since there are too many unknown unknowns in the real world. Tesla uses simulations too, but nobody else comes close on real world fleet testing.

Tesla running the same tests (i.e., driving routes) over and over again a thousand times is just wasted data. Waymo, etc., running the car in varied road, weather, and geographic conditions is far more varied and useful information. To put it in metaphorical terms: Tesla is just cramming for a test by memorizing the answers without understanding the underlying material. Waymo et al. are learning the material itself so that they can extend their learning to novel situations.

All in all, was not impressed. Marketing and/or Musk had too much influence over the content of the presentation.

Traster
You missed the most glorious part of

>Currently working on next gen that will be 3x better to ship in a couple years.

Someone asked what the primary design objective of the next generation chip would be and the engineer muttered 'safety' before Musk made the 3x claim.

DuskStar
Eh, I'm pretty fine with that. Imagine the current gen slightly beats human drivers - also known as "revolutionary, but not perfect" - of course the next gen would focus on improving safety further.
justapassenger
> Currently working on next gen that will be 3x better to ship in a couple years.

That's ... not great. If that was CPU type of unit, it'd be great. But TPU-type accelerators are growing at massive speed (as it's still pretty new and simple tech), where you're more looking for 10x type of gains.

dwighttk
"many times better than competitors"

"next gen that will be 3x better to ship in a couple years"

applying pseudomathematical varnish to marketing-speak

DeonPenny
The talk was extremely technical
fossuser
I was summarizing - they gave explicit technical details in the talk.
dwighttk
Nvidia says they were comparing against an inferior chip https://www.marketwatch.com/story/nvidia-says-tesla-inaccura...
sytelus
> Lidar is an unnecessary mistake that competitors are making

This is very controversial, and the rest of the industry thinks the exact opposite; many even claim that Tesla is being irresponsible and even delusional in trying to do autonomy without lidar. The main points I have heard in favor of lidar are that computer vision is very flaky, not only in suboptimal weather but even in good weather. Cameras are nowhere close to the human eye in dynamic range, rapid adaptation, focusing, etc. Imagine a car going under a shaded road with rapid changes between bright light and shade. The likelihood that your depth estimation will get messed up is very high. Of course, night driving becomes highly questionable as well. In addition, long-range depth estimation is very flaky with stereo vision right now and a topic of research for mono-vision. If you want to retreat to level-4 only, and that too with conservative speed, weather, etc., then maybe vision+radar is more doable?

somethoughts
I think it also fundamentally depends on the business model being pursued:

- If you are trying to sell the Model 3 directly to the end consumer with autonomous mode, the extra $10K for Lidar and the bulk (which will greatly affect the exterior design) are definitely non-starters.

- If you are going the robo-taxi route (such as Waymo and Uber), the extra one-time $10K cost to add lidar over the 5-year life of the car is probably a blip on the income statement of the operator, as compared to a full-time human driver, which probably costs $10K PER MONTH for the lifetime of the service. For the robo-taxi business model it's a bit of a no-brainer - they could stick every sensor known to man on the car and still make out like a warlord by getting rid of the human driver while maintaining a 100% safety record. Plus, making the car stand out with a unique Lidar-inclusive shape is a great marketing differentiator. It also reduces your liability if a taxi rider ever sues, since you can claim you have redundancy in the system.

tomComb
I don't think anyone wants to use Lidar - the claim is that they need it to achieve full, safe autonomy. If it turns out that it isn't needed, then Tesla will be in a great position.

But only time will tell.

bobsil1
- Only a matter of time (evolved AGI research) before vision suffices. We know it can work because of bio evolution.

- Also only a matter of time before lidar cheap enough that even Tesla will add for redundancy and edge cases.

fossuser
They pair it with a forward facing radar which is inexpensive and good at depth perception.

Elon predicts all competitors will eventually drop lidar. He mentions it's expensive, but also not as good in a lot of cases (and all roads/signs are designed for vision).

He argues that getting vision to work is a prerequisite for getting self-driving to work and once you have it working, lidar is worthless (and unnecessary).

sytelus
“In my view, it’s a crutch that will drive companies to a local maximum that they will find very hard to get out of,” Musk said. He added, “Perhaps I am wrong, and I will look like a fool. But I am quite certain that I am not.”

https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-s...

sangnoir
Their forward-facing radar has failed to avoid crashes into big, stationary objects (at least 2 semi trailers, a fire truck, and more). The explanation I got was that radar is rather noisy, so it's handy to filter out stationary objects (like road signs and broken-down vehicles on the shoulder, and, fatally, any semi trailer crossing your lane).
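A minimal sketch of the failure mode described above (not Tesla's actual logic): a naive radar filter drops any return whose closing speed matches the ego vehicle's own speed, because such returns look like road signs and overpasses. The function name and numbers are hypothetical:

```python
# Hypothetical illustration of naive radar clutter filtering.
# A stationary object closes at exactly the ego vehicle's speed,
# so its speed over ground computes to ~0 and it gets discarded.
def keep_return(ego_speed_mps, closing_speed_mps, tolerance=1.0):
    """Keep only radar returns from objects moving relative to the road."""
    object_speed = ego_speed_mps - closing_speed_mps  # speed over ground
    return abs(object_speed) > tolerance

# A semi trailer stopped across the lane closes at exactly ego speed,
# so this filter throws it away along with the road signs:
print(keep_return(ego_speed_mps=27.0, closing_speed_mps=27.0))  # False
# A slower car ahead is moving over ground, so it is kept:
print(keep_return(ego_speed_mps=27.0, closing_speed_mps=12.0))  # True
```

The design trade-off is real: without some filter like this, radar returns from every sign and bridge would trigger constant phantom braking; with it, a stopped vehicle in the lane is indistinguishable from clutter unless another sensor disambiguates.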
gamblor956
Elon Musk is noted for his premature optimization.

Case in point: both the Gigafactory (vastly overbuilt for the quantity of batteries actually produced) and the Alien Dreadnought (vastly overbuilt for the number of cars Tesla currently produces...assuming that Tesla is ever able to get the fancy automation working). Boring Co digging a two-mile tunnel in West LA without bothering to learn how to pour concrete smoothly, or to make the "rails" the proper width, or about ventilation, or access points....

Fricken
It's easier to optimize a working system than it is to get an optimized system working.
cromwellian
It's not easier to optimize a working system for consumers if "optimization" == "not dying". What will happen is you will deoptimize your brand's safety and find yourself regulated.
StavrosK
So when are they going to get them working?
Fricken
Beats me. It could be a while before we see anything that can compete with conventional rideshare at scale.
sytelus
Often the reverse is true. Trying to optimize a system with a million lines of code may require serious changes in architecture, and is much harder because of backward compatibility and all the legacy baggage. As many people would describe it, it's much harder to fix an airplane that must also continue to stay in the air.
iandanforth
Tesla: "We have a global network of cameras that can be queried to find anything on or near roads and send back photos of it."

Every law enforcement entity on the planet: drool

xedeon
It was pointed out that all data is anonymized.
x2f10
It's trivial to identify the car. You can see the reflections of the plates at times. If that was removed, you'd still have the ability to cross-check it with traffic cameras (or business cameras).
salawat
Bull.

You can't say "You can query for anything!" and have it be anonymized.

In fact, if you're taking a camera shot, that's the most broadband sensor imaginable, and no thanks, I don't trust any company with profit motive not to crumble and pop out a software update to selectively disable picture blurrers, or to not have a Boolean flag in your data scrubber.

As a company, making that statement is just trying to slip one past and hope nobody asks inconvenient questions about it.

xedeon
Everyone is entitled to their own opinions. You can tell me that you believe the Earth is flat. I don't have to agree with it.
X6S1x6Okd1st
Anonymizing data is non-trivial.
iandanforth
That's fine. I don't care who's driving the car carrying the camera. I care about what the camera can see. If you've ever been sent an Amber alert for a local child abduction it often comes with a description of the car that the suspect is driving. If I'm in law enforcement I'd much rather get an emergency warrant to ask Tesla to query the network to find that kind of car in a given geographic area.
grey-area
Too late.

The future is here, and computers will see everything in excruciating detail via multiple planes of information. The power you consume, the data you send, the money you spend, the media you watch, the words you write, where you go.

It’s all out there in the world. There’s no taking it back, though we may be able to legislate about its use.

bluthru
If this could only be used for good it would be amazing. It could look out for lost pets, missing people, stolen bikes, suspects, etc.
localhost
At the tail end of Karpathy's presentation, he said something that reminded me of why Peter Norvig decided to join Google: because that's where the data is. In Tesla's case, they have a unique and likely accelerating advantage in having the best source of data upon which to train their models. I think this forms the basis of an enduring moat relative to their competitors, none of which are collecting data at the same scale.
WhompingWindows
I wonder what the ratio of fleet-driving bytes collected is for Tesla vs Waymo vs the other big players...
justapassenger
Is Tesla collecting that data? I've looked at their earnings and didn't see any big expenses related to datacenters. Amount of data they would need to store and process is massive, that would for sure show up in earnings, as one of the biggest costs.
netinstructions
While you're waiting for the main event to start, here are some recent interviews with Elon about self-driving cars. He's very confident.

"To me right now, this seems 'game, set, and match,'" Musk said. "I could be wrong, but it appears to be the case that Tesla is vastly ahead of everyone."

I am eager to see what they unveil today.

https://www.youtube.com/watch?v=dEv99vxKjVI

https://ark-invest.com/research/podcast/elon-musk-podcast

cflewis
My guess is he means "on the highway". The scary bits of self-driving are person detection, crossing detection, roadwork detection, cyclist detection (e.g. coming up on the right when you are trying to make a right turn).

The Waymo end-game that I heard was "able to go through a drive-thru". I highly doubt Tesla is anywhere near that point.

jacquesm
> through a drive-thru

The kind of drive-thru that Tesla is currently associated with involves semis rather than fast food and it would be really nice to hear that they've at least licked that particular bug (and for good, this time).

dwighttk
what does "for good, this time" even mean with their regression issue?
jacquesm
That was exactly the point, the fact that such a thing could happen, be fixed and then happens again in something mission critical is very scary.
cr0sh
> The scary bits of self-driving is person detection, crossing detection, roadwork detection...

Your point is very astute.

Among a few other ML/AI MOOCs, I completed Udacity's "Self-Driving Car Engineer" nanodegree - so when I'm out driving, I often come upon situations where I wonder "how would a self-driving car navigate this?"

Today, driving in to work (note: USA), I noticed one intersection I've been through many times before, and that question came to mind. The intersection is interesting, because on approaching it, the road curves to the right, and you can actually see one of the traffic lights on the left before you even see the intersection. By the time you see the intersection, you're already on top of it.

So as you round the curve, you see the lone traffic signal (red/yellow/green); if it is red, do you start to brake, or do you wait until you can "see" more traffic signals? If you wait - will you have time to slow down and/or stop? ...and so forth.

This and others are all kind of "edge cases" that will need to be trained on, and/or perhaps other cues for self-driving vehicles installed or set up so the vehicles can navigate such areas successfully. I know when I first went through the intersection it was a bit of a surprise; it's not a very safe intersection (going home in the opposite direction is not any better; in that direction, you're headed downhill, have to cross the intersection, and immediately start turning to the left after going through - the curve is really abrupt, and you have protected/unprotected left-hand turns both directions, etc).

Xylakant
There have been news reports about the Model 3 autopilot getting its speed limits from maps, lacking any sort of sign recognition or manual override to adjust to local conditions. The maps seem to be outdated for Germany (1). That's an essential feature even on the autobahn. Given that test result, I'd be skeptical even about any claims of being ahead of the game on the highway.

(1) https://m.heise.de/autos/artikel/Test-Tesla-Model-3-4400919....

semi-extrinsic
This is very strange though, is there any confirmation of this?

Basically, most other manufacturers like Opel, Audi, Mercedes, Hyundai, VW, Volvo, Ford, etc. have had for several years the feature to detect the speed limit via computer vision recognizing the road signs. And it works reliably, as is pointed out in your link.

How can Tesla be a leader in using computer vision for cars, but not be able to read the road signs?

eaurouge
Well they’re vastly ahead in one area: data collection. No other company is even close. You could argue about the quality of data but the platform is there and ever growing, and they can upgrade their hardware in the future and augment existing data.
nikofeyn
how can you claim that? more than waymo? that would be extremely doubtful. google has been driving around cars with sensors and cameras for over a decade.
DeonPenny
But they don't have the cars. Every car on the road that is a tesla sends back data.
danhak
They have a fleet of hundreds of thousands of cars driving real-world miles all over the developed world.
None
None
sangnoir
It's still impossible for an outsider to tell - Waymo logs every single vehicle-mile in their entire fleet, but Tesla samples from a larger pool.
stefan_
Don't kid yourself, the car has no bandwidth, storage, or performance to send back anything other than a few raw frames from disengage events or other rare triggers.
slg
It depends entirely on how they design the system. They don't necessarily need to send all the data from the cars back home when they can send test cases to cars, run the tests in a shadow mode to collect real world results, then send the test results back home.
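A rough sketch of the "shadow mode" idea described above, as this commenter frames it: the candidate model runs alongside the human driver without controlling the car, and only the moments where its decision diverges from the driver's are flagged for upload. Everything here (function names, the scalar-action representation, the threshold) is a hypothetical simplification, not Tesla's actual pipeline:

```python
# Hypothetical shadow-mode sampler: compare the model's predicted action
# against what the human driver actually did, and flag disagreements.
def shadow_mode(frames, model, threshold=0.5):
    """Return indices of frames where model and driver disagree enough
    to be worth uploading (e.g. queued for wifi upload later)."""
    flagged = []
    for i, frame in enumerate(frames):
        predicted = model(frame["sensors"])
        if abs(predicted - frame["driver_action"]) > threshold:
            flagged.append(i)  # interesting case: candidate for upload
    return flagged

# Toy run with a model that always predicts "go straight" (action 0.0).
# Only the frame where the driver swerved hard (2.0) gets flagged:
frames = [{"sensors": None, "driver_action": a} for a in (0.0, 0.1, 2.0)]
print(shadow_mode(frames, model=lambda s: 0.0))  # [2]
```

This is why the bandwidth objection and the "well picked data" claim aren't necessarily in conflict: the fleet filters on-device and only the disagreements travel.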
grey-area
The presentation makes it clear your claim is entirely false, you should watch it.
stefan_
No, it's spot on. It's entirely what I said: the car can only deliver a few raw frames, and only in response to particular triggers.

Notice the cherry-picked examples in the presentation. There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.

SheinhardtWigCo
> There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.

Can you give an example? I'm curious what kind of triggers strictly require lab-calibrated hardware.

grey-area
They showed video in the presentation which was clearly not ‘a few frames’, unless by a few frames you mean seconds of video.
jvolkman
Which presentation did you watch? Karpathy said specifically "it's not a massive amount of data, it's just very well picked data" when talking about how the cars only send data when one of the configured triggers fires.
grey-area
There’s a large gap between ‘a few frames’ and a massive amount of data, and the amount sent lies somewhere in the middle. Clearly they can’t send all data (nor would they want to) but it seems it is sufficient for significant learning to take place and the examples shown were good quality over at least a few seconds, so hundreds of frames for each example.
toomuchtodo
Short video clips from all cameras are sent back to Tesla when associated with a disengagement event, queued for upload when the vehicle is on wifi.
saalweachter
I hope to god they are sending back short video clips randomly sampling all driving conditions, not just the disengagement events.
toomuchtodo
They are.
mirimir
I wonder if Tesla is getting subpoenaed for video clips.

Other than for accidents, the SEC investigation, etc.

toomuchtodo
My FOIA requests say no, but lots of blind spots. I'm not operating "at scale" due to the cost involved with non-electronic FOIA requests.
mirimir
Thanks.

I can imagine that police could mine this just like they're doing with Google geolocation data.

toomuchtodo
I hope Tesla has strong governance controls over customer data, and a fierce inside counsel for pushing back against unnecessary or overly broad LEO requests.
DeonPenny
Why would you say that? The car has LTE and connects to WiFi. It could easily send way more data than any car company at any time, including over WiFi.
navigatesol
>It could easily send way more data than any car company at any time, including over WiFi.

Except that it isn't, and even Karpathy said the quantity doesn't matter, it's the data quality.

DeonPenny
What? They are sending way more data, because from our knowledge GM and Ford are sending back 0 data, and Waymo doesn't have half a million cars' worth of data internally to pick from.
navigatesol
>from our knowledge GM and Ford are sending back 0 data

Yes, two of the autonomous vehicle leaders are not using any data whatsoever.

I thought this was the smartest forum on the internet?

11thEarlOfMar
And we don't pay for the LTE bandwidth. Tesla covers the cost of uploading the data.
tigershark
Maybe too confident, if you ask me...
netinstructions
I agree. The interview with MIT researcher Lex Fridman was difficult to watch because it didn't seem like they were on the same page at all - Lex asking thoughtful and pointed questions and Elon dismissing them as if the questions themselves are moot because self driving is right around the corner.

It was mind boggling. I am hoping Tesla can provide some specifics today because it seems Elon is living in a fantasy world (albeit one I'd like to live in if we can actually get safe self-driving cars).

LoSboccacc
I'm gonna say Elon is being extremely bold selling a technology that's the current leader in deaths behind the autonomous wheel.

Also, I can't reconcile how the new hardware is this huge leap ahead beyond raw computing power if, by Tesla's own claims, the previous hardware was perfectly capable of autonomous driving.

Seems people were getting fooled either now or before.

Hamuko
I hope Elon has tested the autopilot in Finland during the winter then.
dforrestwilson
Hmmm so then why is Tesla ranked last for autonomous driving by third party researchers?

https://www.google.com/url?sa=i&source=web&cd=&ved=2ahUKEwiR...

And Elon has a long history of making false claims about Tesla’s progress. For example in 2015 and 2016 he claimed that Teslas would be fully self-driving by 2018.

So why shouldn’t we be skeptical?

https://arstechnica.com/cars/2019/03/teslas-self-driving-str...

jerska
The first article you link to cites another article as its source, which itself calls bullshit on the ranking.

Your link:

> According to Electrek, Tesla trails behind other companies in terms of autonomous driving tech based on a list created by Navigant Research, an independent research firm.

Electrek’s article:

> Electrek’s Take

> I think Navigant’s autonomous leaderboard is ridiculous. There are way too many brands that keep most of their development under wraps, which makes it hard to evaluate them and therefore, it gives very little value to a leaderboard like this in my opinion.

dforrestwilson
What is your point?

Electrek is not exactly unbiased. It's literally called EV and Tesla news. Fanboi site's opinion should probably be taken with a grain of salt.

Here's the Navigant executive summary directly: https://www.navigantresearch.com/reports/navigant-research-l...

whamlastxmas
What other major manufacturer has anything close to Tesla's autopilot in a car I can buy today? As far as I know, no one.
leesec
Only really GM with Super Cruise on one of their cars, the CT6. And it is not as advanced as Autopilot.
ryanlol
Some big time source laundering going on in here, https://electrek.co/2019/04/19/tesla-falls-autonomous-drivin...

Your "third party research" is obviously bullshit, they go as far as including Apple in their ranking.

This right here is just typical worthless marketing press release spam from a management consultancy firm.

DeonPenny
But Navigant's methodology was very unscientific. There's no actual quantitative reason Tesla is worse. It was based mainly on business factors like go-to-market strategy and vision.
navigatesol
As opposed to the "scientific", "quantitative" reasoning behind Tesla being the leaders in FSD?
DeonPenny
Yes, absolutely. Saying that Tesla, which gets camera data from its half million cars, doesn't get an advantage from that is crazy. That's not even counting the fact that it's the only company who can do this strategy. Google would need to get constant data, and GM and the legacy automakers would need sensor suites on all their cars yesterday.

No one knows if Tesla's strategy will work because they don't have the data collection in place.

dforrestwilson
Neither does Tesla which makes it a moot point.

They have no way to store or transmit the massive data you are describing off the platform do they?

My understanding is that they have very limited storage and transmission capability onboard.

DeonPenny
Based on their talk today, and Andrej's previous talk where he explicitly shows tools that do just that, downloading data constantly is exactly what they do. https://vimeo.com/274274744

I mean, saying a phone can upload videos to YouTube but a car can't upload to Tesla is a weird ledge to stand on. Even their windshield wipers work based on sending video data to Tesla to be learned on.

iandanforth
I really, really don't want Tesla to die. I think it's an important company for a sustainable future. Failures in autonomous driving could easily turn into the straw that breaks the camel's back.

If you call something "autopilot" and promote it as if it will drive for you and then it ends up killing dozens of people ... that's where successful class actions come from.

(I obviously don't want people to die either.)

leesec
dozens?

How about 2 thus far and Tesla was not at fault.

gambiting
And I don't want them to succeed, because they are actively hostile towards 3rd party repairs and car modifications, plus their current Model 3 microtransaction bullshit scares me as a customer; I'm dreading the moment other companies catch on with that. I'm specifically talking about the fact that the base Model 3 ships with heated seats out of the factory, but you can pay to have them unlocked with a software update. I suspect that having them enabled with a simple software mod would get you accused of piracy, since you're using something you haven't paid a licence for (even though you've obviously paid for the hardware).
semi-extrinsic
> I'm specifically talking about the fact that the base model 3 ships with heated seats out of the factory but you can pay to have them unlocked with a software update.

If you think this is a new thing, you haven't followed the car industry much.

Just as an example, my 2002 el cheapo Peugeot was bought without the option to show instantaneous MPG/average MPG/range/etc. Spend two hours soldering/gluing on the missing $2 toggle switch, ask a friend with the Peugeot Planet update tool to enable it in software, and Bob's your auntie.

It goes much further than that though, especially when you get into chip tuning. Software upgrades that add 50 horsepower to your engine output are commonplace.

But as for the "actively hostile to 3rd party repairs/mods", this part scares me too.

gambiting
I can sort of understand the software differentiation - I have no issue with Windows Home and Pro editions having different price points even though the code on the disc is identical, for instance (maybe that's hypocritical of me?). But I do have an issue with paying for hardware (heated seats in this case) and that hardware being disabled until a payment is made. I don't know, it just feels different.
plttn
You're putting the cart before the horse on the heated seat argument though. It's cheaper to have one factory line for the seats. Ideally you would pay the same price for the car without heated seats whether or not the hardware was physically there.

It's the same thing they ran into with the software locked batteries. The goal is to have minimal overhead to try to drive costs down on Model 3, and making the SR just a software limited version of SR+ actually ends up pushing the overall cost down, rather than having to set up an entirely different line that does the cloth seats and the aluminum roof.

jgibson
Think about it from a supply/configuration/logistics point of view. It might be cheaper to make 500k cars with 100% heated seats rather than 250k w/ heated and 250k w/o heated. Even if BOM cost is lower, the additional complexity due to the logistics and supply chain probably means it's more expensive to have both variants in production. That way people who want heated seats are subsidising the lower overall cost for those who don't. It also allows the next owner of the car to pay for that option at a later date if they want to. Almost every industry does this (there was a good case about oscilloscopes a while back, where the cheaper, lower bandwidth models just had a low-pass filter circuit installed).
agent008t
It makes sense economically, but still feels paradoxical. A similar example would be building houses with 3 bedrooms, where 2 bedrooms remain locked and inaccessible unless you pay for them. So a significant proportion of the population ends up living in 1-bed houses even though there is no scarcity of resource.
kitsunesoba
Personally speaking I would not want to share a road with an autonomous car that had been tinkered on by some random Joe. Autonomous farm vehicles or something, sure, but public roads are way too risky. This is not the desktop environment of your Linux laptop, it’s monstrously complex software controlling thousands of pounds of metal and batteries.
sytelus
I would expect autonomous cars to have doubled-down security against tinkering, although I personally don't like that. The reason is purely public safety. If someone could re-program autonomous cars to slam into public crowds and places, imagine the havoc it could cause. A malicious actor could just walk into a parking garage, put in some wires, and reprogram every vehicle they can get their hands on - imagine those scenarios.
gambiting
You can already do that with existing cars (pretty much anything that has power steering, so the last 30 years of cars) and yet you don't see people hijacking vehicles left and right this way (or any other way; you can cut someone's brakes in 5 seconds and yet it's extremely rare). I think this is a made-up problem.
sytelus
It's not about usual hijacking or rigging. It's about telling the car to speed up and slam into pedestrians if it sees a large enough group of them, for example; until that time it can just behave normally. Think of terrorists getting access to a programmable robot that weighs over a ton, can accelerate to 60 mph in under 5 seconds, and can identify objects in its surroundings.
gambiting
So.....a new Mercedes with driver assist package then? And Mercedes will not stop you from modding their cars to your heart's content, you don't have to ask them for permission to work on your car like you have to with a Tesla.
navigatesol
>Autonomous farm vehicles or something, sure

I knew I'd find someone here who hates when John Deere locks down their vehicles, but is fine when Elon Musk does.

mrfusion
How about a random joe tinkering with his brakes? My redneck neighbors replace their own brake pads.
gambiting
See, I don't agree at all. Just like all cars have to pass a pretty strict annual inspection to be on the road, I imagine autonomous cars will have to go through regular assessment too.

More specifically, I'm bothered by the fact that if you get into a crash in a Tesla, Tesla can disable your car and stop it from activating unless you do repairs at their approved dealership. You can't just buy parts from a scrapyard and get it running again, Tesla is the gatekeeper and they don't let anyone else hold the keys(they don't even release service manuals unless absolutely required by law). In contrast I could buy a brand new Mercedes CLS with its very respectable smart cruise control and Mercedes can't do anything to stop me from replacing the engine, the head unit, or fixing the whole thing if it's totalled - they just can't. It's not a freedom I'm willing to lose by buying a Tesla.

mlindner
We don't have open source implantable medical devices either.
cmsonger
This focus on the hardware is silly. Assume for a second that their new hardware is 50x faster than their last hardware.

That does not mean that their cars can self drive today.

That does not mean that their cars can self drive three years from now.

It's 100% not proven or obvious how car self driving skill and car self driving error rates scale with compute -- but it's surely not linear.

mattrp
I’m wondering what new spinoff comes from the hardware... does it have benefits / application to anything in addition to AD?
Ardon
There's another one of these coming about the software specifically.
DeonPenny
What if they proved it by improving from self-driving hardware 1 to 2? If they proved that, and proved it made their NN better, then it makes sense. Of course, it wouldn't to you, because you have no inside knowledge, but it may make perfect sense to them based on their data.
VikingCoder
Musk: "You're only going real fast in the forward direction."

Dozens of times in my life, I have been driving down the freeway at 70 MPH in a 60 MPH zone, and I've noticed someone weaving through traffic behind me, going closer to 140 MPH.

I need to know whether to change lanes, stay in my lane, stop changing lanes, pump my brakes to indicate there is a slowdown ahead of me that the other car might not see, etc.

Just food for thought.

EDIT: I'd also like my vehicle to be good at avoiding a car that's about to T-Bone me, at night, with no headlights on. I may not be very good at avoiding that kind of accident today, but if LIDAR is necessary to protect me from that kind of accident, then I might think it's a wonderful idea.

sixQuarks
I’ve been driving for nearly 30 years and have never even seen one person going 140 mph, let alone dozens
1123581321
The person who wrote that doesn’t realize how fast 140 would seem. That kind of speed differential is not something you can attempt to avoid. Even 100 vs. 70 is rare, and the safest thing to do given that differential is nothing. This is not an area any car company should worry about.
VikingCoder
I agree that the safest thing to do is nothing. Which is why if I'm about to change lanes, and I spot this kind of behavior coming up rapidly from behind me, I abort my lane change. Even if I was moving to the Right lane (you never know which way these jerks are going to go.)

It is absolutely something a car company should worry about. Avoiding obstacles from any direction.

If a power line is falling down. If a construction crane is swinging wildly.

It's Defensive Driving.

Also, note, I said "closer to 140 MPH". I actually just saw another one on Sunday. Crotch rocket motorcycle, in the far right lane on a 5-lane freeway. No helmet. Hey, someone's got to be an organ donor, right?

1123581321
I see. I would also consider an aborted lane change doing nothing. However, I wouldn’t try to jerk back into my lane if a car was approaching me with that kind of differential since a) they don’t have time to react and may have been planning to go into my origin lane, for all I know, and b) by the time such a fast car is spotted, they would likely pass my car too quickly for any lateral movement to leave their trajectory.

Your other examples would be happening in front of the car.

VikingCoder
> I would also consider an aborted lane change doing nothing.

Aborting a lane change before you initiate it is changing your behavior, in response to a danger.

> Your other examples would be happening in front of the car.

Or from the side.

Or from behind, again. If I'm in stop-and-go, danger can come from behind. Picture an ambulance coming up from behind. You absolutely should change your behavior, depending on conditions all around you. Not just the front.

gpm
You're not responsible for dodging the person going 140 MPH behind you to the same extent that you are for dodging the tire doing 0 MPH in front of you. Humans generally don't, because they aren't capable of paying enough attention to what is behind and in front of them simultaneously. The car, with 360-degree vision, is in fact more capable.
VikingCoder
Even if I'm not responsible, it's Defensive Driving. It can save my life to be aware of the dangerous situations that other drivers are causing.
dougmwne
Those are some very bold claims. Level 5 by the end of this year? I find the software approach intriguing and Karpathy's segment was enlightening and did a lot to convince me of Tesla's advantages.

On the other hand, I've been sensing that Tesla is finding it harder and harder to raise cash and has been getting increasingly desperate. Are we on the cusp of a new transformative technology or the peak of the mother of all bubbles? Time will tell.

hellllllllooo
Option 1: They have solved a problem so hard that no one else, even Waymo, is claiming to be close even with $100k+ worth of sensors and a lot more compute and they have also quietly solved multiple hard unsolved research problems.

Option 2: He's lying to get money.

Animats
Here's the new self-driving demo video from today.[1] From Tesla's HQ in Palo Alto, out to I-280, down one exit, use interchange at Sand Hill to turn around, come back. No visible conflicting traffic on non-freeway streets.

Compare the 2016 demo video.[2] That's a tougher route. That's the one where we now know it took a lot of tries to get a clean video.

Waymo and Cruise have put up videos of their cars in city traffic. They get criticized for things like getting stuck behind double-parked cars, and being a bit shy of parked cars that project into a traffic lane. But they get where they are going. Tesla is not showing anything near that level.

Supposedly the analysts at the meeting got to ride in a self-driving car. Anyone seen reports from them?

[1] https://www.youtube.com/watch?v=tlThdr3O5Qo [2] https://www.youtube.com/watch?v=eAal0juXXzU

DarmokJalad1701
https://twitter.com/hamids/status/1120466861970399232

These guys also went on the ride-along: https://youtu.be/2BZHXh1nbWc?t=360

Animats
That's helpful.

Second video says the driver "touched the steering wheel once" in 15 minutes of driving. That's a disconnect. So, one disconnect in, what, 10 miles of driving? Waymo reported one disconnect per 1,392 miles to CA DMV last year.

Amusingly, the riders were not allowed to take video.

zaroth
Actually it's one per 11,017 miles! But be careful quoting that number: it does not count all disengagements, or every time the autopilot was turned off after being turned on.

It is "deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle."
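For scale, putting the two back-of-the-envelope rates from this subthread side by side (the ~10 mile figure is Animats's rough guess upthread, and as noted, the DMV metric counts only safety-required disengagements, so the comparison is apples-to-oranges):

```python
# One wheel touch in ~15 min of mixed driving; call it ~10 miles.
tesla_miles_per_disengagement = 10        # Animats's rough guess
waymo_miles_per_disengagement = 11_017    # zaroth's CA DMV figure

ratio = waymo_miles_per_disengagement / tesla_miles_per_disengagement
print(ratio)  # ~1100x gap, heavily caveated by the metric mismatch
```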

Hamuko
Feels like they're just throwing a bunch of hardware specs at investors to distract from any discussions about the current state of the software beyond "just need to improve it now".
sharadov
You cannot access the stream from Tesla anymore; I found it here - https://www.youtube.com/watch?v=tbgtGQIygZQ

Karpathy did a great job explaining this. I took Neural Networks a long time back, and his back-to-basics approach refreshed a lot of concepts I'd forgotten!
Sendotsh
The video has been taken down. I can't find it anywhere now.
personlurking
It's no longer in that link. I found it here: https://www.youtube.com/watch?v=tbgtGQIygZQ
wingworks
Is that the full stream, though? It seems very perfectly set at 2 hours, like it's the last 2 hours... idk, it seems to jump in halfway.
pepijndevos
Currently just showing some generic car footage. Apparently they will demo their self-driving features: https://www.teslarati.com/tesla-self-driving-autonomy-day-wh...
tigershark
Yes, I was feeling dumb watching the same thing in a loop, hopefully they’ll start soon with the real meat...
grey-area
“Anyone using Lidar is doomed”

Strong words from Musk about sticking with video only.

Later on in the software talk:

“Lidar is really a shortcut which sidesteps the fundamental problems...and gives us a false sense of progress”

robterrell
I wonder if he's going to explain this position at all? Or if it's just posturing against Waymo et al. Seems like more data is always better than less data for this application?
DeonPenny
Because lidar is expensive, error-prone, and can't be manufactured without flaws. So if you put it in a car, the lidar will fail way before the cameras do. The only question is whether camera-only can work.
cr0sh
All sensors and the systems they operate with have flaws. There is no way around this - it's just the real world. Even cameras will have flaws, or they can develop them over time (lens degradation, image sensor dropping pixels or gaining stuck pixels, etc).

That is the point behind what is known as "probabilistic robotics" - the real world has noise, and you need to be able to deal with it. Don't expect perfect sensors, don't expect a perfect environment.

Self-driving vehicles are the application of probabilistic robotic principles to a real-world task - quite possibly one of the most difficult tasks for the field. To be quite honest, it's amazing how well it's worked in such a short period of time.

I think cameras and machine vision approaches will be needed for self-driving vehicles to be fully successful, but I wouldn't say that LIDAR should be counted out. It will probably be necessary - even required - to have all of the sensors currently being used, not just a subset.

One kind of sensor that hasn't been explored much that I think will be needed is some kind of audio input; some kind of wide-range microphone (probably binaural or stereo) to take in environmental audio and use that for driving cues. Some simple examples might be for vehicle horns, or the squeal of tires, or the revving up of an engine indicating someone is speeding up aggressively, or an emergency siren, etc.

There may even be other sensors needed, or ones that could provide data to fill in certain gaps of environment knowledge to help a self-driving vehicle navigate. I don't think any of them should be discounted.

grey-area
Cost is also a significant consideration on mass market cars.
sangnoir
Why not both? In fact, Waymo has lidar, radar and cameras (Tesla have just the last 2 and are working overtime to demonize the first). More data (and more redundancy) is better - assuming you have the computing power to process all the sensor input. As far as I can tell, the rubbishing of lidar is FUD.
DeonPenny
Because if you're using lidar for depth and you try to mass-produce it, there's a high chance the lidar will break, and because it's normally so accurate, people will over-trust it. Eyes and cameras may sometimes make mistakes with distance, but usually the failure will be in software, not hardware. Lidar opens the door to both. So it's easier to use cameras and just try to make the camera as good as lidar.
Hamuko
>I wonder if he's going to explain this position at all?

Doubt it.

hellllllllooo
Yeah, it's nonsense. It's an after-the-fact reframing of a decision that was made for other reasons. Lidar + camera is better for this kind of problem. It's that current lidars are too expensive and too fragile (1-2 year life) to put in a production car, not that they aren't much better. If this weren't the case, he'd definitely be using them. Elon cannot use lidar in a Tesla even if he wants to, so he's coming up with some FUD to dismiss any completely reasonable questions in advance.
grey-area
They explain it in the software talk. In short, I think his thesis is:

Lidar is expensive, gives surprisingly limited data (just points of depth), and is like a crutch that will only get you so far. It will not get you to full autonomy, so why not spend the time and money on vision, which will (as proven by humans). They also do use radar and ultrasonics (not visual spectrum).

At some point your machine must be good enough, via machine learning, to give a very good impression of understanding intention in other road users, pedestrians etc. One example they gave was a distracted pedestrian with a phone. Lidar tells you nothing save that an obstacle is on the pavement; it won't tell you they might step out without looking, but machine learning on a massive dataset can.

gamblor956
One thing LIDAR can do that Tesla's ML vision software cannot is identify freeway dividers and trucks turning in front of the car.

Once Tesla can actually handle those two extremely basic tasks maybe they can start talking about how vision is better than LIDAR. Until then, it's just more hot air, and that's not even taking into account Tesla's claims of using ML to predict the actions of uncontrolled independent agents in the field of view.

DeonPenny
They explained that in the talk too
cromwellian
Seems to me that relying too much on Machine Learning for this is actually the real risk, not LIDAR. Tesla's competitors are using ML, but combining it with sensor fused data from LIDAR, Radar, accurate maps, and visual data. In other words, they're starting from a view of the world with super-human sensing.

Starting from a purely visual domain likely dooms your system to making the same types of mistakes in judgement that humans make visually. At least with accurate maps and LIDAR as backup, you can sanity-check the output of your visual processing against a map and LIDAR. If your claim is that the map might be out of date, or the LIDAR too low-rez, it still helps to err on the side of caution.

In the worst case, the map tells you you can't go somewhere that you're allowed to go. In the case where it says you are allowed to go somewhere were you shouldn't, well your visual system should be telling you not to go there. If it isn't, you're in a lot worse trouble than having a map with flaws.

Likewise, the claim that "LIDAR is expensive" is like claiming EVs are expensive because "batteries are expensive!" If AVs become common, then there'll be a huge demand for LIDAR and the costs will decline. LIDAR costs are already declining. Maybe if Elon took some of Tesla's miracle engineers and had them make a LIDAR, they could not only defeat Nvidia at chip making, but defeat all LIDAR makers as well. (sarcasm) My guess is the real problem with LIDAR is drag and vehicle trim/styling.

At this point, Tesla is basically stuck. They bet big early on a shitty sensor suite before AV technology had been worked out, and wave their hands about how using consumers as guinea pigs to feed them fleet data will magically fix deficiencies in sensors. Well, what if this is a false hope and it doesn't? It would mean they'd get sued by everyone who bought the AV suite as an option and have to issue refunds or recalls to upgrade.

I've said it before and I'll say it again, you don't ship AV as an MVP on a $100k vehicle and promise magic upgrades and fixes later before the technology is even close. It's risking people's lives and it's already killed people.

derpherp
I agree with all your points. However, Musk did mention that his team at SpaceX built their own LIDAR for docking with ISS. I think Tesla can custom make their own LIDAR too.

And I would secretly look forward to that. I was part of a self-driving team at university, and we had to raise funds especially to be able to afford a Velodyne HDL-32.

andbberger
Indeed, Tesla is on the wrong side of history in sticking to their guns on pure vision autonomy.

And they're going to kill people.

It's hard enough already to communicate ML results to the public in any meaningful sense without #SKYNET brigading; Tesla is going to make it 100x harder, dragging us all through their mud.

I think at this point (of having been a matrix boi long enough) I am allowed to publicly state my belief that we are very very very far away from human-level visual perception. Just staggeringly, incomprehensibly far. We've just barely started to be able to do the most basic things, sometimes.

Is it an incredible amount of collective Dunning-Kruger that is blinding Tesla? Or perhaps willful ignorance?

VikingCoder
Q: "What's the primary design objective of the next-gen chip?"

Mumbled Answer: "Safety."

...doesn't that mean the current-gen chip... isn't as safe as you want?

djtriptych
I don't see this line of reasoning. To me "safer than humans" or "in the 90th percentile among drivers" is a fair go-to-market line.

Optimally safe should still be the goal. That is, no human could have provided alternative input to the computer that would have created a safer outcome.

VikingCoder
I think you just agreed with me. Please reconsider what I've said and what you've said.
ryanlol
Are any cars ever as safe as you want?
VikingCoder
Society has decided how to judge whether I am safe to drive, and I have passed the tests.

I worry that the laws on the books are insufficient to judge whether a computer is safe to drive. If Tesla already has plans to increase safety, I'd like to know how they judge it, where they think they are, where they'd like to improve, etc.

As a voter.

stevep98
I think there is a lot of scope for an autonomous vehicle to be safer than a human driver. Consider:

1) computer has faster reaction times.

2) 360-degree awareness

3) better awareness of vehicle performance.

Imagine a scenario where you are at speed and get a tire blowout. The car can sense it instantly through tire pressure sensor, and counter the swerve appropriately - something that humans are very poorly prepared or practiced for.

Alternatively, a driver in a neighboring lane makes an unsafe lane change into your lane. The car sees him, honks the horn, and moves/brakes/accelerates to avoid the other car before you've even seen what's happening.

zaroth
No, it just means the next-gen design had safety as a primary design objective. Secondary objectives could be additional modes of self driving like traffic light detection and managing intersections.

Volvo leadership states the exact same thing about their car designs. The primary motivation “in everything they do” is safety.

The statement can be made entirely distinct from the current safety level.

breatheoften
Man -- his engineers must cringe quite a bit when he talks about a future self-driving architecture built on DOJO with 'video-in, actuator controls out' ... just a tiny bit of speculation goes into saying something like that ... To my mind, it undermines the presentation quite a bit which otherwise appears to mainly argue that they will reach full self-driving by iterating on their neural network sensor perception model.
DeonPenny
Nope his engineers have talked about this long before he did. More than likely they convinced him not the other way around.
wcoenen
Andrej Karpathy (who was one of the presenters today) actually made similar statements last year, about how the part of the software stack that is not neural networks is gradually shrinking: https://youtu.be/y57wwucbXR8?t=352
tim333
Some video of demo rides from someone who was at the day:

Lane change on crowded freeway: https://twitter.com/hamids/status/1120472369762590722 - It manages it, if not quite as smoothly as a human.

Summon in parking lot: https://twitter.com/hamids/status/1120446405410205701 likewise.

There's also an official Tesla sped up demo vid https://www.youtube.com/watch?v=tlThdr3O5Qo - hard to tell really.

Same video slowed with the display magnified and some Reddit discussion https://www.reddit.com/r/teslamotors/comments/bgb9or/fsd_dem...

fizzledbits
I think it’s a safe assumption that driver alertness positively correlates with lives saved. But it’s hard to market this product with the sort of rhetoric that would optimize for that, so they’ve found a few ways to console themselves as they name the feature “autopilot” and endeavor to make the tech appear magical.

If they think they’ve cleared the bar just by beating the crash statistics of all cars, then they are still in the business of trading lives.

henvic
Here is the link to the recording: https://www.youtube.com/watch?v=Ucp0TTmvqOE
oska
Now unavailable.
IOT_Apprentice
That video link shows as unavailable.
henvic
Yeah, I saw that but somehow I'm still watching it in another tab.
filleokus
What kind of event is this and who is it targeted to? Right now there’s a comparison of energy consumption for different instructions on processors, I’m very confused.
adanto6840
Surprisingly technical versus what I'd expected, but it is targeted specifically at investors.
whitepoplar
The technical angle seems like a very smart one for this talk. You can just smell the cowering insecurity of the analysts and their deference to Musk when the technical descriptions whoosh past them. They may be (rightly) skeptical, but they don't want to look stupid while being so.
andr
To summarize, it appears that their new chips can do a 96x96 dot product in a single cycle (multiplying neural network weights by values), and have hardware for activations (ReLU, softmax, tanh) and pooling (as in convolutional neural networks). This results in a crazy 144 trillion ops/second.

How does this compare to TPUs and the Neural Engine in iPhone CPUs?
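As a rough sanity check of the 144 TOPS figure: the 96x96 array alone doesn't get you there, so some multiplier must be in play. Assuming a ~2 GHz clock and 2 such units per chip across 2 chips (those figures are my assumptions, not stated in this comment), the arithmetic lands in the right ballpark:

```python
# Back-of-the-envelope check of the claimed 144 TOPS.
# A multiply-accumulate is counted as 2 ops (the usual convention).
# Assumed, not confirmed here: ~2 GHz clock, 2 NPUs/chip, 2 chips.
macs_per_cycle = 96 * 96             # one 96x96 array result per cycle
ops_per_cycle = macs_per_cycle * 2   # multiply + add
clock_hz = 2.0e9                     # assumed ~2 GHz
npus_per_chip = 2
chips = 2

tops = ops_per_cycle * clock_hz * npus_per_chip * chips / 1e12
print(f"{tops:.1f} TOPS")  # prints "147.5 TOPS", near the quoted 144
```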

LoSboccacc
something I don't understand, is this chip brand new? is it essential to the autonomous drive?
DeonPenny
The chip is new and is needed for autonomous driving
Traster
Can they actually do a 96x96 dot product in a single cycle? My understanding was that they can do one 96x96 dot product per cycle - that is to say, their throughput is one 96x96 dot product per cycle, but the unit will be heavily pipelined.
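The latency-vs-throughput distinction being made here can be sketched with a toy pipeline model (the pipeline depth is purely illustrative, not Tesla's figure):

```python
# Toy model of a pipelined MAC array: it can *retire* one 96x96 dot
# product every cycle without *completing* any single one in one cycle.
def cycles_for(n_products, pipeline_depth=8):
    # the first result appears after `pipeline_depth` cycles,
    # then one more result drains out of the pipeline every cycle
    return pipeline_depth + n_products - 1

print(cycles_for(1))     # 8: single-product latency is the full depth
print(cycles_for(1000))  # 1007: amortized ~1 product/cycle throughput
```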
jes
Is there a link to a replay of the live stream? The link on the Tesla IR site just takes me to their main page.
tim333
https://www.youtube.com/watch?v=tbgtGQIygZQ
stevep98
It was on Tesla's youtube channel up to about 2:30pm, then it just went 'unavailable' as I was watching it, and is now unavailable.
torpfactory
It's interesting to contrast Musk's stated aversion to LIDAR with his previous statements about new technologies always starting at the high end and working their way down [1]. Tesla appears to be trying to start at the low end (HW is included with the purchase of a car even if you decide not to opt for the software to make it function). You might imagine that you would start with all the bells and whistles and pare them down as the technology matures and you invent ways to do without.

[1]https://twitter.com/elonmusk/status/1075126514851602432

WheelsAtLarge
How can Musk make the prediction of a million autonomous taxis in less than two years? That's just fantasy. There are just too many unknowns to solve. One of the big ones is permission from city governments. Once a car hits a pedestrian or multiple taxis start having accidents, governments will bring the project to a halt. Tesla might have a small test fleet somewhere, but not 1 million taxis.

My guess is that Tesla will need another round of financing along the way and they have to make sure that the stock's price stays up. Musk is a great promoter but at some point he needs to come back to reality.

donmatito
I wonder if they are going to address potential limitations to their systems (bad weather, snow etc). Their tech seems absolutely impressive but I suppose there will always be failure. Does the system fail "gracefully" ?
tim333
New link: https://www.youtube.com/watch?v=tbgtGQIygZQ

(replay after the livestream ended)

ham_sandwich
It seems like Tesla is at a pivotal moment.

On one hand there are hyper-bulls who claim Tesla is a $4000 stock and the future of transportation. On the other, hyper-bears claim the equity should trade around $0-$10. There seems to be no middle ground.

It seems like they are almost betting the company on FSD. I don’t think FSD is really even close to a possibility over the next 5-10yrs. I hope I’m wrong, but if I’m right, I don’t see how Tesla keeps going on like this.

Traster
>On one hand there are hyper-bulls who claim Tesla is a $4000 stock and the future of transportation. On the other, hyper-bears claim the equity should trade around $0-$10. There seems to be no middle ground.

Well the middle ground is the actual stock market where Tesla is trading around $265.

cmsonger
> It seems like they are almost betting the company on FSD.

IMO they don't have any choice. The more and longer they operate like a car company shipping bigger and bigger volumes, the more their financials will undeniably trend towards those of the existing car companies, and the harder their existing market valuation will be to justify.

They need something like this to not get traded at traditional car industry multiples.

bryanlarsen
If you just measure based on Tesla's last two quarters, their P/E is just 28. That's high for a car company but pretty low for tech. Self-driving is not necessary to maintain that valuation, only continued growth of the electric car market in general.
cmsonger
> That's high for a car company but pretty low for tech.

Yes! Totally agree.

> Self-driving is not necessary to maintain that valuation, only continued growth of the electric car market in general.

Disagree. The key properties of the car business are high capital costs and high variable costs and not huge margins. The key property of the tech business is low variable cost and often low capital costs (but not always) and high margins.

There's nothing "tech" about building electric cars vs. normal cars, and 10 years from now the margins and capital expenses of the electric car business will be like the existing ICE car business.

So this is Tesla saying: "Yeah, our financials are starting to look like a normal car company, but we've got this thing, so you should keep valuing us even more like a tech company than you do today."

bryanlarsen
The market values Tesla like a car business that is growing ~50% per annum.

> 10 years from now the margins and capital expenses of electric car business will be like the existing ICE car business.

Tesla already has margins similar to existing car companies.

> So this is Tesla saying: "Yeah, our financials are starting to look like a normal car company, but we've got this thing that you should keep value us even more like a tech company than you do today."

Yes, they're saying that, but the market is obviously not buying it.

sytelus
I think FSD would be the cherry on top. If they can continue slashing prices and improving battery life, it would be a revolution in itself. Even just continuing at the current pace, if they can produce an $18K electric car with a 600-mile range in the next 3-5 years, you could be golden as a shareholder. Their advantage, like Apple's, is attention to detail and awesome design that other car manufacturers have failed to replicate.
dragonwriter
> On one hand there are hyper-bulls who claim Tesla is a $4000 stock and the future of transportation. On the other, hyper-bears claim the equity should trade around $0-$10. There seems to be no middle ground.

Well, other than actual investors.

tdhttt
I have several questions about Karpathy's presentation:

1. How do they label automatically using fleet learning? An example he gave is detecting cut-ins. However, I am wondering if it could be applied to more general and basic cases, for example labeling lane markings/vehicles.

2. How do you know if someone is a good driver (which is essential for imitation learning)?
bg24
I read through all comments. Very informative.

How many people and companies are going to freak out if Elon and his team deliver FSD without LIDAR? Say in 2022, not even 2020. My layman logic is that AI is improving exponentially, and we do not know what artificial vision can do in 2 years.

strainer
I would like to ask them about their choice of running two processors to be able to respond to hard or soft failures in either one's execution. Three processors seem much more desirable for this, since you could then identify which one is errant even if the error is of an unknown kind.
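The 2-out-of-3 scheme being described is classic triple modular redundancy; a minimal sketch (names are illustrative, not Tesla's design):

```python
# Triple modular redundancy: with three independent processors, a
# single errant output is both detected *and* identified by majority
# vote; with only two, you can detect a mismatch but not tell which
# unit is wrong.
from collections import Counter

def vote(outputs):
    """Return (majority_value, index_of_errant_unit_or_None)."""
    value, n = Counter(outputs).most_common(1)[0]
    if n < 2:
        raise RuntimeError("no majority: more than one unit failed")
    bad = next((i for i, o in enumerate(outputs) if o != value), None)
    return value, bad

print(vote(["brake", "brake", "brake"]))  # ('brake', None)
print(vote(["brake", "coast", "brake"]))  # ('brake', 1) - unit 1 errant
```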
johnvega
This is the most exciting technology (software & hardware) and business event in a long time.
rick22
They use a batch size of 1. That is surprising to me, as all the online courses I've seen use batch sizes between 32 and 256.
nopriorarrests
Elon just promised Level 5 FSD this year, by the end of it.
rootusrootus
If by this year he means 2050, then I'd still be kinda surprised.

I just spent the weekend racking up several hundred miles in a brand new Model P3D. Phenomenal performance, I am completely sold on the future of EV. I was underwhelmed by autopilot however. Drives like a drunk. The adaptive cruise control part actually made me motion sick. I had high hopes going in, so it was a pretty big let down.

I suspect Tesla is going to find themselves on the other end of a class action lawsuit over all the FSD upgrades they've been selling.

strainer
These engineers seem very uncomfortable on stage; maybe the late start has thrown them off kilter. I've been dying for them to relax a little.
JoshTko
I wish someone would ask what gaps remain for full self-driving. It seems like Elon is saying it's just a data collection issue at this point.
mellow-lake-day
Absolutely great talk! Especially impressive how big the fleet is and how well they can use the data they have.
godelski
15 minutes in and it is still just showing stock footage. This thing was supposed to start at 11am PST, right?
typo_hunter
11am Pacific Standard Tesla Time, the conversion to PST is iffy
godelski
Looks like it is ~+40 mins
Animats
The Verge says "The event was slated to start at 2PM ET (but is running a little late)". Still showing car commercial type clips.
martythemaniak
I feel Tesla livestreams are very much like their cars - they never launch on time, but when they eventually do, they're pretty good and worth the wait.
paul_42
is anyone knowledgeable with how this compares to Waymo's solutions? I'm curious.
jefft255
They are fundamentally different: to summarize, Waymo follows the HD maps approach when you have a very precise map of a given city, with both metric (actual 3D shape of the environment) and semantic (lanes, sidewalks, signs) information, in which you localize yourself with centimeter-level accuracy (with GPS, SLAM etc). When you're localized you can combine information that you see now such as cars around you and high quality information coming from the map.

Tesla, on the other hand, thinks this is too fragile to changing environments and works with regular maps (think Google Maps level of detail, probably a little better), combined with local semantic information such as signs and lane markings. This also means they put less pressure on localization, because they don't use the map to detect e.g. speed limits (I'm pretty sure Waymo can detect those signs; it's just that the existence and position of the sign is already known in a global world frame).

kapravel
link to the livestream video on youtube: https://www.youtube.com/watch?v=Ucp0TTmvqOE&t=4150s
agumonkey
I wonder how comma.ai is taking the news. Fleet + imitation was their angle too.
sackroyd
Uber and Lyft are fucked.
joetribiani
How so?
gpm
If Tesla succeeds at FSD - Tesla is competing against them with hardware that operates at (using the numbers from the presentation, if I recall them correctly) ~1/10th the price.

They have contracts in place with the owners of cars they have sold preventing them from letting Uber and Lyft use the self driving capability to compete.

DeonPenny
Not to mention they have the Apple-vs-Android advantage: since they build their own hardware, optimization is far easier than with Google's partnering approach.
eastendguy
Ok, I am late to the show. Where can I watch the recording?
tim333
https://www.youtube.com/watch?v=tbgtGQIygZQ
sahin-boydas
It is pretty technical. Wow. Great. Thank you Tesla.
jak92
How much b-roll do they have?
godelski
It loops
sea-shore
Maybe it is a test to see when the AI generating the video breaks
kempbellt
Did that roadster just pass two model 3s on a double yellow blind turn? Smh.... but that was awesome
tomjakubowski
hopefully that wasn't an autopilot demo
_eht
(more like a non-stop ad of someone taking medium speed turns on a scenic byway... nice try tho)
microdrum
Stream just went offline. Anyone have a link?
microdrum
Here it is: https://www.youtube.com/watch?v=Ucp0TTmvqOE
Chico11Kidlet
The only thing that makes sense is that Elon is a time traveler from the future. Nothing else makes sense. He knows what is going to happen. That's how he speaks so confidently about LIDAR's uselessness.
schintan
Doesn't portend well, the way it is going; looks like they don't have anything to show here.
secfirstmd
Musk: "Lidar is a fools errand"

Let's hope he is right.

swagasaurus-rex
"It's like having two appendices. One appendix is bad"
dwighttk
I don't know the context for this, but one appendix isn't bad... appendicitis is bad.
mynameisvlad
He means the appendices in a book.
dwighttk
I just watched the 5minute distillation and I’m pretty sure he was talking about the organ.
danhak
"Lidar is a fool's errand...you'll see." God I love Elon.
sairahul82
Karpathy's presentation is really good. Watch it later if you get some time. The key points are:

- Tesla is using the fleet for learning ("fleet learning"). This has 3 components: a. trigger infrastructure that collects the kind of data Tesla is looking for in training - the rationale here is that we don't need massive amounts of the same kind of data; we need the right kind of data to train neural networks correctly; b. a data engine which learns from these examples; c. shadow mode, where they deploy the learned model, test how it's doing in the real world, and iterate the process.

- Drivers are themselves acting as labelers, and Tesla is in a unique position to take advantage of this.

- People drive with vision only. Visual recognition is essential for autonomy. Lidar carries much less information than vision: for example, to identify that the thing on the road is a plastic bag, lidar provides only a few points, but vision gives much more data for telling this.
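The trigger / data engine / shadow mode loop can be sketched roughly like this (a toy sketch under my own assumptions, not Tesla's actual pipeline; a "frame" here is just a number and the "model" a memorizer):

```python
import random
random.seed(0)

# Toy fleet-learning loop: triggers flag frames the model is unsure
# about, those get labeled and trained on, and shadow mode measures
# disagreement against what the (simulated) human driver actually did.
def human_action(frame):          # stand-in for the driver's ground truth
    return frame % 2

class ToyModel:
    def __init__(self):
        self.known = {}           # memorized (frame -> action) pairs
    def is_uncertain(self, frame):
        return frame not in self.known
    def predict(self, frame):
        return self.known.get(frame, 0)

def fleet_learning_round(model, frames):
    # a. trigger infrastructure: collect only the interesting frames
    triggered = [f for f in frames if model.is_uncertain(f)]
    # b. data engine: "train" on the auto-labeled examples
    for f in triggered:
        model.known[f] = human_action(f)
    # c. shadow mode: how often does the model disagree with drivers?
    disagreements = sum(model.predict(f) != human_action(f) for f in frames)
    return disagreements / len(frames)

frames = [random.randrange(100) for _ in range(1000)]
print(fleet_learning_round(ToyModel(), frames))  # 0.0 after one pass
```

In a real pipeline the shadow-mode disagreement rate would drive which triggers to add next, which is the iteration Karpathy describes.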

slg
>Karpathy's presentation is really good. Watch it later if you get some time.

I would second this recommendation. He did a really good job not only explaining Tesla's approach, but also breaking down the concepts in an easy to understand way for laypeople. That is a rare skill to have especially among someone who is obviously so technically accomplished.

chibg10
Based on what I know about how high-level executive meetings and presentations go (which is the role Karpathy is playing here), there's a pretty good chance the presentation was not actually developed by Karpathy alone but by a Tesla team of significant size.
Animats
It was significant that the brief bits of video showed the system identifying "driveable" areas, shown in blue. That's important. The first step is to conservatively mark driveable areas, where you can physically drive. Then you get to plan where to go.

Amusingly, if you wanted to train a vision system to recognize driveable areas, the straightforward approach would be to use a LIDAR system to measure road flatness. Watching the road go by on video as a human driver drives is not very useful for this. That just tells you about the part of the road a human driver chose to use, not about the road geometry itself. The driver probably won't use the road shoulders, but the drivable area system needs to evaluate them.

Recognizing drivable areas is what keeps you from hitting solid objects. You don't have to classify them, just note that they're not flat road.

b_tterc_p
> The rationale here is we don't need massive amounts of the same kind of data; we need the right kind of data to train neural networks correctly.

This is really important. At a certain point additional miles on similar roads in nice conditions with fair traffic won’t help anymore. The data variety is really important.

millettjon
They also probably want to learn from the best human drivers and not repeat the mistakes of the average.
krzkaczor
What I found interesting as well is that they (him and Elon) hated the idea of using high-definition maps, saying that if you need them, you're not really doing self-driving. For example, when driving conditions change (a roadblock or anything else), you're done.
zaroth
That is very interesting. High definition maps perhaps not, but I do hope maps of some kind are an integral part of the logic.

For example on a country road near my house, the road veers left while a side street continues perfectly straight. The current auto-pilot has a very hard time staying on the main road and almost always disengages before putting enough left-steer to stay in lane.

A human who didn’t know about the road (no knowledge of the left bend ahead), and who missed the big yellow “left curved arrow” sign, would likewise probably end up going straight onto the side street.

This is a hierarchy of stateful awareness which I believe is essential for successful self-driving — the only way to increase NN weighting to the point where the car will convincingly execute the left bend to stay on the main road is some awareness that a left bend is upcoming. This is a case where you can’t drive the road by staring at the lines. And there are many, many others.

Stratoscope
> when driving condition change (roadblock or anything) you're done.

If you live near Palo Alto or Mountain View, you probably see a few Waymo cars every day. But I have not seen a single one try to navigate the new interchange under construction at Willow and 101.

RivieraKid
If this is true, why do Waymo (or any of the top 10 self-driving companies besides Tesla) use Lidar?

Two facts:

1. Tesla can't use Lidar because of cost and aesthetics.

2. Elon has a strong incentive to oversell their approach to self-driving.

trixie_
From the presentation they mention that Lidar is good at developing tech demos quickly, but they've found it is insufficient in reality. Which of those 10 companies you reference have moved far beyond tech demos?
foobiekr
I would suggest that the ultimate quick-to-impressive-demo technology of all is deep learning and that as of yet it has proven insufficient.
RivieraKid
Waymo operates the first autonomous taxi service in Phoenix with plans to expand geographically, no info about the others.
natch
Limited routes, geofenced, not truly autonomous, mows down pedestrians even with a supervisory driver.
dyarosla
Source for the last two claims?
natch
Can't edit now, but I was mistaken, that was an Uber car.

The Waymo robotaxi cars are supervised. Source: https://www.apnews.com/09894dee68d7496399f176a77a8bc98d

slg
Phoenix basically provides perfect conditions for self driving cars. There is very little precipitation. It gets more daylight and sunlight than any other major city in the US. There is little to no vehicle endangering wildlife. The city was mostly designed post-cars meaning there are lots of big newish straight roads and there isn't much pedestrian culture. To put it simply there is a lot of "reality" that exists in other markets that isn't very present in Phoenix.

I'm not saying Waymo can't handle it, but there is a pretty big difference between being able to handle Phoenix and its suburbs compared to Boston and its suburbs for example.

rayiner
Note that currently Tesla’s “autopilot” doesn’t even beat Toyota’s and Subaru’s adaptive cruise control: https://www.caranddriver.com/features/a24511826/safety-featu...

You can’t even mention Tesla in the same breath as Waymo.

MPSimmons
My Tesla has gotten immensely better in the past few months. The release of the version 9 (https://www.tesla.com/support/software-v9) was revolutionary. I would love to see the same test repeated with current software (and then again, once AP HW3 is released)
slg
That article is judging AEB which is a safety feature that comes free on Teslas and isn't a part of the Autopilot system. It would obviously instill more confidence if Teslas performed better, but this isn't a measure of the company's Autopilot or FSD capabilities since it is entirely possible that AEB is a completely independent system.
rayiner
AEB is allegedly built on the same vision system: https://electrek.co/2017/04/25/tesla-automatic-emergency-bra...

> But the system still wasn’t at feature parity with the first generation Autopilot, primarily because of the lack of Automatic Emergency Braking (AEB) feature. Electrek has now learned that the important feature is going to be released this week.

> As we reported during the release of the 8.1 update, AEB has been a high priority feature for Tesla, but due to the nature of the safety feature, the company needs to make sure to get the false positive rate as low as possible in order to implement it safely.

> Unlike with the first generation Autopilot, Tesla has to build the feature from the ground up using its own ‘Tesla Vision’ technology and the new sensor suite.

slg
You are linking to an article describing the 2nd hardware version. They have since released version 2.5 and today’s demo was about version 3.0 which is in current production cars.
tomjakubowski
> It gets more daylight and sunlight than any other major city in the US.

I think every place on Earth gets just about the same amount of daylight in a year — ~182.5 days' worth — right? (Not accounting for solar eclipses, or that the earth's spin is slowing.)

Phoenix having relatively sunny and dry weather is a point taken though.

slg
I phrased poorly. Daylight does vary by location, but that wasn’t specifically the point. Phoenix combines one of the sunniest cities in the country with an average day length that is near the most ideal for the average workday due to geography and no DST. If you work 8am-5pm, your commute will be in daylight for the entire year.
RivieraKid
Some of the rides are unsupervised and they can handle night just fine.
mrep
Is Tesla able to perform at Waymo's level in Phoenix? I would expect them to be showing that off if they could. Let's be real: there have got to be some Teslas with Autopilot in Phoenix, and the fact that they aren't trying to one-up Waymo with their Phoenix Autopilot stats suggests they cannot beat Waymo.
slg
Autopilot isn't trying to compete with Waymo and therefore any comparison there is irrelevant. Tesla hasn't rolled out FSD publicly, so it is unclear whether Tesla would have any data about how it performs in Phoenix.
mrep
> Autopilot isn't trying to compete with Waymo and therefore any comparison there is irrelevant

So their autopilot sensors, disengagement data, and algorithms from their autopilot cars aren't being used at all for their FSD program whatsoever?

> Tesla hasn't rolled out FSD publicly so it is unclear whether Tesla would have any data about it performs in Phoenix.

Isn't that because they are using their Autopilot program — plus the people who upgraded to the FSD sensor suite — to train their FSD tech under the previously mentioned Autopilot program?

slg
Yes, that data is used to train the FSD tech, but that doesn't mean Tesla has FSD stats that would mean anything in comparison to Waymo's. This is neither a positive nor a negative sign for either company; they are just approaching things differently and therefore can't easily be judged against each other.
wrs
From what I can see, overselling is Elon’s standard strategy for whatever technical approach he’s decided is the right one. (And I can’t really say that doesn’t work for him.)
kermitismyhero
In most legal jurisdictions, lidar in consumer devices requires special permits from health and safety agencies. It's such a new tech in consumer devices that regulatory approval timelines can be highly unpredictable. And if there's one thing that Elon Musk loathes, it's unpredictable regulatory approval timelines. Plain old videocameras don't require any sort of special licensing in the vast majority of legal jurisdictions.
leesec
A better question is, how come any of the companies using Lidar haven't shipped anything as good as Autopilot given that it's so much better?
krzkaczor
As far as I understood HD maps are static data obtained by many various ways (not only lidar)[1]. AFAIR they claimed that Lidar itself provides some value (teaching neural networks how to measure the distance to various objects etc) but the gist is that image recognition is an absolute must-have for self-driving cars (which seems right to me).

1 - https://medium.com/@LyftLevel5/https-medium-com-lyftlevel5-r...

salty_biscuits
Apart from the map data going stale as it gets out of date, you always have the problem of registering the vehicle's position within the map. GNSS data can have a really long-tailed error distribution (particularly within urban canyons and in tunnels). So you always need some way of making a local map and placing it within the high-detail global map. There is a pragmatic argument that if you already have a detailed local map, do you really need the global map apart from high-level navigation tasks?
cromwellian
That's not how maps are used, and the map can be used as a purely negative signal if you want, as in "no go" areas instead of "yes, this is safe".

As it stands, Tesla's vision only system has no other backup system to tell us what is off limits.

salty_biscuits
Can you elaborate a little about how the maps are used? The "go/no go" is basically the sort of thing I mean by "high level navigation", the sort of thing a satnav system would provide help with. If they have nothing like that it seems insane. I guess there is an ambiguity around how high detail is "high detail".
cromwellian
Neural networks are probabilistic. They output predictions and are rarely 100% confident. So when Tesla's visual neural networks are assigning inferred labels to things like cars, bikes, pedestrians, road barriers, etc., there is always some chance — say, 0.1% — that they're wrong.

If you have other sensors, like LIDAR, radar, and high resolution maps, then you can corroborate your predictions, so for example, if the visual system predicts a 99.9% chance you can make a safe right turn, checking the LIDAR and map will provide additional checks, even if your map is out of date. Consider:

1) Camera system predicts you can make a right turn. You have no map, so you make it.

2) Camera system predicts you can make a right turn; you have a map, and the map says there's a wall there. You don't make the turn. If the map is wrong, you're still safe, you just missed your turn off.

3) Camera system predicts you can make a right turn; you have a map, but the map is out of date and says there's no wall there — a new barrier has been erected and the map doesn't contain it. This is equivalent to situation #1.

4) Same as #3, but you have SIDE RADAR, so you sense the barrier.

5) Same as #3, but you have SIDE LIDAR, so you sense the barrier.
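A minimal sketch of the veto logic in those scenarios (function and argument names are illustrative, not any vendor's API): the camera proposes the maneuver, and any equipped source — map or side radar — can veto it, while an absent source cannot.

```python
def turn_allowed(camera_clear, map_blocked=None, radar_blocked=None):
    """Proceed only if the camera predicts clear AND no available
    source (map, side radar) reports an obstacle. None means that
    source isn't equipped, so it cannot veto."""
    if not camera_clear:
        return False                   # the camera alone can always veto
    if map_blocked or radar_blocked:   # an affirmative "blocked" vetoes
        return False
    return True
```

In scenario 1 the camera alone decides; in scenario 2 a cautious map vetoes and the worst case is a missed turn; in scenarios 4 and 5 the side sensor catches the barrier the stale map missed.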

The problem is, Tesla is not only going "all in" on "we only need video cameras", but when Elon Musk was asked directly whether the lack of side radar was a problem at intersections, he again fell back on "video is all you need". If that's the case, why even have front radar?

Clearly, video is not enough, and if you have front radar, there's no reason to argue against side or rear radar, and by proxy, against LIDAR.

All of this comes down to the fact that Teslas ship with a cheap, fixed sensor suite. They can't change it now — it's too late — so they are trying to justify their previous design decisions with overconfident sales tactics and PR. I think this is somewhat reckless and dangerous to the entire AV industry, because the more Teslas that crash and kill people, the more all AVs will be tarnished as dangerous by the public.

Musk is acting like Tesla is SpaceX and they can afford to blow up rockets or astronauts during iteration, but these are consumer products. People will trust these things to be safe, not to be "beta"

salty_biscuits
Ok, lots of separate issues there. In terms of actually using a map, I think my original point still stands: you need to use online navigation information to localize yourself within the map before you can use it. There is also the nontrivial issue of actually doing a Bayesian update of "can I go around this corner" given the error budget coming from how accurate the map is, how accurate my position within the map is, and how accurate my other sensors are. Weighting all these probabilities and updating the state in a principled way is a bit of a nightmare (given the long-tailed and multimodal distributions flying around).

In a well-principled controller, lack of certainty about the state should simply result in a more conservative control action. In your vision-only scenario, the correct control under high uncertainty is caution, i.e. slow down and don't turn until it can work out what is going on. This has nothing to do with sensors. It is possible to be safe with poor sensors, but it will probably result in unacceptable closed-loop performance (it will drive like grandpa).

A big problem here is that it is really hard to get this stuff right. Probabilities will not be actual probabilities, and solving the correct control problem is intractable in real time (i.e. actually solving the stochastic program for the optimal control action). Engineering problems always have constraints; I don't have a problem with a limited sensor set. Sure, it is always better to add more sensors, but money is a problem, and so are logistics. I do have a problem with doing this hard problem badly and practicing in public. It's a really difficult problem, and being cavalier probably isn't the way to go.
cromwellian
Visual systems have problems with obstructions and adversarial imagery, having side facing radar which can go under and around obstacles at long distances for example is a huge benefit at intersections.

Musk swept this simple fact under the rug when asked during the press conference, again overselling a 100% visual solution.

torpfactory
Does anyone know how Lidar would work in the future when many other vehicles are also using lidar?

From working with some different types of photo sensors (CCD, avalanche photodiodes, photomultiplier tubes), you might imagine that incident light from another car's emitter would be much brighter than the reflected light the lidar sensors are reading and cause saturation.

toomuchtodo
Phase lock or introducing jitter to laser timings. Check out Animats' comments on this topic, it's where I've learned from regarding this.
torpfactory
Thanks!
CamperBob2
Probably the same way GPS satellites avoid jamming each other: modulate the lidar transmitter with a unique coded sequence that can be detected by correlation.
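A toy digital illustration of that coded-pulse idea (real lidar receivers correlate analog pulse trains, so this only sketches the principle): each transmitter gets its own pseudorandom ±1 chip sequence, and the receiver reports the delay at which correlation with its own code peaks. Another car's code correlates only weakly with ours, so its pulses tend to add noise rather than a competing peak.

```python
import random

def make_code(n, seed):
    # Pseudorandom +/-1 chip sequence, unique per transmitter.
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def correlate_at(signal, code, offset):
    return sum(signal[offset + i] * chip for i, chip in enumerate(code))

def detect_delay(signal, code):
    # Offset where correlation with *our* code peaks = round-trip delay.
    return max(range(len(signal) - len(code) + 1),
               key=lambda k: correlate_at(signal, code, k))

our_code = make_code(64, seed=1)
signal = [0.0] * 200
for i, chip in enumerate(our_code):
    signal[37 + i] += chip          # our echo arrives 37 samples late

print(detect_delay(signal, our_code))   # → 37
```

At the true delay every chip lines up (correlation 64 here); at any other offset only a partial overlap of the code with its echo is possible, so the peak is unambiguous.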
salty_biscuits
I've only got experience with aerial survey lidar, but "blinding" the photomultiplier with your own transmit pulse is pretty common, e.g. hitting something retroreflective like a cats eye marker or a street sign. Even the sun off a puddle can do it too. It is a pretty obvious artifact to recognize and remove. I guess it would depend on the mechanism of the lidar (i.e. if it is scanning then what is the probability that the different cars are scanning at each other at the same time, probably low). Might be a problem if they use flash or geiger mode lidar. But then the pulse rate would be lower. Interesting to think about. I'm sure the people working on it would have an answer.
tjoff
Does anyone doing lidar not also do visual recognition?
IshKebab
Yeah I don't see how it is either/or. Autonomous driving is so hard surely you need all the sensors you can get! Maybe when it is working really well you can optimise the cost and only use cameras, but we are decades away from that.

I suppose if you want to use customers as guinea pigs you do have to have a low-cost solution first, which is why Tesla is advocating it. I'm sure if LIDAR were cheaper they'd be using it.

sairahul82
I don't think anyone is doing this. This is the point Karpathy is making: vision is essential and needs to be perfect, and it also has all the information needed to recreate the 3D environment. That's why lidar is not required and is expensive.
Fricken
Vision is probabilistic. You can improve your probabilities with more data and deeper nets, but the probability is never 100%. Inferring depth by triangulating features extracted from vision is also probabilistic.
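A concrete illustration of that depth uncertainty for an idealized stereo rig (the focal length and baseline below are assumed for the example, not any production camera's): depth is Z = f·B/d, so a fixed one-pixel disparity error produces a depth error that grows with the square of the range.

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    # Triangulated depth for an idealized rectified stereo pair.
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=1.0):
    # First-order error propagation: dZ ≈ Z^2 / (f·B) · d_disparity.
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

f, B = 1000.0, 0.5               # assumed: 1000 px focal length, 50 cm baseline
print(stereo_depth(f, B, 10.0))  # → 50.0  (10 px of disparity = 50 m away)
print(depth_error(f, B, 50.0))   # → 5.0   (±5 m at 50 m range)
print(depth_error(f, B, 100.0))  # → 20.0  (quadruples when range doubles)
```

A lidar's time-of-flight error, by contrast, is roughly constant with range — which is the asymmetry this thread is arguing about.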

Lidar gives you very rudimentary depth and positional data, but it is precise, and reliable, and the false positives are low. Lidar is not seeing the world as a series of probabilistic phantasms that may or may not be a pedestrian, or a jersey barrier, or whatever.

Elon said: "Lidar is lame! Lidar is Lame! lame lame lame lame."

But then he later said: "We've gone over this multiple times. Are we sure we have the right sensor suite? Are we sure we have everything we need?.... No."

Lidar is a crutch and we'll see how hard Tesla is limping in a couple years when Elon is claiming They're going to have Robotaxis.

To Tesla's credit, it is theoretically possible to develop a fully autonomous vehicle without needing Lidar. But before you can deploy, you need to integrate Lidar, and it can be retrofitted as a largely independent system: necessary for its added redundancy, but not needed for informing the planner except when the Lidar is seeing things the vision system isn't.

The presentation was a pretty good run down on various techniques and methods that are being utilized by different autonomy developers. The downside is that everything is framed as a rationalization as to why Tesla has some kind of secret weapon, when they're mostly doing conventional things, and not all the conventional things you need to build a complete autonomous vehicle.

I was waiting for Elon or Karpathy to trot out the magic pixie dust that will have Tesla's driving empty in a couple years, but that was nowhere to be found.

maccam94
I believe he claimed that they did have all of the sensors they need with hardware v2, and they had no need for LIDAR. We'll see if that's true. One thing that stood out to me is that he said they only need radar in front because you're only moving fast in that direction. Does that mean the cars will be susceptible to T-bone accidents?
Klathmon
>Does that mean the cars will be susceptible to T-bone accidents?

Is there much of anything a car can do to prevent being T-Boned?

And even if there is, I don't really see it being something that radar would help much with. If you and another car are approaching perpendicularly, you'll have an almost perfect situation to get positioning and speed information over a pretty short amount of time.

Of course, I've never done anything even resembling work in this space, and I don't want to pretend to know the answer, but the claim that side radar isn't necessary seems to pass a smoke test.

Animats
Is there much of anything a car can do to prevent being T-Boned?

Yes. Stop suddenly while entering the intersection when conflicting traffic is detected. Google cars do that, and they have been rear-ended several times because of it. You can find those events in the CA DMV accident reports. Better than being hit by the cross traffic.

Robotbeat
There are vision systems on the side. The car primarily relies on its vision system, like humans. The radar is a check, not the primary.
Fricken
I don't know; I guess it will be borne out in the data, but I hope they aren't seriously planning to drive people through busy intersections, where T-bones are likely to happen, under the auspices that what they've got is a level 3/4 system.

Waymo's safety-critical disengagements are at 1 every 11,000 miles. Fender benders happen about once every 100,000 miles, so just on that Waymo is off by an order of magnitude — and Waymo spares no expense on compute or sensors.

grey-area
Over how many miles?
salty_biscuits
"Lidar is not seeing the world as a series of probabilistic phantasms that may or may not be a pedestrian, or a jersey barrier, or whatever"

Lidar is inherently probabilistic too, like every sensor. Just because the photomultiplier voltage rises and some peak-detection hardware can correlate it with a transmit pulse, it does not mean that there is a "point" at that position in 3D space. It has different error characteristics than vision, sure, but it is still an inference problem with corresponding artifacts to deal with. In fact I would say the different error characteristics make them ideal for sensor fusion: lidar for initial depth and vision for filling in the details.
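That fusion argument can be sketched in one formula: weight each independent range estimate by its inverse variance, so the precise lidar return dominates while the noisier vision estimate still tightens the result. The numbers below are purely illustrative.

```python
def fuse(est_a, var_a, est_b, var_b):
    # Inverse-variance weighting of two independent estimates; the fused
    # variance is always at most the smaller of the two inputs.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return ((w_a * est_a + w_b * est_b) / (w_a + w_b),
            1.0 / (w_a + w_b))

# Lidar: 20.0 m with 0.1 m^2 variance; vision: 22.0 m with 0.9 m^2.
est, var = fuse(20.0, 0.1, 22.0, 0.9)
print(round(est, 2), round(var, 2))   # → 20.2 0.09
```

The fused estimate sits close to the trusted lidar reading, and the fused variance is smaller than either input — the whole case for using both sensors.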

cromwellian
None of the cars using LIDAR are using LIDAR only, they are fusing LIDAR, radar, maps, and computer vision.
metafunctor
I think the point was that lidar is too low resolution. You must have very high resolution (in both time and space) to achieve L4 or L5 — to accurately determine if a pedestrian is not paying attention, or if a bicyclist is looking to cross the road.

Not that Tesla is doing any of that right now, they most likely are not. But a lidar is not going to, either. Not ever, which is the point.

I thought the fleet learning aspects Karpathy outlines came pretty damn close to the pixie dust.

robszumski
I haven't seen any mentions of cheaper Lidars, like single axis scanning only, to get some sort of depth map without spinning mirrors or 360 degree positions that ruin aesthetics.

I feel like this would be able to detect trees in the distance and other objects to orient the road/lane detection. And keep it cheap. Possibly add on to existing cars.

TheSpiceIsLife
Search HN for 'solid state lidar'

https://hn.algolia.com/?query=solid%20state%20lidar&sort=byP...

Symmetry
You can achieve L4 before you can determine if a pedestrian is paying attention. Your car will be slowing down when it doesn't have to but it can work and work safely.
rayiner
Slowing down isn’t always an option and it’s not always safe. (What happens when all the human driven cars keep driving at normal speed? You’ve now created an unsafe traffic condition for everyone.)
Fricken
Well, it has to work well enough to be useful. When Waymo reports one disengagement every 11,000 miles, that's just the safety critical disengagements. The total disengagement rate is much higher than that. They are also relying on teleoperators who can help guide the vehicles out of tricky situations, which are myriad and it will take forever for Waymo to build features down to a granular enough level that they can operate reliably enough to compete with Uber.
Fricken
Musk would have you think it's a Lidar vs. cameras debate. It's not. It's a Lidar + cameras vs. just cameras debate. Lidar has significant shortcomings. So do cameras. But together they make for a decent perception system.

With regards to fleet learning, Tesla has been going on about it for years, but more data is not better; better data is better. California's disengagement reports have seen criticism because they place an emphasis on the number of overall miles driven as a measure of progress, when actually it doesn't mean much. Companies like Zoox, after spending years doing small-scale closed-course testing, hit the ground running: with only 10 vehicles and a few months of driving around San Francisco they were able to take reporters for autonomous rides, and they took a couple dozen journalists to various pre-designated spots around SF with only one disengagement the whole day.

The nice thing about the whole fleet-learning narrative is that it makes Tesla drivers feel like they're helping just by driving around.

penny45
It's a radar + cameras vs lidar + cameras debate. It sounds like radar + cameras > lidar + cameras
Fricken
Well technically it's Lidar + Cameras + inertial measurement units + GPS + ultrasonic proximity sensors + Radar vs. all those things minus that one sensor that costs too much.
GeorgeTirebiter
That's correct, it's lidar+cameras vs just cameras. But it should be RF + radar + sonar + lidar + cameras vs anything else. If the 737Max8 tells us anything, it's that one sensor just isn't enough.

EM radiation from UV down to radio has different propagation characteristics, and different materials attenuate each band differently. Radio easily passes through most non-metals with less attenuation than 400-800 THz radiation (light). When I was at Tesla, I tried like heck to get people to internalize this, but the 'cameras + one forward-looking radar is enough' crowd has clearly won.

Heck, RF can shoot through mud, rain, sleet, and snow. Cameras? not so much. Although I agree that the number of 'RF pixels' one gets is nowhere near what a camera provides, it does provide some pixels when the cameras are covered in road muck.

And what about when it's dark? Are the headlights going to provide enough light for the cameras to function well in all road conditions?

Roritharr
You were part of that discussion?

What an amazing position in history to have been in, even if things didn't go the way you felt (and imo probably were) right about.

leesec
>"We've gone over this multiple times. Are we sure we have the right sensor suite? Are we sure we have everything we need?.... No."

Where did he say this? He repeatedly said they have everything they need to drive.

>but before you can deploy, you need to integrate Lidar,

Why?

>not all the conventional things you need to build a complete autonomous vehicle.

No one knows how to make a completely autonomous vehicle because they don't exist. I'll take the billionaire tech genius's approach, who's gotten farther than anyone in this field, over your armchair opinion, thanks.

Fricken
There's a reason why nobody else in the industry thinks they can make robotaxis work without Lidar. Of course, if you were familiar with the rest of the industry you wouldn't say Tesla has gotten farther than anybody else in the field, that's just ignorant.
leesec
Well I'm a software engineer at an automotive tech company who interfaces with all the big OEM's daily so I'm gonna go ahead and say I'm very familiar. But that's not a reason you know, that's just you diverting.
tjoff
> whose gotten farther than anyone in this field

What they've done so far is dangerous lane assistance. From what's been seen, they are nowhere near Waymo. I would hate for Tesla and Uber to ruin this for everyone else.

obastani
Waymo is almost certainly using a ton of computer vision in its cars. Vision is needed to classify objects on the road, predict the behaviors of pedestrians, bikers and other drivers, read road signs and traffic lights, etc.

As far as I can tell, LIDAR makes a lot of prediction tasks much easier (since you have both images and LIDAR point clouds), and is also a backup for obstacle detection to ensure that your self-driving car doesn't crash.

Someone
I agree with the essential bit, but “has to be perfect”? Do you know how bad human vision is at estimating distances, speeds, angles, etc? I don’t think you need perfect vision to implement the first law of autonomous driving (“Avoid hitting stuff”)

It also doesn’t have all the information to recreate the 3d environment. There’s all kinds of inferences that human drivers make that require a good model of the world, to start with. For example, a car shouldn’t brake for a small object that moves with the wind, even if it looks like it’s made of concrete, or looks like a child.

threeseed
Vision is not perfect though. It is impaired during night time and difficult conditions and can be obscured if dust/dirt gets on the camera.

And not sure who you are but I trust all of the other companies who are doing this a lot more. And they are all using LIDAR.

XorNot
Dirt and dust can also impair LIDAR (it's just light, remember?), as can adverse weather. (There's a big question over how well any California-based company will perform in a non-California climate; I suppose my country, Australia, is going to be a big beneficiary of all this.)

It's a mistake to think LIDAR doesn't have these drawbacks — it's an optical system at its core.

derpherp
Yes, LIDAR has drawbacks too. That's why almost all autonomous vehicle systems "fuse" LIDAR and vision data together. The debate here should be LIDAR + camera vs only camera. Musk has turned the conversation to LIDAR vs camera.
natch
Wow, you are commenting, but I guess you didn't watch the event, because that's not what Musk said.
derpherp
I'm sorry you feel that way. I indeed watched the entire stream and not once do I recall Musk mentioning sensor fusion.

Instead, he just focused on the drawbacks of LIDAR, which could be corrected if feeds from LIDAR and cameras were combined. This is why every other AV company uses sensor fusion. No one uses LIDAR just by itself, and Tesla should not use vision just by itself.

je42
I guess the main question is: why can humans do it without lidar? Or, put the other way: why would lidar bridge the gap between a current vision-based system and human-level performance?
derpherp
Humans are able to perceive the world thanks to a bunch of monocular and binocular vision cues (https://en.wikipedia.org/wiki/Depth_perception). Cameras cannot offer a few of these cues. For example, cameras can't offer accommodation, as they can't mimic changing focal distance the way a human eye does. Tesla also does not seem to have cameras that focus at different focal lengths as the human eye can.

In addition, even if we could get all the cues, current computer vision techniques are nowhere near smart enough to figure out a lot of the stuff the brain can. There is no general AI that can learn from a single instance of a test case as the human brain can. Even stuff that is simple for the human brain can trick computer vision (https://globalnews.ca/news/3654164/altered-stop-signs-fool-s...). There is a lot of catching up to do in terms of computer vision.

A LIDAR can help with these cases a lot. It's an extra input that generates a 3D point cloud of the surroundings. LIDARs have their own failures but can be combined with computer vision to compensate. This is the central idea behind sensor fusion. Not able to detect whether an object is indeed a few feet in front of you due to confusing parallax? LIDAR can help. Looking at a new object you can't identify because you have never seen it before? LIDAR can at least tell you how far away it is and its shape, in addition to how fast it is moving.
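The "range and closing speed with no classifier" point is easy to make concrete. A toy sketch (the point coordinates are made up): even a handful of returns off an unidentified object yields its distance from the sensor, and two successive scans yield a closing speed.

```python
import math

def nearest_range(points):
    # Distance (m) from the sensor at the origin to the closest return.
    return min(math.dist((0.0, 0.0, 0.0), p) for p in points)

# Hypothetical returns off an unknown object, two scans 0.1 s apart.
scan_t0 = [(12.0, 1.0, 0.5), (12.5, 1.2, 0.4)]
scan_t1 = [(11.0, 1.0, 0.5), (11.5, 1.2, 0.4)]

r0, r1 = nearest_range(scan_t0), nearest_range(scan_t1)
closing_speed = (r0 - r1) / 0.1   # m/s toward the sensor
print(round(r0, 1), round(closing_speed, 1))
```

No label, no training data — just geometry: the object is about 12 m out and closing at roughly 10 m/s, which is enough to brake on.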

Now, LIDAR is not a panacea that comes to the rescue of computer vision's failings and solves all of AV's problems, but it definitely furthers what a self-driving car can perceive and delivers a higher level of safety to the end user.

michaelt

> lidar is not required and is expensive.

Given that the only company claiming that is the one whose cars sometimes hit (or even swerve into) clearly visible stationary concrete barriers in broad daylight [1, 2, 3, 4], I consider that claim far from proven.

[1] 2019 https://arstechnica.com/cars/2019/03/dashcam-video-shows-tes... [2] 2018 https://news.ycombinator.com/item?id=16772748 [3] 2017 https://bgr.com/2017/03/02/tesla-crash-video-texas/ [4] 2016 https://www.thesun.co.uk/news/1402992/tesla-self-driving-car...

ribosometronome
Are there any other vehicles being driven autonomously at the same scale? It seems like it may be an unfair comparison.
naikrovek
It is very much an unfair comparison. There is very little (if any) mention of the number of times AutoPilot has prevented an accident amongst self-driving skeptics.
lozenge
Yes, many other cars have lane following features and adaptive cruise control, they just don't call them "autopilot".
SheinhardtWigCo
This argument misses the forest for the trees. Even level 5 autopilot will sometimes crash into objects that are clearly visible to most humans, as will lidar-based solutions. But if the robot fleet is an order of magnitude safer than the human fleet, then it makes rational sense to prefer a robot driver to a human.
threeseed
Problem with Tesla is that it is crashing into objects no human would crash into. And doing so at high speed.
naikrovek
I dunno. I've seen perfectly sober, intelligent humans drive into some extremely stupid "situations".
threeseed
We aren't trying to build another human.

We are trying to build a car that doesn't crash.

Robotbeat
No, we're trying to build a car that crashes less than humans.

If you want a car that doesn't crash, never build the car let alone drive it. Then it'll never crash.

The point is both reduction in fatalities and increase in human quality of life. Not an impossible perfection.

karakot
really? just google/bing something like "car crashed into a wall/barrier"
FireBeyond
If not actively accelerating at them, like it did in San Francisco.
hermitdev
My car (not a Tesla) has a collision warning system but no automatic braking, and I wouldn't want it to, given the number of false warnings I receive.

The public road right before the drive to my house has an s-turn in it. The CWS goes off about half the time I go through that series of turns, just hundreds of feet from my turn. I can see the turn in the road and the guard rail ahead; the car sees a stationary object I'm driving straight towards at around 45mph. If it automatically braked with its current awareness, I might never get home, or I'd get rear-ended by the people following me.

michaelt
I think you might have replied to the wrong post here?

I didn't say anything about humans vs self-driving cars. I'm arguing about self-driving cars with LIDAR vs those without.

AnthonyMouse
> I didn't say anything about humans vs self-driving cars. I'm arguing about self-driving cars with LIDAR vs those without.

The status quo is humans so the first question is whether it can do at least as well as humans even without LIDAR. It seems to be doing that, even if the things it does get wrong are different things than the things the humans get wrong.

The question of LIDAR vs. not is a separate issue which is purely cost/benefit. Even if LIDAR has some value, it remains to be seen whether that value is ultimately more than its costs, in the same way that we know how to make cars safer by making them weigh 8000 pounds and yet in general we don't do that because the cost is more than the benefit.

SheinhardtWigCo
Your post suggested that missing any obvious obstacles casts doubt on Tesla's claim that lidar isn't necessary, but I'm saying their claim will indeed be proven true if the overall safety level of vision-and-radar autonomy is far better than that of humans, even if they sometimes crash in ways that seem obviously avoidable to humans.
rohit2412
I'm getting tired of saying this.

Why is it one or the other? Why only a human driver or a robot driver? Could you have a human driver in charge with an accident preventing robot, getting the best of both worlds? If the human falls asleep, the robot can do trivial driving, and the human still makes the more difficult driving decisions.

The question isn't if robot car is better than an average car being driven by a human. It's if the same robot car would be safer if the human was driving it, and the robot in accident prevention mode. And I don't think that is going to be true in any near future.

CamperBob2
Human drivers already space out, check their Facebook, text their friends, and otherwise lose track of what's going on without having cars that claim to do most of the driving autonomously. The more autonomy you add, the more drivers will take advantage of the opportunity to do something behind the wheel besides drive.

Autonomous cars are one of those things that really need to be done right or not at all. The notion that human drivers will be ready and able to take control at a moment's notice is nothing short of laughable.

IMO, about the only way partial self-driving cars can work would be to delegate it to a remote building full of 'drone drivers' somewhere. Heck, I'd pay for that today, just out of laziness.

MarkMc
> Could you have a human driver in charge with an accident preventing robot, getting the best of both worlds?

I'm not sure that human+robot would be better than robot alone. Eg. The human is in control and starts moving close to the adjacent lane without indicating. Does the robot (a) assume the human driver is changing lanes and so turns on the indicator light; (b) take control of the car and move it back to the center of the current lane; (c) do nothing and let the driver change lanes without indicating? The ambiguity in the human driver's intentions means the robot might make a decision that is less safe than if it was in complete control.

rohit2412
That's a fair argument to hold; that's why I limited the robot to intervening only when an accident is imminent.
throwaway34241
Sure, the way I'm imagining it is some sort of collision avoidance system, where the person drives but the computer can also put on the brake? It seems like that could be safer (although it could still cause an accident by braking in certain situations like in an intersection). If the self-driving system does other things to reduce accidents like leaving greater margins of safety for other drivers (by stopping less suddenly, pulling out into intersections differently, etc), it seems like that would be harder to integrate with a human driving, instead of having the self driving system take over.

A lot of the utility of self driving cars is not just increased safety. It's allowing disabled people to be more independent, reducing the need for car ownership with (less expensive) self-driving taxis, reducing the need for parking lots and allowing that land to be used differently, etc. Do you think self driving cars should be prohibited as long as they are less safe than the standard set by human+collision avoidance cars?

I don't think that makes sense unless cars with human drivers and no collision avoidance are also prohibited. Even then I'm not sure. As a society we know that driving is not totally safe but allow people to drive anyway because it's a useful tool. I don't see why the safety standard should be even higher for self-driving cars, which are even more useful, especially for people who can't drive.

One last thing is that anything that slows the adoption of self-driving cars will probably lead to a much greater number of old, less-safe, cars in use for a longer period of time, since it's very unlikely politically that the existing car stock will be just kicked off the road, regardless of what regulations are set for new ones.

rohit2412
> Do you think self driving cars should be prohibited as long as they are less safe than the standard set by human+collision avoidance cars?

If having a human in the car reduces the accident rate, then I propose making it necessary for the human to be in the car. Are you advocating that we make roads less safe so that disabled people can take taxis more easily?

> I don't see why the safety standard should be even higher for self-driving cars.

I see it as the opposite. The advocates for driverless cars are advocating lowering standards. For me the standard is simple. If the same car is as safe driving itself as it is assisting a human driving it, then it can drive itself. You can only remove the human if the human adds no safety in the same car. Please do not compare to "the average car on the road".

I see there are positive externalities to cheaper taxis. There are really negative externalities too, in terms of traffic, urban sprawl, and carbon emissions. But I don't think we should deviate from the topic of safety for those externalities

throwaway34241
Safety is important, but I don't think any amount of safety is worth any amount of negative externalities. I mean there's things we could do right now that would make things more safe. We could ban everyone with older cars from driving (since they don't have as many safety features). We could ban everyone from the roads that has a somewhat-higher than average accident rate. We could ban all people past a certain age. If safety is always more important than access to transportation we should do those things.

I disagree with those, so I also disagree with the premise that having a human in the car should be required for any amount of increased safety. I think the amount matters. And keep in mind we are only talking about safety increases from present day, you're just proposing to ban them in order to achieve a greater safety increase.

My >90 year old grandmother still drives. She hasn't run anyone over, but I imagine her reaction time isn't the best. Apparently it's common for cops to be lenient with elderly folks who are known bad drivers, since in many areas they wouldn't be able to get groceries or go to the doctor otherwise. I'd argue that maintaining the status quo (by having a blanket policy) for many groups of drivers would probably be safety theater instead of anything real.

Lastly, I touched on this in my earlier post, but if you care about deaths it's very important to consider what cars are actually being replaced in addition to individual car statistics. Imagine in scenario A, 10 million older cars are replaced by 10 million driver-assist cars, and scenario B where 30 million older cars are replaced by 15 million self-driving taxis. Assume the driver-assist car is safer than the self-driving car, which is safer than the older car. Even so, the driver assist scenario may have thousands more deaths, since the uptake rate is different. And this is without taking into account any safety-improving software updates, which are very difficult to estimate but may be very significant statistically.

A simple per-car comparison may lead to all of the negative externalities and thousands of more people killed. I really do not see how that outcome could be justified in the name of "safety". At the very least, even if safety is the first and only concern, policy makers should try to minimize vehicular deaths which requires considering things at the fleet level.

AsyncAwait
Not that it's relevant, but as someone who can't drive due to a disability, anything short of Level 5 autonomy basically doesn't change the calculus for me at all.
chris1993
This basically describes the Toyota Guardian system
shafyy
Watch Lex Fridman's interview with Musk (https://www.youtube.com/watch?v=dEv99vxKjVI), where they discuss exactly this topic.

Basically, Musk's point is that if self-driving cars are statistically safer than humans, then allowing humans to drive will make cars less safe again.

Furthermore (regarding some other children here), the study that Lex and his team have done shows that drivers are not less engaged while Autopilot is on (as many critics say).

rohit2412
> if self-driving cars are statistically safer than humans, then allowing humans to drive will make cars less safe again.

You mean "if driverless cars are statistically safer than humans". It shouldn't be a surprise that driver assistance decrease accidents. But Autopilot isn't a driverless system, and neither is there satisfactory proof that it is safer than a human.

gthtjtkt
> Could you have a human driver in charge with an accident preventing robot, getting the best of both worlds?

We already have this and we've had it for years. What do you think automatic emergency braking is? Adaptive cruise control? Lane keeping assist?

> The question isn't if robot car is better than an average car being driven by a human. It's if the same robot car would be safer if the human was driving it, and the robot in accident prevention mode. And I don't think that is going to be true in any near future.

No clue what you're trying to say here. Driver assist features don't improve safety?

rohit2412
So call them driver assists.

I am tired of seeing driver assists being sold as driverless systems.

SheinhardtWigCo
Do you think people buying cars with autopilot believe they are getting fully driverless systems?
eanzenberg
yes
rohit2412
People buying current cars were just told that their cars will earn them $30k a year. Is FSD driver assistance or a driverless system?
natch
Elon called the 30k per car annual revenue, not earnings. These are vastly different concepts.
mzs
The livestream is gone though https://youtu.be/Ucp0TTmvqOE
Maxious
Other channels have recorded it https://www.youtube.com/watch?v=tbgtGQIygZQ
bogomipz
>"People drive vision only."

Can you elaborate on what you mean here? Do you mean that Tesla will only be using computer vision for autonomy? Doesn't or wouldn't lidar complement CV? Or is there a practical design consideration that makes them mutually exclusive?

leesec
How many technologies do you see people working on where they say, "well, we have one approach we think is gonna work, but we're also going to spend a ton of resources and time developing this other solution that would be useless if the first approach does work. Oh, and it would make our cars at least $5k more expensive."
emef
Tesla has taken a firm stance against using LiDAR and is doubling down on CV. I'm not sure if they have radars for parking assistance, but they are using cameras as the primary sensor for perception and localization.
threeseed
That "people drive vision only" argument is so idiotic.

Humans have far higher resolution sensors and the most advanced computer ever seen behind them. And even that fails at a far higher rate than we want self driving cars to be.

XorNot
Conversely, computers don't get tired, or "distracted", or drunk. They're also faster - human cognition is running about 150-200ms behind what you actually see. Some reflexes run faster than that, but overall the human platform is not being used with that super-computer in full control while driving at high speed - our hardware isn't capable of it.

The thing we most bring to driving is navigating complex, low-speed environment changes - but we're not great at that either (see the number of toddlers run over in their driveways, for example).
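The latency point can be put in distance terms with a quick back-of-envelope calculation. The 200 ms figure is the upper end of the range quoted above; the 50 ms machine latency is my own assumption, just for comparison.

```python
# Distance traveled "blind" (before an event is even perceived)
# during perception latency, at highway speed.
speed_kmh = 110
speed_ms = speed_kmh / 3.6          # ~30.6 m/s

human_latency_s = 0.2               # upper end of the 150-200 ms range above
computer_latency_s = 0.05           # assumed end-to-end machine latency

human_blind_m = speed_ms * human_latency_s
computer_blind_m = speed_ms * computer_latency_s

print(f"human: {human_blind_m:.1f} m traveled before perceiving an event")
print(f"computer: {computer_blind_m:.1f} m")
```

At 110 km/h that is roughly a car length and a half of extra blindness for the human, before any reaction even begins.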

X6S1x6Okd1st
The resolution outside of the fovea is pretty poor. It's a little unclear what portion of accidents we could cut out if we had human level performance, but never let the attention waver or disallowed reckless driving. Judging from NMVCCS it'd cut out somewhere between 34 - 75% of accidents.

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...

lutorm
That's not really relevant though because people don't stare at a fixed point, they trade time for resolution over a larger area.
bogomipz
>"And even that fails at a far higher rate than we want self driving cars to be."

Thanks, but did you mean to say that people fail at a far lower rate that we want self driving cars to have? Maybe I misunderstood your point?

gjm11
I'm not threeseed, but I'm pretty sure they meant what they said.

Expanded version of the argument: "Karpathy says 'people drive vision-only' and apparently intends that to convince us that vision-only is good enough. But (1) those people driving vision-only are using human brains, whose abilities we have not yet come close to duplicating, and (2) even with the astonishing abilities of the human brain, those people driving vision-only make a lot of mistakes and have a lot of accidents, and we want self-driving cars to be much safer than that. So the fact that people drive vision-only is no reason to think that self-driving cars should do likewise. They're trying to be safer than people, with less computing power; why shouldn't we make up for that by giving them extra sensors?"

bogomipz
Thanks for expanding and clarifying the argument, this makes sense. Cheers.
ajross
> Humans have far higher resolution sensors

This is really not true. The phone in your pocket is just plain better. More total pixels per unit area, much greater dynamic range, able to sample over time periods an order of magnitude faster.

The reason your vision seems better is because your brain is amazingly good at synthesizing a picture of the world around you. But all that data (the sphere around you is something like 150 MP at eye resolution) is an illusion. You're only looking at a million pixel(-equivalents) at once, thereabouts.

[1] Your eyes come close only in the middle half a degree of your fovea. Everywhere else your brain gets a blurry mess and has to extrapolate.

eanzenberg
Take a photo with any camera at night and compare with what you're seeing with your eyeballs.
ajross
Have you... actually tried that? Night shots with a decent camera sensor work much better than your eyes. You're probably being fooled by the exposure control on your junky consumer phone app. Grab one with manual control over exposure/iso and sample rate, you'll be shocked.

It's not even a secret. Light capture in a phototransistor[1] is a basically direct process without a whole lot of loss or wasted surface area. Those cone cells are living things and only have a little volume to dedicate to pigment chemistry.

[1] CCDs of course can do an order of magnitude better still, and those are easily cheap enough to put in a car.

eanzenberg
How are you arguing that CCDs are better than human eyes? Take video at night and compare. Why do you think Hollywood, with their $100k+ camera rigs, still needs to add artificial light everywhere, especially for night scenes? Eyes are incredible at dynamic range, resolution, and response rate. They have to be, due to evolution.
pedrocr
If you want to see how consumer cameras can take night video much better than your eyes look up Sony A7S video. Modern cameras have such clean output at high ISO that you can turn night into day with great quality.

Modern consumer cameras have far surpassed what our eyes can do in all the metrics you describe (dynamic range, resolution, response rate) and many others. It's not even close.

naikrovek
I agree that, ultimately, vision and some IMUs will be all that's needed for an as-perfect-as-possible self-driving car.

Your statement on phone cameras being better than the human eye isn't true today, though.

The human eye has FAR better dynamic range than any camera built to date, at any price point. This matters a great deal on sunny, cloudless days in a city with tall buildings casting dark shadows, for example.

Just like how the biological system of "stupid sensor, smart thinking" helps biological organisms, that's what is going to need to be done for computer vision as well.

About 1/3 of the human brain is dedicated to vision processing in some way. Think about that a second. One THIRD of the best, most powerful organic computer known is required for us to see what we see and we are still fooled by the things human eyes can be tricked by. It's going to take a lot of neural network training to duplicate that. Fortunately, the vision skills required to drive are a subset of overall vision capabilities.

tigershark
It’s absolutely false. The eye simulates its dynamic range by continuously changing the pupil size. If you take a camera with a lens that can go from f/1.4 to f/32, you have a much, much bigger dynamic range, considering that you can also modify the exposure time. Your comparison between the eye, with its changing pupil size, and a single image taken at a fixed aperture value and a fixed exposure is totally incorrect.
ajross
> The human eye has FAR better dynamic range than any camera built to date, at any price point.

Not at all true. All you need to do is take your camera into a dark room for proof. It can take useful pictures in environments you can't see. And with some manual control and safety precautions (seriously, don't actually do this) you'll note you can shoot useful photos of things that are very near bright sources like the sun where your eyes would be completely useless (and irrevocably damaged, again do not try this).

What you're complaining about isn't dynamic range, it's exposure control. A human brain, again, makes much better decisions about what parts of the environment are "important" when setting the aperture (iris). So the stuff you want to see is visible, where the camera is mostly just going to guess that the center of the frame is what you want and will routinely leave stuff over or underexposed.

Your last paragraph has the correct analysis but the wrong conclusion. It's the vision processing that makes the difference. The optical systems of a semiconductor camera absolutely are better, so a control system based on optics can absolutely be better.

pedrocr
You're conflating dynamic range and exposure control. Your examples of dark rooms and bright light sources are examples of great exposure control, not dynamic range. Dynamic range is the difference in light intensity from pure black to pure white in your sensor. Modern sensors are also extremely good at dynamic range though, much better than film and apparently much better than our eyes as well. From a quick search, our eyes are at 10 stops of range (a 2^10 ratio from white to black) and modern sensors have gone past 15 stops and are apparently pushing 20, which is just amazing.
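For anyone unfamiliar with the unit being thrown around here: a "stop" is a doubling of light intensity, so the stop figures above convert to contrast ratios as simple powers of two.

```python
import math

def stops_to_ratio(stops):
    """n stops of dynamic range = a 2^n brightest-to-darkest intensity ratio."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Inverse conversion: a contrast ratio back to stops."""
    return math.log2(ratio)

print(stops_to_ratio(10))         # eye estimate above: 1024:1
print(stops_to_ratio(20))         # high-end sensor: 1048576:1
print(ratio_to_stops(1_048_576))  # and back: 20.0 stops
```

So the gap between 10 and 20 stops is not a factor of two; it is a factor of about a thousand in brightest-to-darkest ratio.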
naikrovek
Dynamic range is the ability to see things in both direct sunlight AND in dark shadows at the same time. You need to be able to see pedestrians that are both directly lit by the sun and in the darkest shadows in the same frame of video.

No long exposures, no multiple exposures with varying exposure -- a single frame. I don't know of any 24-32-bit per channel sensors, and that's what you'd need.

We agree on vision processing. I thought I made it clear that it's a strong "backend" to vision (the brain or compute behind it) that makes it work so well, and that will be the case with good self-driving cars as well. I must have misstated something.

ajross
You're fooling yourself if you think your eyes can do that. They can't. More to the point, your brain is fooling you, by synthesizing a perception of a reality, the same way it does to make you think the stuff in your peripheral vision is as sharp as the things in the middle.

Vertebrate eyes don't have 24 bits of sensitivity, that's just insane. Rod cells are neurons, they either fire or they don't, can fire at most at about 10 Hz, and you have about a million of them at most across your whole field of vision. Do the math. What you say isn't even physically possible.
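Doing that math explicitly with the comment's own figures (which are rough; real retinal cell counts and firing behavior are more complicated than this) gives an upper bound on the raw data rate the rods could deliver:

```python
# Upper bound on raw rod-cell data rate, using the numbers quoted above.
rod_count = 1_000_000    # "about a million of them ... across your whole field of vision"
fire_rate_hz = 10        # "can fire at most at about 10 Hz"
bits_per_fire = 1        # binary: fire or don't fire

bits_per_second = rod_count * fire_rate_hz * bits_per_fire
print(f"{bits_per_second / 1e6:.0f} Mbit/s raw upper bound")
```

On those assumptions the raw channel is on the order of 10 Mbit/s, nowhere near enough for 24-bit-per-channel perception across the whole visual field.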

naikrovek
You're correct that the brain fills in a lot that isn't there, making you think your vision is a lot better than it is. It doesn't change much, though, mostly it fills in gaps.

10Hz? Simply not true. Fighter pilots can identify aircraft shown on a screen for 1/250th of one second. Regular schmoes can see the difference between 30Hz and 60Hz video easily.

I am not arguing about pixel count of a camera vs the center of vision of human eyes.

I'm saying that... Nevermind. I've already said it several times and apparently you have a PhD in all things vision.

Robotbeat
Being able to recognize aircraft on a screen for 1/250th of a second just means the human eye/brain has persistence, not actual useful 250Hz bandwidth.
pedrocr
> I don't know of any 24-32-bit per channel sensors, and that's what you'd need.

Dynamic range doesn't require bit depth. You can have an 8 bit sensor with 20 stops of dynamic range. You'll lose color/luminance resolution of course but as long as 255 is capturing a light that's 2^20 times stronger than 0 that's 20 stops of dynamic range.

But more than that theoretical point there are already sensors pushing well beyond what humans can do in terms of dynamic range. Apparently humans have a respectable 10 stops and sensors are already in the 15 to 20 stop range and pushing beyond it:

https://www.eoshd.com/2014/11/new-sony-sensor-21-stops-dynam...

To map it into 16bit values you can just use a curve to distribute the bit depth unevenly across the dynamic range. Older DSLRs did that to get perfectly usable images with just 10 bits per channel.
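The bit-depth-versus-range point can be demonstrated with a toy logarithmic transfer curve. This is a sketch of the general idea, not any real sensor's actual encoding:

```python
import math

STOPS = 20       # target dynamic range: a 2^20 brightest-to-darkest ratio
MAX_CODE = 255   # 8-bit output code values

def encode(intensity, max_intensity=2 ** STOPS):
    """Map linear intensity in [1, 2^20] to an 8-bit code logarithmically,
    trading per-stop luminance resolution for total captured range."""
    intensity = max(1.0, min(intensity, max_intensity))
    return round(math.log2(intensity) / STOPS * MAX_CODE)

print(encode(1))        # darkest level -> 0
print(encode(2 ** 10))  # 10 stops up   -> 128 (mid-scale)
print(encode(2 ** 20))  # brightest     -> 255
```

Each of the 20 stops gets only about 13 of the 256 code values, which is exactly the resolution-for-range trade described above.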

DeonPenny
It was great that he included an explanation of how Tesla drives in snow and rain, since it's hard to understand how a car can do that.
ibeckermayer
> People drive vision only.

No we don't. We have vision which we then interpret with our _minds_. We have the ability to _understand_ what we are seeing, reason from it, and make decisions accordingly. "Deep Learning", no matter how big your dataset, fundamentally cannot possess this capability. Which goes a long way toward explaining why Tesla's "self driving" tech keeps on killing people in situations that are trivially simple for humans. There's no way to rigorously test for edge cases in opaque deep learning algorithms, nor is there any way to implement redundancy as there is in ordinary deterministic systems.

This is a mad attempt to replace human cognition with "AI".

lojack
I question how much of our driving is logic and analytics based as opposed to instinct and reaction based.
paulrosenzweig
> "Deep Learning", no matter how big your dataset, fundamentally cannot possess this capability.

Why not? I’d agree that current models can’t, but what’s the fundamental shortcoming of deep learning?

ibeckermayer
The fundamental shortcoming is that there can be no understanding, and therefore no judgement, because there is no mind. See Searle's [in]famous Chinese Room Argument (https://plato.stanford.edu/entries/chinese-room/). Or for a more concrete example from today: https://twitter.com/mit_csail/status/1121447605412741120?s=2...
sytelus
Karpathy is great at presenting executive-101. However, self-driving is much, much more than image classification. You have a time component; you need to integrate information as it comes in into a meaningful state (aka SLAM) and make sequential decisions that have the potential to kill people. This is still an unsolved research problem, unlike puny ImageNet classification. Karpathy could have easily spent an entire day talking about the current state of the art and how they have improved on it, but instead we got Deep Learning 101. I hope he didn't intentionally use this tactic to avoid talking about the elephant in the room. It seems they are hell-bent on vision-based end-to-end learning, which the vast majority of experts would agree is far away in the future. So this whole thing is quite bold, and if they really make this happen in 1yr3mo then I'd say you should dump your entire 401K into buying Tesla stock. Collecting data is actually the more trivial part of this problem.
p1esk
This is still unsolved research problem

Luckily, Karpathy is pretty good at solving research problems.

Karpathy could have easily spent entire day talking about current state of the art and how they have improved on it

Why would he do that? To help Uber?

Collecting data is actually more trivial part of this problem.

For Tesla, yes. For all others that’s the hardest part.

Upvoter33
> Luckily, Karpathy is pretty good at solving research problems.

Oh come on - there are a lot of people good at solving research problems, but hard problems remain unsolved. To pretend driving autonomously in the real world is anything but very very hard is naive.

eanzenberg
Maybe when Tesla cars stop driving into medians at full speed...
adrianmonk
When they say that collecting data is the trivial part, they mean that (in comparison to the other parts of the problem) it is easy for everyone to solve. In other words, they are saying Tesla has an advantage in an area that doesn't matter.
tomp
It’s “easy” in the sense that reusable rockets are “easy” - they’re possible, but no one else is working on them.

Musk was way ahead of time, again.

codesternews
He is a really confident guy for a person who came from tech.
alyx
Good overview, I share your opinion.

Tesla is betting BIG on NNs.

In the presentation they say that we have NNs in our brain. Which is why we can look at one photo of a dog, and distill enough identifying information in order to spot the breed from then on. Training sample of 1.

Curiously though, they then say that NNs don't work like our brains do, and require A LOT of training data from all angles, etc. Training sample of N.

The presentation doesn't actually show, how they plan to train a single NN to encompass all the knowledge for self driving. In particular, the looooooong tail. But maybe they'll have independent models, say based on "observed" conditions, which swap in and out ... based on more NN logic.

But at the end of the day, there are no examples (known to me) of NNs even approaching human ability. Tesla is basically hoping we will believe they will be the first to show this capability of NNs.

Let's wait and see, 2020 is not that far away.

cocochanel
Why do we usually assume that we have NNs in our brain? Let's take the example of the iguana given by Karpathy – despite seeing the animal for the first time, I am able to recognize it in subsequent instances. He goes on to explain that our neurons do the pattern recognition in the visual cortex, which is the standard way to brush off that assumption and continue training the models. I am still not convinced that NNs are the solution for self-driving.
threeseed
Going to guess that you're new to this.

So deep learning via neural networks is pretty much the only way to implement self driving cars these days. It's what everyone is using and is used also in FaceID and other recognition systems.

So Tesla isn't betting big on it any differently than all of the other self-driving companies. We wouldn't say Facebook is betting big on PHP just because they use that technology. And Tesla definitely isn't in the top tier compared to, say, Waymo.

Plus talking about neural networks approaching human ability is just marketing buzzwords. It doesn't mean anything tangible since we can't define our abilities in terms of models with hyperparameters.

monk_e_boy
> require A LOT of training data from all angles

Also, we learn things outside of our cars. I ride a bike, this gives me insight into what bike riders will do on the road.

I have kids, so I know a kid on the pavement will be more risky / unpredictable than an adult.

I drove down a single track road today and met another car coming towards me. We stopped, then he started to reverse, there was a passing place behind him. Another car pulled up behind him, so I then reversed back into a field to let the cars past. This sort of thing happened maybe six times today as I drove around. Not at all surprising, not even noteworthy enough for any of the drivers to acknowledge even a flick of the finger in thanks. Just part of driving here.

dagw
It would be interesting to see if kids who spent a lot of time riding bikes or even just spent a lot of time in cars can learn how to drive well quicker than other kids.
xkcd-sucks
Hmm maybe the problem is actually interacting with humans naturally, on the road, so self-driving is really human-style general AI
threeseed
This is actually the aspect that really doesn't get discussed enough. Imagine these scenarios:

a) Roadworks and a guy tells you to stop or waves you through. Or asks you to drive slowly. Or points to a direction that he wants you to go.

b) Woman on the side of the road is waving their arms about because her small child is crawling on the road trying to chase a toy.

If self-driving cars can't look another person in the eye and identify their intent, then there are going to be a lot of unforeseen problems.

TimTheTinker
Cars and humans inhabit mutual space to such an extent that perhaps level-5 self-driving wouldn't be enough. What we need might be a general AI that can comprehend its surroundings and connect them to a symbolic hierarchy of intentions well enough to obey Asimov's 3 laws.
zaroth
The more interesting question is not whether there will be corner cases where self-driving fails, but what happens when its average performance is so much better than humans' that AI saves tens if not hundreds of lives every day, yet the driving software is still actively involved in the deaths of a handful of different people each day or week.

Do you deploy that build or not?

monk_e_boy
For sure deploy if the numbers are right. It's the edge cases that are interesting.

Cameras are fine in the day, but when I struggle to see at night, in the rain, with spray from cars, glare from oncoming traffic... that's where I would LOVE to have a lidar-assisted heads-up display. How Tesla think they can do better with just cameras compared to another car with cameras AND lidar just baffles the mind.

sliken
Not better in absolute terms, but better in terms of the best car for a certain price. Sure, a $200k car with redundant LIDAR and horribly inefficient aerodynamics because of a giant sphere on top (like Waymo) might be better than a $50k car with just cameras.

But the hard part is the software, and it's unclear that the added complexity of lidar + camera will allow the software to be better than just cameras.

As an example a recent study at Cornell shows that a stereo pair of cameras mounted high up behind the windshield provided similar quality 3d data to LIDAR. Search for "Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving" or the more consumer friendly "Elon Musk Was Right: Cheap Cameras Could Replace Lidar on Self-Driving Cars, Researchers Find".
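The core step in that paper can be sketched in a few lines: back-project each pixel of a predicted depth map through pinhole camera intrinsics to get a "pseudo point cloud" (the intrinsics and depth values here are illustrative, not the paper's or Tesla's actual pipeline):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (H, W) into an (N, 3) point
    cloud using pinhole intrinsics (fx, fy: focal lengths; cx, cy:
    principal point). This is the basic 'pseudo-LiDAR' conversion."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # right of camera axis
    y = (v - cy) * depth / fy                       # below camera axis
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Once the depth map is in point-cloud form, existing LIDAR-based 3D detectors can be run on it mostly unchanged, which is the paper's main trick.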

Seems much easier to work on a stereo video stream that includes 2D, color, and extracted 3D features than trying to achieve sensor fusion with lidar + video. Especially if you want to make full use of color for things like brake lights, traffic signals, color/markings of an ambulance, school bus, lane markings, and other important details that are invisible to LIDAR.

Also consider: if the weather is so terrible that camera vision only supports 5 mph, but LIDAR can see 200 feet, do you let the car drive at 50 mph? For social reasons alone, it seems like driving as much like a human as possible is going to be best until all cars are autonomous.

username223
This is why I think the solution will be something like "SDV lanes" -- separate lanes for robots on the highway, like bus lanes. Current roads are designed for humans communicating with each other, and if we wait for general AI to implement self-driving cars, we'll be waiting a long time.
RaceWon
> Hmm maybe the problem is actually interacting with humans naturally

Almost every driver has a rhythm (or pattern, if you will) to the way they drive; we pick up on this, and interact with them accordingly on the road. Even most bad drivers have a pattern to the way they drive. The more experienced you are, the easier it is to "read" fellow motorists and safely interact with them. One of the most difficult scenarios is when you come upon someone who drives with No rhythm or pattern whatsoever. You have no clue what that person is Likely (emphasis on likely) to do next.

I have no experience sharing the road with autonomous vehicles, but I suspect they will have No rhythm to the way they drive, or rather none that is discernible to a human. IMO this will be The major obstacle to overcome if humans and bots are to share the Same road at the same time.

Nov 17, 2017 · Fjolsvith on Tesla Semi
Elon says that the truck will be charged by the end of the 30 minute DOT required break (14:06 into the livestream: https://livestream.tesla.com/).
s0rce
Is that possible? Assuming 500 miles of range at 2 kWh a mile, that's a 1000 kWh battery; charged over 30 minutes, that's 2 megawatts, which at supercharger voltage (480 V) requires over 4000 amps...
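Back-of-envelope, the arithmetic works out as follows (the 2 kWh/mi consumption and 480 V pack voltage are the assumptions from the comment, not confirmed Tesla Semi specs):

```python
# 500 mi range at ~2 kWh/mi, filled during a 30-minute break,
# delivered at an assumed 480 V (today's supercharger voltage).
battery_kwh = 500 * 2              # 1000 kWh pack
power_kw = battery_kwh / 0.5       # 2000 kW = 2 MW to fill in half an hour
amps = power_kw * 1000 / 480       # current needed at 480 V
print(f"{power_kw / 1000:.0f} MW, {amps:.0f} A")  # → 2 MW, 4167 A
```

So "4000 amps" is if anything slightly optimistic; at supercharger voltage you'd need a bit more, which is why a higher-voltage megacharger makes sense.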
brianwawok
Google says your average powerline carries 10k amps.. so it seems within reason, but it would require some power upgrades ;)
Faaak
Power = amps × voltage. So no. They are 10 kA × 3 kV (well, depends on the line), so that gives us 30 MW.
SigmundA
That's with a megacharger, not a supercharger, and there are no megachargers deployed anywhere yet. Megachargers are 1600 kilowatts vs 120 kilowatts for a supercharger.

A charging station with 10 megachargers going at the same time will draw as much power as a small city of say 10,000 homes.
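As a sanity check on the "small city" comparison (the ~1.6 kW average continuous household draw is an illustrative round figure, not an official statistic):

```python
megacharger_kw = 1600               # per-stall megacharger power (from above)
station_kw = 10 * megacharger_kw    # 16,000 kW = 16 MW for a 10-stall station
avg_home_kw = 1.6                   # assumed average continuous home draw
homes = station_kw / avg_home_kw
print(f"{station_kw / 1000:.0f} MW ~ {homes:,.0f} homes")  # → 16 MW ~ 10,000 homes
```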

DKnoll
A municipality with only 10,000 homes isn't even approaching city status; it's more like a small town... but I get your point.

https://en.wikipedia.org/wiki/Settlement_hierarchy

Nov 17, 2017 · icc97 on Tesla Semi
> If you check out the reveal, you'll see that it doesn't have those ridiculous covers

In the reveal the first truck does have the covers over the wheels, the second one doesn't. Check the livestream at about 3:00 [0]

[0]: https://livestream.tesla.com/

Nov 17, 2017 · tim-- on Tesla Semi
The poor guys are self-DDoS'ing themselves.

They keep requesting https://livestream.tesla.com/haveReservationsStarted every 30 seconds. Most of the requests are failing with HTTP 502 errors.

un_montagnard
It's actually every 10 seconds.
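This lockstep fixed-interval polling is exactly what randomized backoff is meant to prevent. A hypothetical client-side sketch (the endpoint is from the comment above; everything else is illustrative, not Tesla's actual code):

```python
import random
import time

def poll_with_jitter(fetch, sleep=time.sleep, base=10.0, max_backoff=300.0):
    """Poll `fetch` until it returns True, doubling the wait on errors
    and jittering every interval so thousands of clients don't all hit
    the server at the same instant."""
    delay = base
    while True:
        try:
            if fetch():              # e.g. GET /haveReservationsStarted
                return
            delay = base             # server answered fine: reset interval
        except Exception:            # e.g. a 502: back off exponentially
            delay = min(delay * 2, max_backoff)
        sleep(delay * random.uniform(0.5, 1.5))  # +/-50% jitter
```

With jitter alone the request load spreads evenly over the interval instead of arriving in 10-second spikes; the backoff keeps failing clients from hammering an already overloaded server.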
flukus
> self-DDoS'ing themselves.

Should have let the Department of Redundancy Department do their capacity planning.

acchow
Why are they hosting their own streaming service instead of using YouTube?
un_montagnard
They are not. They are using ustream. But they are ddosing the server hosting the page with the player.
azhenley
Elon plans to go to Mars, surely he can handle video streaming...
muthdra
He clearly couldn't handle the slide presentation in a SpaceX video I watched some time ago. The irony made me chuckle.
mandeepj
These are two very different things
username223
Yes. One requires attention to a huge number of details, to be certain that many systems work together flawlessly. The other, you can just reload if it screws up.
sangnoir
I don't think you can reload a rocket ;)
aphextron
It's Drupal...
xiphias
That's the only thing that never works when the first stage is landing on the ship
PakG1
YouTube may not work on Mars, they need to have this as a core competency, develop it now.
nodesocket
Looks like they're using Akamai.

a23-56-119-116.deploy.static.akamaitechnologies.com

Fail: running an old version of nginx (1.10.2) with no HTTP/2 support enabled.
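For reference, on an nginx new enough to support it (1.9.5+), enabling HTTP/2 is a one-token change on the listen directive, assuming TLS is already configured (the server block here is a generic sketch, not Tesla's actual config):

```nginx
server {
    listen 443 ssl http2;             # 'http2' enables HTTP/2
    server_name livestream.tesla.com;
    # ...existing ssl_certificate and proxy settings unchanged...
}
```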

arcbyte
Keep that HTTP2 garbage away!
nodesocket
What, did HTTP2 hold you and your family at gunpoint?
tim--
It's probably Tesla running nginx, not akamai.
jakebasile
I really wish they'd just use Twitch. Twitch can handle tens of thousands of viewers on esports matches. I'm sure it can handle some thousands of nerds wanting to look at trucks.
karmapolic
shitty tools team.
interfixus
More people need to start hosting their own streaming service instead of automatically relying on Google Almighty to take care of life for them.

Kudos to Tesla for trying.

rtpg
Why though?

I need to stream something. I'm a car company. What's the point of making extra work for myself?

Greater control, perhaps, but for a press conference do I need this?

interfixus
Tesla is not a car company.

No, you do not need it for a press conference. But you earn credit points from me when I see you trying. Google is evil, remember?

nikofeyn
> Tesla is not a car company.

what does this even mean?

interfixus
I did emphasize the 'a', right? They are not a car company; not just any car company; not like any other car company on Earth (or Mars, come the day). Thank goodness, this shows in more or less all aspects of their marketing and PR.

Also, as someone says below, they're a battery company.

sangnoir
Does being a battery company make them more adept at streaming? gp's point was that streaming is not a Tesla core competency, not that they are particularly bad at it because they are a car company.
et1337
Tesla is a battery company.
nikofeyn
doesn’t panasonic make tesla’s batteries? even if tesla does make their own batteries, their primary product is automobiles. are other car makers engine companies? (of course some car companies do sell their engines to other car companies, but i think the point i am making is still clear.)
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.