HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Tesla AI Day 2021

Tesla · Youtube · 71 HN points · 23 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Tesla's video "Tesla AI Day 2021".
Youtube Summary
Join us to build the future of AI → https://www.tesla.com/ai
0:00 - Pre-event
46:54 - AI Day Begins
48:44 - Tesla Vision
1:13:12 - Planning and Control
1:24:35 - Manual Labeling
1:28:11 - Auto Labeling
1:35:15 - Simulation
1:42:10 - Hardware Integration
1:45:40 - Dojo
2:05:14 - Tesla Bot
2:12:59 - Q&A
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
The Tesla AI day videos [1] go into some detail about this. They use multiple networks that are dedicated to specific tasks.

[1]https://www.youtube.com/watch?v=j0z4FweCy4M (2021), https://www.youtube.com/watch?v=ODSJsviD_SU (2022)
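
The "multiple networks dedicated to specific tasks" pattern usually means a shared backbone with small task-specific heads. Below is a minimal, purely illustrative sketch: all layer sizes, task names, and weights are invented, not Tesla's.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared backbone: 1024-dim camera features -> 256-dim embedding,
# computed once per frame.
W_trunk = rng.normal(scale=0.02, size=(1024, 256))

# Task-specific heads branching off the shared embedding.
W_lanes = rng.normal(scale=0.02, size=(256, 8))   # lane geometry
W_objs  = rng.normal(scale=0.02, size=(256, 20))  # object detection
W_signs = rng.normal(scale=0.02, size=(256, 5))   # traffic signs

features = rng.normal(size=(1, 1024))             # one camera frame
embedding = relu(features @ W_trunk)              # shared computation

outputs = {
    "lanes": embedding @ W_lanes,
    "objects": embedding @ W_objs,
    "signs": embedding @ W_signs,
}
print({k: v.shape for k, v in outputs.items()})
```

The point of the design is that the expensive trunk runs once and each task only pays for its own small head.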

Waymo has superior performance based on their historical statistics. It makes sense, since their lidar sensors capture more of the environment, and directly in 3D. Their AI also seems better QA’ed.

The Tesla AI Day[0] surprised me, as it showed they had only a simple architecture for a very long time, simply feeding barely processed camera pixels to a DNN and hoping for the best, with little more than supervised learning off human feeds. Their big claim to glory was that they rearchitected it to produce a 2D map of the environment… which I thought they had years ago, and which is still a far cry from the 3D modeling that is needed.

After all, sure, we humans only take two video feeds as input… But we can appreciate from it the position, intent, and trajectory of a wealth of elements around us, with reasonable probability estimates for multiple possibilities, sometimes pertaining to things that are invisible, such as kids crossing from nowhere when near a school.

Cruise also seems to have better tech; they gave a barely-watched 2.5-hour description of their systems[1] which shows they do create a richer environment model, evaluate the routing of many objects, and train their systems on a very realistic simulation, not just supervised training, which means it can learn from very low-probability events. They have a whole segment on including the probability that unseen cars may travel from perpendicular roads; Tesla's hit-or-miss creeping is well documented on YouTube.

[0]: https://www.youtube.com/watch?v=j0z4FweCy4M

[1]: https://www.youtube.com/watch?v=uJWN0K26NxQ

giantrobot
> After all, sure, we humans only take two video feeds as input…

No, we use both our perception and proprioception when we drive, or walk for that matter. We have two acute visual sensors mounted in a housing with six degrees of freedom and a sensor feedback mechanism.

We also have fairly sensitive motion and momentum sensors all throughout our bodies. Additionally we have audio sensors feeding into our positional model.

All this data is fed into an advanced computing system that can trivially separate out sensor noise from useful signal. The computing system controls the vehicle, navigates, and even performs low priority background tasks like wondering if the front door of the house was locked.

We have dozens of integrated sensor inputs. It's just silly to assume we only use our eyes when driving. They're certainly an important sensor for driving but definitely not the only one.

steveBK123
All the vision-only maximalists also ignore how insufficient Tesla cameras are compared to your human eyes in terms of resolution, dynamic range and refresh rates.

If you ever review your Tesla Dashcam footage you'll see that you can rarely even make out a license plate. The cameras are not even HD, let alone UHD.

Further, the refresh rate is a paltry 36fps.

Drive the car at high noon or at night and you realize how poor the dynamic range is as highlights and/or shadows get lost.

https://nitishpuri.github.io/posts/books/a-visual-proof-that...

This is a rather intuitive mathematical proof that neural nets can emulate all functions in many dimensions. Knowing integrals (Calc 102) would help.

With this information it’s either that self-driving is eventually possible or human brains are not mathematically representable (i.e. something akin to a soul).
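
As a hedged numeric companion to the linked visual proof: a single hidden layer of steep sigmoids can form near-rectangular "bumps", and a sum of bumps approximates an arbitrary 1-D function. All parameters below are hand-picked for illustration, not learned.

```python
import numpy as np

def sigmoid(x):
    # Clip to avoid overflow warnings in np.exp for very steep inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

def bump(x, left, right, height, steep=2000.0):
    # A pair of shifted steep sigmoids approximates a rectangle.
    return height * (sigmoid(steep * (x - left)) - sigmoid(steep * (x - right)))

def approx(x, target, n_bumps=50):
    # Tile [0, 1] with bumps whose heights sample the target function.
    edges = np.linspace(0.0, 1.0, n_bumps + 1)
    y = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        y += bump(x, lo, hi, target((lo + hi) / 2))
    return y

target = lambda t: np.sin(6 * t) + 0.3 * t
x = np.linspace(0.05, 0.95, 500)
err = np.max(np.abs(approx(x, target) - target(x)))
print(err)  # small, and it shrinks as n_bumps grows
```

This only shows representational capacity in one dimension; it says nothing about whether such a function can be found by training, which is the harder part.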

Meanwhile, the data collection/storage is getting better (all Teslas are continually testing FSD and reporting failure cases, and synthetic generation of “good and unique” perfectly labeled data[0] is only a year or two “old”). Sensors/cameras are improving (and have a lot more room to improve). The compute hardware is getting better and cheaper (analog computers[1], TSMC 2nm and beyond, Apple/Google/Tesla's neural cores, etc).

Additionally, given Waymo is already in production without humans in PHX it’s arguable that we already have “autonomous vehicles”. We now just need them to improve slightly and for Waymo or similar to test in the top 25 cities in the US. Which is mostly a matter of capital and time not technology limits. Maybe weather will be an issue for a few more years/decades - but even if Waymo only operated in Summer, I’d be happy to reduce my Lyft costs - and with enough data and maybe better sensors the neural nets will be able to handle weather too. Especially if the car just goes slow.

[0] https://youtu.be/j0z4FweCy4M

[1] https://youtu.be/GVsUOuSjvcg

[2] https://www.cnbc.com/2022/01/08/heres-what-it-was-like-to-ri...

simion314
>This is a rather intuitive mathematical proof that neural nets can emulate all functions in many dimensions.

Yes, but the problem is we don't know what function we want to approximate. So it is like interpolation: you give it some points and you get an approximation that is good enough around those points, but probably very wrong if you are far away from those points.

We don't even know how many layers and neurons are needed, so it seems to be a brute-force approach: throw more data and more hardware at it, and when stuff goes wrong, maybe throw more data. But one day a Tesla might end up in a different environment, maybe where people wear different hats, and it goes to shit because those hats were not in the training data. Unlike in other sciences, we don't have bounded errors.
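
The interpolation-vs-extrapolation point can be made concrete with a toy fit. Here a high-degree polynomial stands in for an over-parameterized model; the function, range, and degree are all invented for illustration.

```python
import numpy as np

# Fit a flexible model on a narrow slice of a function.
x_train = np.linspace(0.0, 3.0, 200)
y_train = np.sin(x_train)
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# Inside the training range the approximation is excellent...
in_range_err = np.max(np.abs(model(x_train) - y_train))

# ...but several units outside it, the same model is wildly wrong.
out_of_range_err = abs(model(9.0) - np.sin(9.0))

print(in_range_err, out_of_range_err)
```

Nothing in the fit signals that the model is unreliable at 9.0; without bounded errors you only find out when you query there, which is the commenter's worry about unseen driving situations.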

They use three forward facing cameras, actually. And they do get a 3D representation.

https://mobile.twitter.com/sendmcjak/status/1412607475879137...

https://youtu.be/j0z4FweCy4M?t=3780

eightails
Sure, but they're not getting that 3D map from binocular vision. The forward camera sensors are within a few mm of each other and have different focal lengths.

And the tweet thread you linked confirms it's a ML depth map:

> Well, the cars actually have a depth perceiving net inside indeed.

My speculation was that a binocular system might be less prone to error than the current net.

leobg
Sure. You're suggesting that Tesla could get depth perception by placing two identical cameras several inches apart from each other, with an overlapping field of view.

I'm just wondering if using cameras that are close to each other, but use different focal lengths, doesn't give the same results.

It seems to me that this is how modern phones are doing background removal: The lenses are very close to each other, very unlike the human eye. But they have different focal lengths, so depth can be estimated based on the diff between the images caused by the different focal lengths.

Also, wouldn't turning a multitude of views into a 3D map require a neural net anyway?

Whether the images differ because of different focal lengths or because of different positions seems to be essentially the same training task. In both cases, the model needs to learn "This difference in those two images means this depth".

I think with the human eye, we do the same thing. That's why some optical illusions work that confuse your perception of which objects are in front and which are in the back.

And those illusions work even though humans actually have an advantage over cheap fixed-focus cameras, in that focusing the lens on the object itself gives an indication of the object's distance. Much like you could use a DSLR as a measuring device by focusing on the object and then checking the distance markers on the lens's focus ring. Tesla doesn't have that advantage. They have to compare two "flat" images.

eightails
> I'm just wondering if using cameras that are close to each other, but use different focal lengths, doesn't give the same results

I can see why it might seem that way intuitively, but different focal lengths won't give any additional information about depth, just the potential for more detail. If no other parameters change, an increase in focal length is effectively the same as just cropping in from a wider FOV. Other things like depth of field will only change if e.g. the distance between the subject and camera is changed as well.

The additional depth information provided by binocular vision comes from parallax [0].

> Also, wouldn't turning a multitude of views into a 3D map require a neural net anyway?

Not necessarily, you can just use geometry [1]. Stereo vision algorithms have been around since the 80s or earlier [2]. That said, machine learning also works and is probably much faster. Either way the results should in theory be superior to monocular depth perception through ML, since additional information is being provided.
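
The pure-geometry route boils down to triangulating disparity: for a rectified pair of identical, parallel cameras, depth follows directly from Z = f·B/d with no learning involved. A minimal sketch (all numbers invented):

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return focal_length_px * baseline_m / disparity_px

# Example: 1000 px focal length, 12 cm baseline, 8 px disparity
# puts the point 15 m away.
z = depth_from_disparity(8.0, 1000.0, 0.12)
print(z)  # 15.0
```

The formula also shows why baseline matters: halve B and the same depth produces half the disparity, so measurement noise hurts twice as much, which is the concern with closely spaced cameras.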

> It seems to me that this is how modern phones are doing background removal: The lenses are very close to each other, very unlike the human eye. But they have different focal lengths, so depth can be estimated based on the diff between the images caused by the different focal lengths.

Like I said, there isn't any difference when changing focal length other than 'zooming'. There's no further depth information to get, except for a tiny parallax difference I suppose.

Emulation of background blur can certainly be done with just one camera through ML, and I assume this is the standard way of doing things although implementations probably vary. Some phones also use time-of-flight sensors, and Google uses a specialised kind of AF photosite to assist their single sensor -- again, taking advantage of parallax [3]. Unfortunately I don't think the Tesla sensors have any such PDAF pixels.

This is also why portrait modes often get small things wrong, and don't blur certain objects (e.g. hair) properly. Obviously such mistakes are acceptable in a phone camera, less so in an autonomous car.

> And those illusions work even though humans actually have an advantage over cheap fixed-focus cameras, in that focusing the lens on the object itself gives an indication of the object's distance

If you're referring to differences in depth of field when comparing a near vs far focus plane, yeah that information certainly can be used to aid depth perception. Panasonic does this with their DFD (depth-from-defocus) system [4]. As you say though, not practical for Tesla cameras.

[0] https://en.wikipedia.org/wiki/Binocular_disparity [1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36... [2] https://www.ri.cmu.edu/pub_files/pub3/lucas_bruce_d_1981_2/l... [3] https://ai.googleblog.com/2017/10/portrait-mode-on-pixel-2-a... [4] https://www.dpreview.com/articles/0171197083/coming-into-foc...

leobg
Wow. Ok. I did not know that. I thought that there is depth information embedded in the diff between the images taken at different focal lengths.

I'm still wondering. As a photographer, you learn that you always want to use a focal length of 50mm+ for portraits. Otherwise, the face will look distorted. And even a non-photographer can often intuitively tell a professional photo from an iPhone selfie. The wider angle of the iPhone selfie lens changes the geometry of the face. It is very subtle. But if you took both images and overlaid them, you would see that there are differences.

But, of course, I'm overlooking something here. Because if you take the same portrait at 50mm and with, say, 20mm, it's not just the focal length of the camera that differs. What also differs is the position of each camera. The 50mm camera will be positioned further away from the subject, whereas the 20mm camera has to be positioned much closer to achieve the same "shot".

So while there are differences in the geometry of the picture, these are there not because of the difference in the lenses being used, but because of the difference in the camera-subject distance.

So now I'm wondering, too, why Tesla decided against stereo vision.

It does seem, though, that they are getting that depth information through other means:

Tesla 3D point cloud: https://www.youtube.com/watch?v=YKtCD7F0Ih4

Tesla 3D depth perception: https://twitter.com/sendmcjak/status/1412607475879137280?s=6...

Tesla 3D scene reconstruction: https://twitter.com/tesla/status/1120815737654767616

Perhaps it helps that the vehicle moves? That is, after all, very close to having the same scene photographed by cameras positioned at different distances. Only that Tesla uses the same camera, but has it moving.

Also, among the front-facing cameras, the two outermost are at least a few centimeters apart. I haven't measured it, but it looks like a distance not unlike between a human's eyes [0]. Maybe that's already enough?

[0] https://www.notateslaapp.com/images/news/2022/camera-housing...

eightails
> But, of course, I'm overlooking something here. Because if you take the same portrait at 50mm and with, say, 20mm, it's not just the focal length of the camera that differs. What also differs is the position of each camera. The 50mm camera will be positioned further away from the subject, whereas the 20mm camera has to be positioned much closer to achieve the same "shot".

Yep, totally.

> Perhaps it helps that the vehicle moves? That is, after all, very close to having the same scene photographed by cameras positioned at different distances.

I think you're right, they must be taking advantage of this to get the kind of results they are getting. That point cloud footage is impressive, it's hard to imagine getting that kind of detail and accuracy just from individual 2d stills.

Maybe this also gives some insight into the situations where the system seems to struggle. When moving forward in a straight line, objects in the periphery will shift noticeably in relative size, position and orientation within the frame, whereas objects directly in front will only change in size, not position or orientation. You can see this effect just by moving your head back and forth.

So it might be that the net has less information to go on when considering stationary objects directly in or slightly adjacent to the vehicle's path -- which seems to be one of the scenarios where it makes mistakes in the real world, e.g. with stationary emergency vehicles. I'm just speculating here though.

> Also, among the front-facing cameras, the two outermost are at least a few centimeters apart. I haven't measured it, but it looks like a distance not unlike between a human's eyes [0]. Maybe that's already enough?

Maybe. The distance between the cameras is pretty small from memory, less than in human eyes I would say. It would also only work over a smaller section of the forward view due to the difference in focal length between the cams. I can't help but think that if they really wanted to take advantage of binocular vision, they would have used more optimal hardware. So I guess that implies that the engineers are confident that what they have should be sufficient, one way or another.

bumby
>different focal lengths won't give any additional information about depth, just the potential for more detail.

This is also why some people will have each eye optimized for a different focal length when getting laser eye surgery. When your lens is too stiff from age, it won't provide any additional depth perception, but it will give you more detail at different distances.

Cruise actually considers both social dynamics and uncertainty (i.e. what can hide behind an obstacle, or where pedestrians/bikes/cars are likely to move).

If you are interested in self-driving cars, I can highly recommend their presentation from November 2021:

https://youtu.be/uJWN0K26NxQ?t=1467

For me it felt more convincing than Tesla's (a few months prior):

https://www.youtube.com/watch?v=j0z4FweCy4M

ddalex
Oh, I hadn't heard about Cruise until now. Will follow them, thank you.

> The "images" are part information.

This. It is a mashup of both. I can clearly visualize many 3D objects in my mind, as well as zoom, pan, scale, rotate, recolor, animate, distort them, etc. But it is not purely visual, because I often "see" opposite side of a cube momentarily, which I should not be able to from this perspective.

So, it is not a rendering engine. It is not physically, spatially and temporally stable. Stability is lost especially as your focus shifts (e.g. "seeing" behind objects as your attention slips there)

The best way I know to describe this is by old and new Tesla FSD predictions which are sampled here: https://youtu.be/j0z4FweCy4M?t=3768

The improved version is how people talk about visualization, but the old unstable version is how (at least for me) it mostly is.

Here's Tesla's AI day presentation on how they are going to use AI in their planning system because traditional methods aren't scalable enough: https://www.youtube.com/watch?v=j0z4FweCy4M&t=4392s (1:20:15). You can see hints of this in presentations from the other self-driving companies.

Boston Dynamics was definitely an exception for a long time. Certainly the perception systems that they are adding use ML, but you are right that they expressly use classical methods for their control system.

This is a poor use case for ML. You'd only have a couple hundred examples for a neighborhood, with maybe two dozen variables. The model will also miss key variables that there aren't datasets for but that would be apparent upon a visit. ML only beats humans in very, very narrow scenarios.

https://youtu.be/j0z4FweCy4M?t=9319

I mean in the same event that you list, Elon literally opens with: "Tesla is much more than an automotive company, and we have lots of deep AI activity..." I think it is fair to say that a company that is literally inventing its own AI training supercomputer is probably an AI company as well :)

https://youtu.be/j0z4FweCy4M?t=2833

ctvo
Elon says a lot of things. Elon also brought out a person in a robot costume to dance. I stand by it takes giant leaps to think Tesla is an AI company at this stage and them collecting data doesn’t change that.
I believe Tesla uses something similar for training. https://youtu.be/j0z4FweCy4M?t=5716
For anyone interested, the Monte Carlo Tree Search used for the planning (that the tentacle visualization shows) is described here (from the AI Day video, at 1h21m50s):

https://youtu.be/j0z4FweCy4M?t=4910
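
For readers who don't want to watch the clip, the general shape of Monte Carlo Tree Search can be sketched on a toy problem. The "world" below (a walk on a number line toward a goal) and all constants are invented purely to keep the example self-contained; real planners are far more involved.

```python
import math
import random

GOAL, HORIZON, ACTIONS = 5, 5, (-1, +1)

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}           # action -> Node
        self.visits, self.value = 0, 0.0

def rollout(state, depth):
    # Random playout to the horizon; return negative final distance.
    while depth < HORIZON:
        state += random.choice(ACTIONS)
        depth += 1
    return -abs(state - GOAL)

def mcts(root, iters=2000, c=1.4):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend fully expanded nodes via UCB1.
        while len(node.children) == len(ACTIONS) and node.depth < HORIZON:
            node = max(node.children.values(),
                       key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
            path.append(node)
        # Expansion: add one untried action.
        if node.depth < HORIZON:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            child = Node(node.state + a, node.depth + 1)
            node.children[a] = child
            path.append(child)
            node = child
        # Simulation and backpropagation.
        ret = rollout(node.state, node.depth)
        for n in path:
            n.visits += 1
            n.value += ret
    # Best first action = most-visited child of the root.
    return max(root.children, key=lambda a: root.children[a].visits)

random.seed(0)
best = mcts(Node(0, 0))
print(best)  # +1: step toward the goal
```

The appeal for planning is that the tree concentrates simulation effort on promising trajectories instead of enumerating every action sequence.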

The Q&A section on their compiler and software that the author links to is very interesting:

https://www.youtube.com/watch?v=j0z4FweCy4M&t=8047s

It sounds like they're going to have to write a ton of custom software in order to use this hardware at scale. And, based on the team being speechless when asked a follow-up question, it doesn't sound like they know (yet) how they're going to solve this.

Nvidia gets a lot of credit for their hardware advances, but what really made their chips work so well for deep learning was the huge software stack they created around CUDA.

Underestimating the software investment required has plagued a lot of AI chip startups. It doesn't sound like Tesla is immune to this.

> […] AI has a massive value to every aspect of the company's work.

That's also just wrong. During the recent "Tesla AI Day", when asked during Q/A, Elon Musk specifically mentioned that they intentionally use machine learning only for very few cases:

    Q: "Is Tesla using machine learning within its manufacturing, design
    or any other engineering processes?"
    
    Elon: "I discourage use of machine learning, because it's really
    difficult. Unless you have to use machine learning, don't do it. It's
    usually a red flag when somebody is saying 'We wanna use machine
    learning to solve this task'. I'm like: That sounds like bullshit.
    99.9% of the time you don't need it."
https://www.youtube.com/watch?v=j0z4FweCy4M&t=9307s
In Karpathy’s recent AI day presentation he specifically stated they use transformers.

But not on the raw camera input — they use regnets for that. The transformers come higher up the stack:

https://youtu.be/j0z4FweCy4M

Transformers mentioned on the slide at timestamp 1:00:18.

sillysaurusx
Thank you!
tandem5000
They use the key-value lookup/routing mechanism from Transformers to predict pixel-wise labels in bird's-eye view (lane, car, obstacle, intersection, etc.). The motivation is that some of the predicted regions may temporarily be occluded, so for predicting these occluded areas it helps to attend to remote regions of the input images. That requires long-range dependencies that depend heavily on the input itself (e.g. on whether there is an occlusion), which is exactly where the key-value mechanism excels. I'm not sure they even process past camera frames at this point; they only mention that later in the pipeline they have an LSTM-like NN incorporating past frames (Schmidhuber will be proud!).

Edit: A random observation which just occurred to me is that their predictions seem surprisingly temporally unstable. Observe, for example, the lane layout wildly changing while the car makes a left turn at the intersection (https://youtu.be/j0z4FweCy4M?t=2608). You can use the comma and period keys to step through the video frame-by-frame.
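
The key-value lookup itself is just scaled dot-product cross-attention, which fits in a few lines. Shapes and values below are invented for illustration; this is only the attention arithmetic, not Tesla's network.

```python
import numpy as np

def cross_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_bev, n_img)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                              # (n_bev, d_v)

rng = np.random.default_rng(0)
n_bev, n_img, d, d_v = 4, 16, 8, 8
Q = rng.normal(size=(n_bev, d))    # one query per bird's-eye-view cell
K = rng.normal(size=(n_img, d))    # keys from image features
V = rng.normal(size=(n_img, d_v))  # values from image features

out = cross_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because every BEV query can weight every image location, an occluded cell can pull information from whichever camera region is informative, which is the long-range, input-dependent routing described above.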

They do. Karpathy spoke about it in on Tesla AI day. They use it for transforming image space to a vector space.

See: https://youtu.be/j0z4FweCy4M (timestamp 54.40 onwards)

sillysaurusx
Can you provide a more specific timestamp? 54.40 doesn't seem to mention anything about transformers, and "onwards" is two hours.

I'd be really surprised if they use transformers due to how computationally expensive they are for anything involving vision.

EDIT: Found it. 1h: https://www.youtube.com/watch?v=j0z4FweCy4M?t=1h

Fascinating. I guess transformers are efficient.

dexter89_kp3
I gave the timestamp where they start talking about the problem they are trying to solve using transformers. As you said it is around 1hr mark
Tesla is using maps too. Source: the recent Tesla AI day https://m.youtube.com/watch?v=j0z4FweCy4M
Aug 23, 2021 · gpm on John Carmack visits Starbase
Elon's been giving some lip service to closer-to-GI capabilities anyways, e.g. when describing how it would be optimal to be able to control Tesla's humanoid robot by saying "please go to the store and get me the following groceries"...

https://youtu.be/j0z4FweCy4M?t=7775

Whether Tesla wants to fund serious work towards that may be a different question (but they do seem to be serious about at least building the robot).

Aug 20, 2021 · 2 points, 0 comments · submitted by lukateake
Aug 20, 2021 · 64 points, 63 comments · submitted by jonbaer
rklaehn
Absolutely blown away by dojo and their chip design. It seems that they are on the way to building the most powerful machine learning supercomputer in the world in house.

Where do these chips get made? TSMC?

I would love to know how their new chip compares with the FSD chip in terms of power usage per teraflop.

Also, it seems that what they call "vector space" is really a 2d plane, so how do they deal with 3d scenarios such as parking garages and complex overpasses. They are rare, but they do exist.

The news will of course be all about the humanoid robot and the somewhat silly dancer, giving ammo to the detractors...

raghavkhanna
That was my question too. It seems they have a deal with Samsung, at least for the FSD computer chips[1].

[1] https://electrek.co/2021/05/03/how-tesla-pivoted-avoid-globa....

jeffbee
Blown away by a prototype clamped to a bench, eh? If this thing existed they'd have MLPerf benchmarks to show.
sidibe
Yup, stick around for the Q&A. During the presentation it sounded like it was practically ready. Then someone asked about memory locality when actually putting them together, and the infra guy started saying it's a really hard problem. Finally Elon said "This will be ready next year."

The whole presentation (not just dojo) felt like the kind of presentation you give internally to a VP when you haven't been making much progress. Flood them with challenges and implementation detail decisions they can't follow and don't talk about results.

jeffbee
Other than being hypothetical vaporware, how is it different from a cabinet full of google TPUs?
dragontamer
> The whole presentation felt like the kind of presentation you give internally to a VP when you haven't been making much progress. Flood them with challenges and implementation detail decisions they can't follow and don't talk about results.

That hurts me a bit, because I feel like I've done that sometimes. But that's what makes it easy for me to pick up on, when other people do that sort of stuff.

Good presentations are more direct. Ex: Here are our ResNet-50 benchmark numbers. Or at the very least: "We didn't run ResNet-50, because we don't think it's realistic. Instead, we ran this other benchmark alongside Nvidia A100 GPUs, and we are proving that our system is faster."

----------

You need to prove your system has a benchmark score worth considering before diving into architectural details... at least if you're trying to attract engineering talent to your project.

cogman10
ditto. Was definitely the most impressive part of the presentation (IMO).

Also, did I catch it right? One of those plates is 15kW of power? That's some crazy heat dissipation!

mchusma
Random thoughts:

I thought the fact they have 1,000 data labelers was interesting.

I'm unsure about their Dojo efforts. Seems cool, seems better than Nvidia but not 10x better (hard to tell, it's not apples to apples). For example, their Dojo cluster is supposed to have over an exaflop vs. an A100 SuperPOD at around 700 petaflops (I/O seems to be over 2x better, which I understand conceptually but not in terms of real-world implications). By the time this gets production-ready (next year), it seems possible Nvidia will have a replacement that is about as good or better.

The network architecture was not using video before; it was using 2-3 frames, which seemed obviously not going to work, so it seems better now.

The updated network architecture plus the fact they need to train with a supercomputer to me means they will almost certainly need a new inference chip. They didn't announce a v3 of their FSD computer, but that is probably the physical component blocking FSD rollout.

My take on the timeline for actual compete-everywhere self-driving (robotaxi):

- Dojo isn't even in production until next year, so no earlier than that.

- FSD chip v3 probably isn't going to be design-finalized until they use some Dojo models in it. That plus chip shortages means it's unlikely to happen at scale until 2023.

So I think this presentation was meant to make clear the large amount of progress they are making (very impressive), but IMO this was also a subtle way of making 2023 the new target FSD date.

The alternative is they don't think they need dojo to get FSD, and if that is the case why bother building a fully integrated chip manufacturing process?

dawnerd
V3 is what’s currently shipping right now. Elon called it FSD 1 and said FSD 2 might come with Cybertruck.
_coveredInBees
They are already deploying the kinds of models talked about here in their FSD beta software (along with the regular FSD/AP software), and they run just fine on current FSD hardware in the cars with acceptable latency. Just because you need a monstrous supercomputer to train these models doesn't mean you cannot run inference on them on much more lightweight devices. Some reasons:

1. Training requires them to make use of ridiculous amount of data which requires large GPU compute + RAM and super high bandwidth to keep the GPUs fed at all times. Backpropagation requires a LOT of bookkeeping that adds on a lot of RAM requirement at train-time.

2. Inference is a lot easier. You just need to worry about keeping up with your 8 cameras and running them through your network. You also aren't calculating any gradients or dealing with the bookkeeping for backpropagation, so you need a LOT less RAM.

3. There are a ton of tricks you can do to make a trained network a lot more efficient for inference such as lower precision models (int8 or in their case, CFP8), network pruning, compiler tricks, etc.

It is quite impressive that they've managed to fit all this into their current V3 hardware. That being said, they are definitely starting to really hit the limits of V3 and in the long-run will likely need to upgrade the FSD hardware on newer cars. They are probably trying to avoid having to provide a retrofit for all cars in the fleet as that would be prohibitively expensive.
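
Point 3 can be illustrated with the simplest form of post-training quantization, symmetric int8. Real toolchains (and Tesla's CFP8 format) are considerably more elaborate; this only shows the core idea of trading a little precision for much smaller, faster inference.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Map the largest |weight| to 127; everything else scales linearly.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x smaller than float32, with per-weight error bounded by scale/2.
err = np.max(np.abs(w - w_hat))
print(q.dtype, err < scale)  # int8 True
```

Combined with pruning and compiler-level fusion, this kind of transformation is what lets a model trained on a supercomputer fit an in-car accelerator's memory and power budget.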

bko
The newly announced Tesla humanoid bot is around 2:05:11. Expected prototype in 2022.

The demo was an obvious troll but it looks pretty exciting. I'd rather have resources put into actual innovation than learning how to increase the likelihood someone will click a button by 0.01%

https://youtu.be/j0z4FweCy4M?t=7511

hellbannedguy
I had the tv on last night, and woke up thinking Musk made a huge advancement in robotics.
maxdo
I've been watching their presentations from time to time. The progress they've made within 1-2 years is incredible. They are clearly very focused and betting a lot on their approach. I wouldn't be surprised if they become a real Waymo competitor in 1.5 years. By "competitor" I mean a real application of robotaxi, but with some major issues. Waymo's approach still fails on traffic cones on the road, for example.
AppleCandy
How does Tesla's tech compare with Waymo?

I just can't imagine how Waymo or any other company can compete with Tesla's first-principles, full-stack engineering and scaling, which touches virtually every important dimension of consideration. And that is today, not even a year from now.

The foundation Tesla has built and shown on AI day will progressively increase their lead in the years to come, I just don't see how anyone can compete with that level of forward thinking. To me it just seems obvious, so what am I missing about Waymo?

raisedbyninjas
Waymo and everybody else are only trying to solve one AI problem. Tesla wanted to tackle two at the same time. Maybe they can do it, but it's just an extra hurdle that others are stepping around. Also, Waymo has had Level 4 deployed for years.
AtlasBarfed
In geofenced, customized, non-consumer cars?

Tesla/Musk self-driving claims require extensive independent verification, as do the other FSD wannabes (they are all wannabes right now).

I would still like the primary focus to be on highway driving and safety. That is the big efficiency win for logistics, and it's a lot easier than urban streets.

criticaltinker
I've been told the consensus on HN is that Waymo is far ahead of Tesla in terms of tech. See this post & discussion from yesterday: https://news.ycombinator.com/item?id=28236964
ckastner
To think that on Autonomy Day in 2019, Musk promised that Tesla would "have over a million robo-taxis on the road" in 2020 [1].

Musk is a visionary who has delivered on amazing promises. That doesn't mean all of his promises are amazing. Some are quite clearly BS (for example, I don't think robotaxis are impossible in general, but announcing a million of them on a 15-month time frame absolutely was).

[1] eg: https://www.engadget.com/2019-04-22-tesla-elon-musk-self-dri...

loceng
That could have literally been the command from above - "I want over a million robo-taxis on the road" - and then that trickles down into the company.
simiones
Musk is a fraudster who has consistently lied to investors and the public about timelines and features - FSD, SolarCity solar tiles, the robotaxis, "founding" Tesla, "founding" PayPal, the Vegas "loop", cheaper tunnels, hyperloop, Neuralink brain-computer videogames, flying cars, the cave submarine, turbines on cars, "more efficient" batteries, and on and on and on. There are a few claims that have not yet been proven lies - the 40k satellites, the Mars settlement, or the $2M rocket, for example - but I have faith that they will also prove to be lies in time.

His only contributions have been early investment in Tesla and hiring the right people for SpaceX to achieve some decent cost savings and vertical landing on Earth. It's definitely not nothing, but it really shouldn't make anyone believe 99% of what he says, or consider him some kind of genius.

polytely
Starship seems really promising, though, making amazing progress in record time. I'm really optimistic about SpaceX, that whole company seems to be aligned toward one goal only, getting people on mars, if I had any expertise in that area I would sign up in a second.
6d6b73
First "Starship" launch will end with a Big Bada Boom, and will end SpaceX.
polytely
From what I've seen they think it's like a 50/50 chance of the first Starship blowing up, but they are already building the next couple of ones, so they are ready for that. They are not just building one rocket, they are building a rocket assembly line, and it seems like they have cracked the code on that. Not a chance one explosion will end SpaceX; I'm not sure how you could seriously think that if you have been following the development.
6d6b73
A Starship explosion could damage/destroy a large part of the "Starbase" and cause significant damage in Boca Chica. It will also have a terrible impact on the wildlife refuge that's there. In addition to that, Musk's Starlink satellites will cause Kessler Syndrome. Both of these events could not only finish SpaceX but also affect space exploration for decades.
sudhirj
So there have already been 10 or more of them that exploded, no? You mean the first crewed launch? That'll probably be the 100th actual launch, with the tens of failures in the beginning and a streak of successes that's convincing enough to put people on. We had the same problem with Falcon as well, but we've already forgotten. All the initial rockets exploded, but now we've just gotten used to them landing perfectly.
6d6b73
Starship as defined on spacex.com: SpaceX’s Starship spacecraft and Super Heavy rocket (collectively referred to as Starship). So far there have been no flights with both SN and SH as one ship.
simiones
Mars is a fool's goal, only really interesting as a point of pride. There will very obviously not be any kind of Mars "colony" in our lifetimes, and there are good reasons to believe that none will ever exist: there are too few resources on Mars, and too harsh conditions, for it to make any kind of sense.

Apart from that, there are chances that Starship will be a good product, probably making it cheaper to put stuff in orbit than it is today. On the other hand, this may wildly accelerate problems with space junk, especially if the irresponsible plans for putting more than twice the current amount of objects in LEO are allowed to go through.

borkyborkbork
You seem so sure of yourself.
simiones
I'm not sure of myself, I'm quite sure of our collective inability to do anything as complex (and expensive and wasteful of resources) as a Mars colony. Of course, time will tell.
skissane
> There will very obviously not be any kind of Mars "colony" in our lifetimes

Define "Mars colony". A thriving metropolis with millions of residents? Or a permanently crewed research station with a few dozen staff? The former is not going to happen in our lifetimes. The latter isn't certain, but I think it has some chance of happening. And Musk is going to try to set up the latter and call it a "Mars colony", with the declared ambition of eventually developing it into the former. Aspirational naming.

> there are too little resources on Mars, and too harsh conditions, for it to make any kind of sense.

In 2021, Musk has a net worth of $170 billion. He's only 50, he could easily live for another 30 or 40 years. It is entirely plausible that by the time he dies, he will be a multi-trillionaire. To whom, or to what, is he going to leave his trillions? (Heck, forty years from now, quadrillionaires could even be a thing, and old man Musk could be one of them.) He might leave them to a charitable foundation dedicated to paying for the colonisation of Mars. If that happens – well, even if you are right that colonising Mars is pointless, if a multi-trillion, or even multi-quadrillion, dollar foundation is willing to pay for it, it may actually happen, no matter how pointless it may be.

Another possible scenario: sometime in the next few decades, the US successfully lands on Mars, and sets up a modestly sized US-led international research base. The US government continues its policy of excluding China, so China responds by landing on Mars independently and establishing their own competing Mars base. China and the US then become trapped in a new space race, in which each seeks to outdo the other with the size and capabilities of their respective Mars bases, not motivated by any practical economic or military benefits, but simply by national pride and prestige. A US-led Mars base could grow into thousands of people, and US Congress might become politically locked into paying for them out of the federal budget, even if their existence doesn't actually benefit the average US citizen in any tangible way.

(The above two scenarios are not mutually exclusive, it is possible both might happen at the same time.)

2OEH8eoCRo0
"It's not a lie if you believe it." - George Costanza
bob33212
I'm assuming your Twitter feed has become full of information supporting the idea that Elon is a fraud. You are falling into the same feedback loop as people who think Trump really won, or people who think that White people should apologize for existing.

You need to create new social media accounts and try to look for information supporting the opposite idea. I'm not suggesting that Elon is totally honest about everything he says, but there is more to him than what you seem to have heard.

I'm saying this to help you, because you have been misinformed by these feedback-loop algorithms. If you keep your identity tied to this idea that Elon is a total fraud, you are going to become more and more upset over time as the evidence to the contrary accumulates.

simiones
Thank you for your concern, but I have read plenty about his successes or failures to make up my mind. Much of this has been by virtue of listening to him directly, and looking at the ways he intentionally manipulates opinion of himself.
bob33212
OK, well best of luck.
AtlasBarfed
I thought short-seller astroturfing was a 2020 thing.
flixic
He tends to share his ideas publicly before they are "verified with reality", which leads to a lot of them being late, over-promised, under-delivered, etc.

But if you take his statements as "thinking aloud", I much prefer this mode of operation. On the other end of the spectrum is Apple, with zero "thinking aloud" or sharing _anything_ until it's ready to ship. It's ok... but much less fun.

nullifidian
>It's ok...

>thinking aloud

He's a CEO and is legally bound not to lie to or mislead investors, though. For example, Shkreli is in part serving a federal sentence for lying to investors who didn't lose any money.

hellbannedguy
Shkreli raising the price of that medication was legal. (I know you know that.)

We still have no laws preventing another greedy person doing the same.

They went after him for PR purposes.

Personally, I wish they left him alone, and instead changed drug law that will prevent these huge price hikes in the future.

In a weird way, lonely Hoodie Boy exposed a huge flaw in the system, and we should be thanking him?

ckastner
"One million robotaxis in 15 months" is not thinking aloud, it's a very concrete prognosis.
flixic
What I mean to say is that you should take what he's saying as "talking aloud" and not as any concrete prognosis, even if he makes it sound that way. Then it's fun.
RivieraKid
This is a very naive take. There are several examples of him being intentionally dishonest, e.g. the funding secured thing. He's a sociopathic narcissist.
simiones
These are not random ideas that he throws around; these are all examples of products he has held massive press conferences for, announcing them with fanfare and referencing them again and again in public speeches. He really does seem to believe in some of them, even some that are quite obviously impossible - casting serious doubt on his intelligence or understanding of science (which is not surprising, given he only has an undergraduate degree in physics as science credentials).
cogman10
Yeah, I don't think it's possible with the current hardware. The best Tesla can hope for is Level 3, mainly due to the lack of redundancy.
aantix
What’s the biggest advancement/feature, in your opinion?
6d6b73
You mean biggest fraud? Tesla Bot of course
artemonster
I can't believe people are actually "buying" this. Cue all the pop "tech" outlets blowing this up. After this goes bust, the next AI winter will be upon us.
cogman10
Dojo. Because it's real and pretty impressive. It shows Tesla has some pretty good hardware chops.
sremani
There is a wow factor in the presentation, especially in how many of the things they are making from the ground up and in-house, at times making me feel like maybe they want to be an AI company now.

Definitely the TSLA stock price increase is being reflected in the massive investments into AI.

avelis
Stock price will likely not go up the day after. Analysts have to break down the implications in terms of potential revenue models that Tesla could gain from this. From there we can establish a future price target based on potential revenue. For now the market wants to focus on revenue today, that is, mostly products sold today and near-term products soon to be available.
sremani
What I meant was, given the meteoric rise in valuation over the past years, Tesla took a lot of capital and invested it in this division.
tenpies
> Definitely the TSLA stock price increase is being reflected into the massive investments into AI.

Actually, the best indicator of a stock price increase has been an unknown buyer coming in and buying huge amounts of OTM calls expiring that week, with the hope of causing a gamma squeeze.

Watch the option flow.

You'll also note that it just hasn't been the same since Archegos imploded, for example, but there is definitely another, smaller buyer doing the same. It's almost certainly a group effort, given the amount of money at play.

bjornlouser
"... the car can be thought of as an animal..."
criticaltinker
Previous discussion from 13 hours ago: https://news.ycombinator.com/item?id=28240882
6d6b73
That discussion was already pushed off the first page, probably because too many people now see Musk for the fraud he is.

< Bracing for down votes from Tesla Bots > :)

cardosof
I see this opinion over and over again, but I don't know the reasons sustaining it, so I will ask: why is he a fraud? Are his companies fraudulent as well or just their iconic leader/founder?
simiones
Well, he has settled for securities fraud with the SEC for claiming over Twitter that he would take Tesla private; and he is currently being sued for literally defrauding Tesla investors in order to prop up his bad SolarWinds investment, including outright lies about the state of SolarWinds technology (he had a huge press conference before the acquisition vote about a series of houses tiled with SolarWinds solar tiles, that he knew at the time were not working).

He has also defrauded Tesla customers by claiming that the cars they were sold will have access to full self-driving in a future software update, which is obviously not going to happen (FSD has already been claimed to actually mean L3 self-driving, and even that is unlikely to be achievable without hardware modifications).

He has consistently come out before the public and claimed that he and his companies will achieve impossible things in a year or two, or presented failed, ancient tech ideas as future successes ("hyperloop", i.e. the 1850s vacuum train idea, which has never worked and will never work). I gave a list in another post, but I will copy it here again:

FSD, SolarWinds solar tiles, the robotaxis, "founding" Tesla, "founding" PayPal, the Vegas "loop", cheaper tunnels, hyperloop, NeuraLink brain-computer videogames, flying cars, the cave submarine, turbines on cars, "more efficient" batteries, Space Ship point-to-point, Tesla semi

mlindner
If you're going to lie about Tesla at least get company names right. Tesla has nothing to do with SolarWinds.
simiones
Yes, apologies, meant SolarCity. I'm curious which lie you think I told, apart from this mistake?
optimiz3
You can't cite your examples of what hasn't been delivered yet without also counting the things people said were impossible that have been delivered (reusable rockets, M3, etc.).

FWIW it's SolarCity, not SolarWinds. The SolarCity acquisition was also put to a shareholder vote. After the vote passed one had plenty of opportunity to sell their shares. Of course that would have been dumb given Tesla's and Musk's outstanding performance.

simiones
Oops, you're right about SolarCity, sorry about that. However, you're glossing over some important details regarding the vote: Musk was supposed to recuse himself from the whole process, given the obvious conflict of interest, and he did not.

Related to what has been delivered: reusable rockets are not entirely new; the Space Shuttle was a reusable vehicle that flew many times, and various versions of reusable rockets have been tried since the 1990s, though none achieved orbit before SpaceX. The M3 is a car; I'm not sure why anyone believed it couldn't be delivered. It was delayed to some extent, though.

optimiz3
> given the obvious conflict of interest, and he did not

This is being litigated and there has been no finding of fact. Even if true, who cares? Tesla owes much of its success to Musk's leadership.

> reusable rockets are not entirely new

Absurd position. Before SpaceX, reusable rockets were mostly concept and theory only. SpaceX made them practically viable and useful.

A quick search of HN will show countless comments and skeptics of the M3 all through its progress, just as people are skeptical of FSD development now.

simiones
> This is being litigated and there has been no finding of fact. Even if true, who cares? Tesla owes much of its success to Musk's leadership.

I can have an opinion on a civil case even before it has finished being litigated. And if he were to lose the litigation (it will almost certainly be settled out of court, but that's beside the point), I will feel entirely entitled to call him a fraudster, even if he's done a lot for Tesla.

> Absurd position. Before SpaceX, reusable rockets were mostly concept and theory only. SpaceX made them practically viable and useful.

A reusable second stage has existed for ~40 years: the Space Shuttle. A reusable single-stage rocket with VTOL was in the advanced prototype stage in 1993-96: the McDonnell Douglas DC-X and DC-XA, which flew up to 3.1 km. In its initial version, it was successfully flown and recovered 7 times, up to 2.5 km, even executing a successful abort after launch on one occasion. It was then upgraded and flew and was recovered an additional 3 times (reaching up to 3.1 km), until a fourth flight was not successfully recovered. For various reasons, no additional funding was provided.

I don't mean to say SpaceX did nothing innovative here, they achieved a great success and pushed human engineering forward. But it's also wrong to say that we went from 0/theory only to reusable rockets thanks entirely to SpaceX.

Regarding the Model 3, I believe the skepticism was about timelines and potential popularity or production volume. I don't think you'll find anyone who didn't believe Tesla could come up with a third car at all. Of course, the popularity/production-volume skeptics were proven entirely wrong. In contrast, FSD is a research-level project, with no reason to assume that it will be feasible in the next 100 years, never mind 2 (Elon lies) or 10 (optimistic realists).

misiti3780
this sums it up from three weeks ago: https://news.ycombinator.com/item?id=27997438
dougmwne
People have been expressing this skepticism for many years and every time there's a groundbreaking accomplishment the goalposts get moved. At what point are we able to admit that he's the most accomplished CEO since Steve Jobs?

I personally thought the rocket reusability sounded far-fetched and doubted the Model 3 was going to be successful. I also thought there was no way they'd ever manage to get so many Starlink satellites in orbit before bankrupting themselves. Wrong, wrong and wrong again.

The self driving doesn't seem like it will ever work, the Starship sounds like a fantasy and the robot sounds like an episode of Star Trek. But I'm just some guy who has been wrong every time about what Elon's companies can do, so what does my opinion matter?

AtlasBarfed
I love posting the landing rocket to the fraudster astroturfers.
Aug 20, 2021 · 2 points, 0 comments · submitted by underscore_ku
Aug 20, 2021 · ipsum2 on Tesla AI Day [video]
I don't know if anyone else caught this, but their "optimal trajectory" planned path is illegally changing lanes in an intersection, at -1:26:00: https://www.youtube.com/watch?v=j0z4FweCy4M. Pretty funny.
vanshg
At least in California, it is not illegal to change lanes in an intersection, given that the maneuver does not pose any other risk
Cogito
I remember reading something about the legality of changing lanes in an intersection in different jurisdictions, and a quick google gives me this [0].

The consensus seems to be that it's legal in many places; however, a dangerous lane change is always illegal, and changing lanes in an intersection is almost always dangerous.

[0] https://www.reddit.com/r/NoStupidQuestions/comments/25kbg3/i...

Aug 19, 2021 · 3 points, 0 comments · submitted by martythemaniak
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.