Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Level 3 might happen in the city before it happens at highway speeds where reaction times need to be shorter and the consequences of mistakes tend to be much higher because of the speeds involved.

Level 4-5 on highways, who knows when that will happen
Tesla and everyone else clearly feel the opposite. Outside of construction zones, the car barely nags on the interstate.

Highway is easier, and that's why hands-free is limited to it.
 
He promised it in January, but that didn't happen.

He never said whether it would be situational, which it likely will be, as the current nags are more frequent in difficult situations.

How much do you think a solid Level 2 ADAS would boost the stock for a company like Tesla, logically, regardless of reviews? Again, it's my opinion that licensing or driverless L4+ is when we'll see a huge leap, and without diving into that too much, I think we are years away. Deliveries, profits, additional products, and adoption are the biggest drivers for TSLA in the very near future. In the long run, I do see actual FSD being very good for us financially.
The only thing we disagree about is how much it will take for Wall Street to wake up.

We may disagree, but I like the tone of this discussion.

Six months ago, a lot of folks on this board were still thinking it might never happen.
 
I'm not sure what people are expecting out of a hands-free urban Level 2 ADAS; I personally would not pay one red cent extra for that.

A Level 3 system that allows me to stop paying attention to the road during long-haul highway driving is something I'd pay for, but I'm loath to think about how long it'll be before a company takes responsibility for what vehicles are doing at highway speeds.
Level 3 will happen. Level 4 will happen. Level 5 will happen.

Once it is shown that V12 really works, then it is just a matter of doing more and more training. The computing power for faster and faster training is coming online as we speak.
 
Level 3 will happen. Level 4 will happen. Level 5 will happen.

Once it is shown that V12 really works, then it is just a matter of doing more and more training. The computing power for faster and faster training is coming online as we speak.
I think Tesla will choose not to go to Level 3. Elon said he doesn't see value in it, and that's likely because of the liability shift. Regardless of whether a liability shift is formally required or announced, it would happen if someone took an accident to court.
 
I think Tesla will choose not to go to Level 3. Elon said he doesn't see value in it, and that's likely because of the liability shift. Regardless of whether a liability shift is formally required or announced, it would happen if someone took an accident to court.
I don't see how you came to this conclusion. Tesla clearly wants robotaxis, which are level 4 or 5.

I don't remember Elon saying he doesn't see value in Level 3 autonomy. I do remember him saying that the level definitions are pointless.

But hey, that's just my memory. I'm too lazy to look it up.
 
I don't see how you came to this conclusion. Tesla clearly wants robotaxis, which are level 4 or 5.

I don't remember Elon saying he doesn't see value in Level 3 autonomy. I do remember him saying that the level definitions are pointless.

But hey, that's just my memory. I'm too lazy to look it up.
At Level 3 the driver must take over when prompted. It's conditional autonomy for situations like traffic jams, but it could be used on the highway.

On a JRE podcast he said he thought Level 3 was pointless and that Tesla was focusing on full autonomy and would deliver it once it's ready.
 
I don't see how you came to this conclusion. Tesla clearly wants robotaxis, which are level 4 or 5.

I don't remember Elon saying he doesn't see value in Level 3 autonomy. I do remember him saying that the level definitions are pointless.

At Level 3 the driver must take over when prompted. It's conditional autonomy for situations like traffic jams, but it could be used on the highway.

On a JRE podcast he said he thought Level 3 was pointless and that Tesla was focusing on full autonomy and would deliver it once it's ready.
I think he meant that the level definitions themselves were pointless.
 
That's not what he said. He was specifically talking about Level 3. He's said multiple times that Tesla will have either Level 4 or 5, so he obviously uses the levels.
L3 is perhaps the most useless level. It seems to be a liability transfer: the human is no longer required to monitor the driving, but only under very limited conditions. The only example they cited is "traffic jam chauffeur".
 
Where have I been? When did Gali change from being Tesla's biggest investor cheerleader (though his ideas of where Tesla is going are sometimes fanciful)?
For me it was a year or two ago, when he posted a video where, in my eyes, it was pretty clear he yanked the wheel while driving on FSD and then pretended to save the day by yanking it back, exclaiming, "What was that?"
 
those who aren't Tesla followers don't know his rides are curated

Nonsense. Most of those videos are at regular speed with a full view of the accelerator pedal and steering wheel. On the occasions when he does intervene, he makes a point of mentioning it.

It is possible he only showcases the rides that go well and ignores the ones that got him into trouble. It is possible he chooses routes that FSD can handle easily. But the ones he has uploaded (probably around 30+ videos) are all legitimate.
 
Nonsense. Most of those videos are at regular speed with a full view of the accelerator pedal and steering wheel. On the occasions when he does intervene, he makes a point of mentioning it.

It is possible he only showcases the rides that go well and ignores the ones that got him into trouble. It is possible he chooses routes that FSD can handle easily. But the ones he has uploaded (probably around 30+ videos) are all legitimate.
You say nonsense, then "possibly" agree with me.

He doesn't upload bad rides, he hand-picks routes, and he uses his knee/CAN bus and now potentially Elon mode (he claims he doesn't have it) to avoid the wheel nag and present perfect hands-free driving. So he selects, organizes, and chooses which rides to upload, and misrepresents the actual experience of using FSD Beta.

There's a reason why every other YouTuber makes little jokes about him on their rides and he's universally called a shill in the FSD Beta forums. His rides do not represent what FSD is... he's a hype tool.
 
You say nonsense, then "possibly" agree with me.

He doesn't upload bad rides, he hand-picks routes, and he uses his knee/CAN bus and now potentially Elon mode (he claims he doesn't have it) to avoid the wheel nag and present perfect hands-free driving. So he selects, organizes, and chooses which rides to upload, and misrepresents the actual experience of using FSD Beta.

There's a reason why every other YouTuber makes little jokes about him on their rides and he's universally called a shill in the FSD Beta forums. His rides do not represent what FSD is... he's a hype tool.
Has he started doing this recently? Because I have watched most of his videos after every major update, and there are plenty of them with disengagements. You have to watch the raw footage to catch them, as opposed to the sped-up versions.

He once took a ride with Gali in San Francisco and the car went toward a bicyclist. Gali said in the video, "Should we cut this out?" Omar said no, and that clip went viral, was shared among TSLAQ, and ended up in montages and all those "investigative news reports on FSD." Many were also saying the Tesla cult purposely cuts out the bad parts, because Gali was going to.

So I don't know... I think Omar is more honest than you think.
 
uses his knee/CAN bus and now potentially Elon mode (claims he doesn't have it) to avoid the wheel nag, presenting perfect hands-free driving

He definitely doesn't use his knee. And he does get nags; he scrolls the volume wheel to cancel them. To claim someone steers with his knee, or even nudges it, when the camera clearly catches every action is just plain ludicrous.
 
He definitely doesn't use his knee. And he does get nags; he scrolls the volume wheel to cancel them. To claim someone steers with his knee, or even nudges it, when the camera clearly catches every action is just ludicrous.
Just for fun, post this in the FSD 11.X thread and see what responses you get.

There are plenty of videos he's uploaded labeled "raw" where he doesn't touch the volume knob or the wheel once through an entire ride and then claims "FSD is almost finished" in the title... He's mocked because of this, and because of his "race" with Waymo and Cruise. It sounds to me like you've bought the hype or don't actually have FSD, but again, try posting this there and see the response from the other testers.
 
I disagree. We've seen people like Omar touting this for years... those who aren't Tesla followers don't know his rides are curated. Also, there's no promise that v12 is fully hands-free.

I don't think FSD leads to a large stock jump until it's licensed by someone else (confirmed) or there is actual driverless functionality. Regardless of what Elon, Omar, or others say or show about FSD now, it hasn't moved the needle for the stock much.
While progress on the binary scale has not moved from 0 to 1 yet, Tesla has been investing massively in FSD. A great deal of compute is coming online, and their large, diverse, and talented team of engineers is working hard at the problem. Compared to AI Day last year, expect compute and data to both have grown 10x, and efficiency by some factor as well.

The interesting thing with GPT-2, GPT-3, GPT-4, etc. is that they show the increase in performance and capability that comes just from scaling up compute and data, with some minor refinements. So as long as Tesla keeps throwing money, compute, and data at the problem, the capability will eventually get there.
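The scaling intuition can be sketched with the usual power-law form from the LLM literature. To be clear, the constant and exponent below are invented purely for illustration; they are not Tesla's (or anyone's) actual numbers:

```python
# Toy power-law scaling curve, loosely in the spirit of published LLM
# scaling laws: loss falls as a power of training compute.
# The constant `a` and exponent `alpha` are made-up illustrative values.

def scaled_loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Toy scaling law: loss ~ a * compute^(-alpha)."""
    return a * compute ** (-alpha)

for c in (1e20, 1e21, 1e22):  # each step is 10x more compute
    print(f"{c:.0e} FLOPs -> loss {scaled_loss(c):.3f}")
# With these toy constants, every 10x in compute shaves roughly 11%
# off the remaining loss -- steady gains, but diminishing in absolute terms.
```

That diminishing-returns shape is also why "just add compute" gets you steadily better, but not suddenly perfect.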
 
The interesting thing with GPT-2, GPT-3, GPT-4, etc. is that they show the increase in performance and capability that comes just from scaling up compute and data, with some minor refinements. So as long as Tesla keeps throwing money, compute, and data at the problem, the capability will eventually get there.
I do hope so, in the sense that I hope there is no ceiling to be hit.

My largest concern with camera-only input is distance perception. I'm not talking about "rough" distance perception when following a lane and staying a given number of metres behind a lead car, but precise distance perception down to the centimeter for parking.

Source: my 2023 MY without USS, which uses "Tesla Vision" for distance estimation when parking. The distances are inaccurate and jittery (i.e. they swing between different values without any change of input).
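As an aside, the jitter part (estimates swinging with no change of input) is, in principle, the easy bit to hide with simple temporal filtering. A minimal sketch, assuming nothing about Tesla's actual pipeline; the `alpha` here is a made-up tuning value:

```python
# Minimal sketch: exponential moving average to damp jittery distance
# readings. A real parking stack would likely use something more
# principled (e.g. a Kalman filter), but the idea is the same.

def smooth(readings, alpha=0.3):
    """Return EMA-smoothed copies of a stream of distance estimates (metres)."""
    out, state = [], None
    for r in readings:
        # Blend the new reading with the running estimate.
        state = r if state is None else alpha * r + (1 - alpha) * state
        out.append(round(state, 3))
    return out

# Jittery raw estimates swinging between 0.45 m and 0.70 m:
print(smooth([0.50, 0.70, 0.45, 0.65, 0.48]))
```

Smoothing only hides the jitter, of course; it does nothing for a consistent bias in the underlying estimate.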

@heltok , with your technical background, could you explain to me if accurate distance perception is:
A) possible? Only at slow speeds or also at higher speeds?
B) subject to major improvement with more training/compute?

Thanks in advance. If I could get a technical explanation why I need not worry about this, I'm in the camp of "Tesla will solve autonomy with current hardware".
 
@heltok , with your technical background, could you explain to me if accurate distance perception is:
A) possible? Only at slow speeds or also at higher speeds?
B) subject to major improvement with more training/compute?

Thanks in advance. If I could get a technical explanation why I need not worry about this, I'm in the camp of "Tesla will solve autonomy with current hardware".
First, it should be noted that sonar is far from "accurate". Lidar is a lot better: with lidar you get down to cm-level precision in the raw measurements, and from those measurements you then estimate objects with slightly worse, but still very good, accuracy.

With cameras it depends on the lens and the number of measurements (frames, cameras, etc.). Not sure how good it is, to be honest; I would guess the error is on the order of a few percent. A neural network will, in theory, be able to get the most out of the video. Imagine a large team of experts who look at the video, have billions of other videos for reference, and can take whatever time they want to calculate the distance. Tesla should be very good at this by now.

If you have multiple cameras seeing the same thing, or the same camera seeing it from multiple poses, it is very easy to do; you don't even need neural networks, some basic algebra will do it once you have extracted the features. I have done it, but it was a while ago hehe. Here is a video showing one guy doing it who manages to get to centimeter-level precision (on 20-30 cm distances hehe):
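The "basic algebra" in question is ordinary stereo triangulation: for a rectified two-view pair, depth is Z = f · B / d (focal length times baseline over disparity). A sketch with purely illustrative numbers, not the specs of any Tesla camera:

```python
# Classic depth-from-disparity for a rectified stereo pair: Z = f * B / d.
# Focal length (in pixels), baseline, and disparity below are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in metres of a feature matched between two rectified views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 10 cm baseline, feature shifted 400 px between views:
print(depth_from_disparity(1000.0, 0.10, 400.0))  # 0.25 m -- parking range
# At this range, one full pixel of matching error moves the estimate by
# less than a millimetre:
print(depth_from_disparity(1000.0, 0.10, 399.0) - 0.25)
```

Since depth error grows with the square of distance for fixed pixel error, this is exactly why camera triangulation is very precise up close (parking) and only roughly right far away, which matches the point below about longer distances.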

For longer distances it's not really a big deal, imo. Cameras are fine. We humans cannot estimate to meter-level precision either, and we can drive.

The problem is what the cameras cannot see. With video and memory, the system can remember what it saw before it got to the current position, which helps a lot. But if a cat has moved to where the camera cannot see, the car is blind. Humans are also pretty blind to objects below the front bumper when we drive, which doesn't seem to stop us.

If Tesla adds a front bumper camera this issue is solved imo.
 