
FSD Beta 10.69

The FSD team said (at the first AI Day?) that radar was too noisy and too low-resolution for proper fusion with vision. Imagine a passenger who yells at every other overpass, "Brake! Hard! There's a car right in front of us!" There was too much disagreement between the sensors, and as vision improved, radar became a distraction instead of a supplement. A better radar may help in low-visibility scenarios to become superhuman, but to get on par, proper vision plus "memory" should do. They already have short-term memory (occluded obstacles are remembered), and map data plays the role of long-term memory. Only, it's often wrong. I believe they still use OSM (OpenStreetMap) for navigation. As it's open source, everyone can contribute, e.g. here.
I've corrected things on OpenStreetMap, but Tesla still gets them wrong (the correction was 18 months ago and was formally approved by OSM).

I've also seen the fusion argument made before, and I've also seen people say that it's not an issue, especially with newer systems, so it's hard to know what to believe. Is it because Tesla had an old, poor-resolution system, because they hired engineers and programmers with expertise in image processing and not radar fusion, or because it actually can't be done?
 
I've corrected things on OpenStreetMap, but Tesla still gets them wrong (the correction was 18 months ago and was formally approved by OSM).

I've also seen the fusion argument made before, and I've also seen people say that it's not an issue, especially with newer systems, so it's hard to know what to believe. Is it because Tesla had an old, poor-resolution system, because they hired engineers and programmers with expertise in image processing and not radar fusion, or because it actually can't be done?
You also need to update TomTom.

 
I've corrected things on OpenStreetMap, but Tesla still gets them wrong (the correction was 18 months ago and was formally approved by OSM).

I've also seen the fusion argument made before, and I've also seen people say that it's not an issue, especially with newer systems, so it's hard to know what to believe. Is it because Tesla had an old, poor-resolution system, because they hired engineers and programmers with expertise in image processing and not radar fusion, or because it actually can't be done?

Whatever their source is, it's a shame that they don't leverage their huge fleet of FSD-equipped cars to automatically correct it. I can only guess that it's not high on their priority list at this time.

When radar (or any other sensor) is frequently producing wrong signals, how do you decide which are the few cases where it's actually correct and should be trusted over the more reliable sensor? Since machine learning is all about statistics under the hood, it's a fundamental conundrum. A better-quality radar would solve this. My take from the various reports is that perception with the existing vision stack is no longer the primary shortcoming; the planner is the culprit in most interventions.
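For what it's worth, the textbook answer for combining two noisy estimates is inverse-variance weighting: the less certain sensor automatically gets less say. The catch, which is exactly the conundrum above, is that the weights are only as good as each sensor's reported uncertainty. A toy sketch, purely illustrative and nothing to do with Tesla's actual stack, with made-up numbers:

```python
def fuse_range_estimates(vision_m, vision_var, radar_m, radar_var):
    """Inverse-variance weighting: each sensor's weight is 1/variance, so the
    noisier sensor contributes less to the fused estimate. Illustrative only."""
    w_v, w_r = 1.0 / vision_var, 1.0 / radar_var
    fused = (w_v * vision_m + w_r * radar_m) / (w_v + w_r)
    fused_var = 1.0 / (w_v + w_r)
    return fused, fused_var

# Overpass scenario: vision sees clear road ~120 m ahead, a noisy radar return
# claims a target at 30 m. If radar honestly reports a huge variance it is
# nearly ignored; if it over-reports its confidence it drags the estimate down.
print(fuse_range_estimates(vision_m=120.0, vision_var=4.0, radar_m=30.0, radar_var=400.0))
print(fuse_range_estimates(vision_m=120.0, vision_var=4.0, radar_m=30.0, radar_var=4.0))
```

If the radar claims a tight variance on a phantom overpass return, the fused estimate gets dragged toward the phantom, which is the "brake for the bridge" failure in a nutshell.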
 
I can't find an exact quote at the moment, but I'm almost positive Elon (years ago?) promised something along the lines that Tesla cars will share various info with each other for near-real-time benefit? I know the feature re: road surface detection, to automatically adjust the suspension based on feedback from Teslas that traveled the road before you did, was (allegedly) rolled out this year, but other than that, I haven't heard a thing.
 
My take from the various reports is that perception with the existing vision stack is no longer the primary shortcoming; the planner is the culprit in most interventions.
The planner has a lot of problems to be sure, but I wonder a lot about the noise/jitter and confidence of the perception, and how that impacts the planner and behavior.

HD Radar would definitely fill some holes and be a welcome addition, but it would also likely leave a lot of perception shortcomings unresolved.
 
The FSD team said (at the first AI Day?) that radar was too noisy and too low-resolution for proper fusion with vision. Imagine a passenger who yells at every other overpass, "Brake! Hard! There's a car right in front of us!" There was too much disagreement between the sensors, and as vision improved, radar became a distraction instead of a supplement. A better radar may help in low-visibility scenarios to become superhuman, but to get on par, proper vision plus "memory" should do. They already have short-term memory (occluded obstacles are remembered), and map data plays the role of long-term memory. Only, it's often wrong. I believe they still use OSM (OpenStreetMap) for navigation. As it's open source, everyone can contribute, e.g. here.

I remember that. My takeaway was that the TSLA AI team is something short of a reliable reference for radar capabilities. On the one hand they talk about radar target/signal dropouts, and on the other they tout the brilliance of storing occluded objects to memory. What they don't say is that radar processing can do the same thing. And the bridge issue is most likely poor antenna specs/design/test.

Whether acknowledged or not, many have experienced vision-only design shortcomings like occlusions, reflections, shadows, excess sunlight, night driving, weather, and my favorite: the brief loss of correlated raw image data from abrupt changes in vehicle yaw and/or vehicle porpoising over large road bumps...
 
the brief loss of correlated raw image data from abrupt changes in vehicle yaw and/or vehicle porpoising over large road bumps...
Unlike the other stuff in the list, not likely a vision-only shortcoming, since my eyes don’t have this problem. Maybe you need higher frame rate, but probably existing sensors can take care of these brief disturbances.

Reflections probably mostly a non-issue for vision only too, and can be used to enhance and extend perception in most cases. (Both through intentional and unintentional mirrors, to see what is coming.)
 
I can't find an exact quote at the moment, but I'm almost positive Elon (years ago?) promised something along the lines that Tesla cars will share various info with each other for near-real-time benefit? I know the feature re: road surface detection to automatically adjust the suspension was (allegedly) rolled out this year, but other than that, I haven't heard a thing.
They demoed inference of road geometry and lane detection. I haven't seen potholes, but that's not too hard a task compared to other issues on their agenda. Depending on the confidence of the detected feature and its safety relevance, corroboration from multiple cars may be required before it's considered ground truth.

Updating a map database would be the best way to share with the fleet, except for temporary / time-critical information like "oil patch at the exit of the next curve", where direct vehicle-to-vehicle communication would make sense.
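Roughly how I picture the corroboration step, as a toy sketch; the feature types, thresholds, and function names below are all invented for illustration, not anything Tesla has described:

```python
from collections import defaultdict

# Hypothetical corroboration thresholds: the more safety-critical the feature,
# the more independent cars must report it before the shared map is updated.
CORROBORATION_THRESHOLD = {
    "speed_limit_sign": 5,
    "lane_geometry_change": 10,
    "stop_sign": 20,
}

_reports = defaultdict(set)  # (feature_type, rounded location) -> reporting car IDs

def report_feature(feature_type, location, car_id):
    """Record one car's detection; return True once enough independent cars
    agree that it can be treated as ground truth and written to the map."""
    _reports[(feature_type, location)].add(car_id)
    return len(_reports[(feature_type, location)]) >= CORROBORATION_THRESHOLD[feature_type]

# Five different cars report the same new speed limit sign at the same spot:
for car in range(5):
    promoted = report_feature("speed_limit_sign", (47.61, -122.33), car_id=car)
print(promoted)  # True after the fifth independent report

# Temporary, time-critical stuff ("oil patch at the exit of the next curve")
# would skip the map entirely and go out as a direct fleet/V2V broadcast.
```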
 
Unlike the other stuff in the list, not likely a vision-only shortcoming, since my eyes don’t have this problem. Maybe you need higher frame rate, but probably existing sensors can take care of these brief disturbances.

Reflections probably mostly a non-issue for vision only too, and can be used to enhance and extend perception in most cases. (Both through intentional and unintentional mirrors, to see what is coming.)

Yep. They are all marginal inherent issues. Each sensor design has its own issues to work through, but they can also potentially complement one another.
 
I think it’s cute how we talk about all of these cool things, concepts, ideas, big words, etc as if we are at a Tesla corporate meeting. In reality none of it matters. No matter what we say on here, Tesla is going to do what they do, and we’ll get whatever they give us.
You're kidding right? You mean Elon doesn't read my posts.
 
Boring video with less weaving all over the road (there are two clear instances).

But I said I would post it so I am. The exciting thing is we will have an external drone cam this time, eventually. Will be another post from @Daniel in SD, who is helping validate FSD Beta externally, since he unwisely forgot to buy the revolutionary hardware for himself.

One intervention! This version has a tendency to abort on right turns from main roads onto residential/commercial minor side roads. Just the way it is.

By my count I would have intervened about 8 times if I had been driving normally (once every 45 seconds on average). So doing pretty well.

 
You also need to update TomTom.

Yup. Did that, too.
Whatever their source is, it's a shame that they don't leverage their huge fleet of FSD-equipped cars to automatically correct it. I can only guess that it's not high on their priority list at this time.

When radar (or any other sensor) is frequently producing wrong signals, how do you decide which are the few cases where it's actually correct and should be trusted over the more reliable sensor? Since machine learning is all about statistics under the hood, it's a fundamental conundrum. A better-quality radar would solve this. My take from the various reports is that perception with the existing vision stack is no longer the primary shortcoming; the planner is the culprit in most interventions.
Agreed. Think of all the money Google spends driving around to map and photograph streets. Tesla could get that for next to nothing. It seems like they could also have a system set up in which a discrepancy between the map database and the cameras would be flagged for review and/or used to automatically update the database.

Speed limits are a great example. If the database says the speed limit is 45 and 20 Teslas drive by and identify a sign saying the limit is 40, it could be easily corrected.
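Something like this back-of-the-envelope check would do; the thresholds and names are just my guesses for illustration:

```python
from collections import Counter

def reconcile_speed_limit(db_limit, observed, min_reports=20, min_agreement=0.9):
    """Compare the map database value against recent fleet readings of the
    posted sign. Flag/apply a correction only when a strong majority disagrees.
    Thresholds are invented for the example."""
    if len(observed) < min_reports:
        return db_limit, "keep: too few observations"
    top_value, count = Counter(observed).most_common(1)[0]
    if top_value != db_limit and count / len(observed) >= min_agreement:
        return top_value, "correct: fleet consensus disagrees with database"
    return db_limit, "keep: consensus matches database"

# Database says 45, twenty cars drive past, and nineteen read the sign as 40:
print(reconcile_speed_limit(45, [40] * 19 + [45]))  # -> (40, 'correct: ...')
```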
 
Unlike the other stuff in the list, not likely a vision-only shortcoming, since my eyes don’t have this problem. Maybe you need higher frame rate, but probably existing sensors can take care of these brief disturbances.
There are automatic neural feedback pathways in the brain that move your eyes to counter head movements.
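The camera analogue would be feeding the gyro's pitch/yaw rate forward so the tracker knows how far the image should have shifted during the jolt, rather than treating it as the world moving. A toy sketch under a small-angle assumption; the frame rate and focal length are made up:

```python
import math

def predicted_pixel_shift(pitch_rate_dps, yaw_rate_dps, frame_dt_s, focal_length_px):
    """Vestibulo-ocular-reflex analogue for a camera: convert the gyro's
    rotation over one frame into an expected image shift (small-angle approx),
    so frame-to-frame tracking can compensate rather than lose correlation."""
    dy = math.radians(pitch_rate_dps * frame_dt_s) * focal_length_px
    dx = math.radians(yaw_rate_dps * frame_dt_s) * focal_length_px
    return dy, dx

# A 5 deg/s pitch jolt over a big bump, 36 fps camera, ~1400 px focal length:
print(predicted_pixel_shift(5.0, 0.0, 1 / 36, 1400.0))  # roughly a 3 px vertical shift
```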
 
The weather and roads are nice enough that I was able to use FSD for a full trip today, and I noticed that the nags seem to be less frequent. Instead of every 10 seconds, they seem to be 30+ seconds apart. Has anyone else noticed this? I didn't try looking away to check, but maybe they're actually using gaze detection to reduce nags now!