sleepydoc
Well-Known Member
I've corrected things on OpenStreetMap but Tesla still gets them wrong (the correction was 18 months ago and was formally approved by OSM).

The FSD team said (at the first AI Day?) that radar was too noisy / too low-resolution for proper fusion with vision. Imagine a passenger who yells at every other overpass, "Brake! Hard! There's a car right in front of us!" There was too much disagreement between the sensors, and as vision improved, radar became a distraction instead of a supplement. A better radar may help in low-visibility scenarios to become superhuman, but to get on par, proper vision plus "memory" should do. They already have short-term memory (occluded obstacles are remembered), and map data plays the role of long-term memory. Only, it's often wrong. I believe they still use OSM (OpenStreetMap) for navigation. As it's open source, everyone can contribute, e.g. here.
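The "yelling passenger" problem above can be sketched in a toy simulation. All the numbers here are made up for illustration (hypothetical noise levels, clutter rate, and ranges, not Tesla's actual sensor specs): a radar that is nominally more precise than vision, but occasionally returns a stationary overpass reflection, drags a naive fusion estimate away from the truth.

```python
import random
from statistics import mean

random.seed(0)

TRUE_RANGE = 50.0      # metres to the car ahead (made-up scenario)
VISION_STD = 1.0       # vision range noise (hypothetical)
RADAR_STD = 0.5        # radar is nominally more precise...
CLUTTER_RATE = 0.3     # ...but sometimes returns overpass clutter
CLUTTER_RANGE = 5.0    # a "phantom" stationary object right ahead

def vision_reading():
    return random.gauss(TRUE_RANGE, VISION_STD)

def radar_reading():
    if random.random() < CLUTTER_RATE:   # overpass reflection
        return random.gauss(CLUTTER_RANGE, RADAR_STD)
    return random.gauss(TRUE_RANGE, RADAR_STD)

def fuse(v, r):
    # Naive inverse-variance fusion, trusting radar's nominal precision.
    wv, wr = 1 / VISION_STD**2, 1 / RADAR_STD**2
    return (wv * v + wr * r) / (wv + wr)

vision_err, fused_err = [], []
for _ in range(10_000):
    v, r = vision_reading(), radar_reading()
    vision_err.append(abs(v - TRUE_RANGE))
    fused_err.append(abs(fuse(v, r) - TRUE_RANGE))

print(f"vision-only MAE: {mean(vision_err):.2f} m")
print(f"naive fusion MAE: {mean(fused_err):.2f} m")
```

With these invented numbers, the fused estimate ends up far worse than vision alone, because the fusion rule weights radar by its nominal precision and has no way to know a given return is clutter. Real systems use gating and track association to reject such returns, which is exactly the part that gets hard when the two sensors disagree often.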
I've also seen the fusion argument made before, and I've also seen people say that it's not an issue, especially with newer systems, so it's hard to know what to believe. Is it because Tesla had an old, low-resolution radar, because they hired engineers and programmers with expertise in image processing rather than radar fusion, or because it actually can't be done?