
Opinion: what I think Tesla should do to "solve" FSD

First, I want to acknowledge all the hard work of the Tesla FSD team. Tesla has spent years building a sophisticated vision-only system, and the perception part is very advanced. I am not saying that Tesla Vision is perfect; there are still gaps in the perception system. But I feel like Tesla has built a pretty good foundation for FSD. I am not suggesting Tesla start from scratch. On the contrary, I think Tesla should continue to build on that vision-only foundation.

But here are 3 things that I think Tesla should do in order to deploy a more reliable and more robust FSD system.

TL;DR: Tesla should copy Mobileye.

1) Crowdsourced maps
Tesla has a big fleet of vehicles on the road. It could leverage the vision system in every car to crowdsource detailed maps, similar to what Mobileye is doing. With such a large fleet, Tesla could map large areas quickly, probably every road in the US in a relatively short time. And with so many cars on the road, Tesla could also keep the maps up to date, since there would almost always be a Tesla somewhere re-checking them. A lot of the errors that FSD Beta makes seem to be due to poor map data; crowdsourcing could really help solve those issues, since a Tesla would likely be re-driving any given spot fairly regularly. Detailed maps would also make FSD more robust: with crowdsourcing, only the first car needs to drive a road mapless, and every car that encounters the road later has the benefit of the map as a prior. Detailed maps can also provide useful non-visual info, like slowing down for a bend you can't see because of obstructions, or a preferred traffic speed that differs from the posted speed limit.
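To make the idea concrete, here is a minimal sketch of how fleet observations could be aggregated into map priors. Everything here (class names, the three-report threshold, the segment IDs) is hypothetical; Mobileye's actual crowdsourced mapping product, REM, is of course far more sophisticated:

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch of crowdsourced mapping: each car uploads small
# observations (landmark type + position) keyed by road segment, and the
# server fuses repeated sightings into a map prior whose confidence grows
# as more cars confirm the same landmark.

@dataclass
class Observation:
    segment_id: str   # which stretch of road the car was on
    landmark: str     # e.g. "stop_sign", "lane_edge", "speed_limit_35"
    lat: float
    lon: float

class CrowdMap:
    def __init__(self, min_reports: int = 3):
        self.reports = defaultdict(list)  # (segment, landmark) -> [Observation]
        self.min_reports = min_reports    # sightings needed before we trust it

    def ingest(self, obs: Observation) -> None:
        self.reports[(obs.segment_id, obs.landmark)].append(obs)

    def prior_for(self, segment_id: str):
        """Return confirmed landmarks for a segment: position averaged
        across cars, only once enough independent reports agree."""
        confirmed = []
        for (seg, landmark), obs_list in self.reports.items():
            if seg == segment_id and len(obs_list) >= self.min_reports:
                confirmed.append({
                    "landmark": landmark,
                    "lat": mean(o.lat for o in obs_list),
                    "lon": mean(o.lon for o in obs_list),
                    "confidence": len(obs_list),
                })
        return confirmed

# Three different cars report the same stop sign on segment "elm_st_04":
m = CrowdMap()
for lat, lon in [(37.4401, -122.1602), (37.4402, -122.1601), (37.4401, -122.1603)]:
    m.ingest(Observation("elm_st_04", "stop_sign", lat, lon))
print(m.prior_for("elm_st_04"))  # the sign becomes a prior for later cars
```

The key property is exactly what I describe above: only the first few cars drive the road blind, and everyone after gets the confirmed landmarks as a prior.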

2) Driving Policy
Tesla has done a lot of work on perception, but one area where FSD Beta is very weak, IMO, is driving policy. For example, FSD Beta is poor at knowing when to change lanes in dense traffic to avoid missing an exit; it can wait too long and then lose its chance to merge. It can also be overly cautious at intersections when there is no traffic at all, too hesitant when pulling away from a stop sign, or too aggressive on unprotected left turns. These are issues a better driving policy would help with: it would improve the car's driving decisions and make for a safer and smoother ride. Mobileye has a formal safety policy, RSS (Responsibility-Sensitive Safety), that helps the car drive safely. So I think Tesla needs to focus more on driving policy; FSD Beta would benefit greatly from a more rigorous one.
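For reference, RSS is a published formal model (Shalev-Shwartz, Shammah and Shashua, 2017), and its core longitudinal rule is simple enough to sketch. The parameter values below are illustrative defaults I picked, not Mobileye's production settings:

```python
def rss_safe_following_distance(
    v_rear: float,              # rear (ego) speed, m/s
    v_front: float,             # lead vehicle speed, m/s
    rho: float = 0.5,           # ego response time, s
    a_accel_max: float = 2.0,   # worst-case ego acceleration during rho, m/s^2
    b_brake_min: float = 4.0,   # minimum braking ego is guaranteed to apply, m/s^2
    b_brake_max: float = 8.0,   # maximum braking the lead car might apply, m/s^2
) -> float:
    """Minimum safe gap per the RSS longitudinal rule: even if the lead car
    brakes as hard as possible while ego accelerates for rho seconds and then
    brakes only gently, the two must not collide."""
    v_after_response = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_response ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(0.0, d)

# At ~65 mph (29 m/s) behind a car doing the same speed:
print(f"{rss_safe_following_distance(29.0, 29.0):.1f} m")  # ~74.7 m
```

The appeal of a rule like this is that the car's assertiveness becomes a tunable, auditable parameter set instead of an opaque learned behavior.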

3) Sensor redundancy
I think Tesla is smart to focus on vision-only; it is important as a foundation for perception, and I think vision-only will work great for L2 "door to door". What I am proposing is that Tesla continue with vision-only for L2 but also work on a lidar-radar subsystem that could be added on top of the existing vision-only FSD system for extra reliability and redundancy, which could help get the system to "eyes off". This is essentially what Mobileye is doing, and I think it is smart. Vision-only is fine for L2, but having radar and lidar as a back-up is crucial for "eyes off", because you really need to be able to trust the system to be super reliable in all conditions, and vision-only cannot deliver that. With vision-only, if the cameras fail, the entire system fails or has to pull over. But with cameras, radar, and lidar, the system is less likely to fail when the cameras fail. I think having extra sensors as back-up will really help reach that extra reliability.
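A back-of-the-envelope way to see the redundancy argument. The failure rates below are invented for illustration, and real failures are not fully independent, so treat this as an upper bound on the gain:

```python
# Illustrative arithmetic only: the miss rates below are made up, and real
# sensor failures are correlated (heavy rain degrades several modalities at
# once), so the true improvement is smaller than this bound suggests.

p_miss_vision = 1e-4       # probability vision misses a given hazard
p_miss_radar_lidar = 1e-3  # probability the radar/lidar subsystem misses it

# If the subsystems fail independently, a hazard slips through only when
# BOTH miss it, so the probabilities multiply:
p_miss_combined = p_miss_vision * p_miss_radar_lidar
print(p_miss_combined)  # ~1e-7, three orders of magnitude better than vision alone
```

Even a heavily discounted version of that multiplication is the kind of margin "eyes off" needs.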


"Full Self Driving Tesla" by rulenumberone2 is licensed under CC BY 2.0.
 
I can't wait for radar and lidar to see color. The problem with them now is that if cameras fail, radar and lidar can't read signs, see red/yellow/green lights, see flashing emergency lights, etc.

Can FSD Beta drive on city streets without the cameras?
Musk has repeatedly stated that lidar and radar are not necessary for FSD.
Tesla Vision only uses cameras.
 
The problem with them now is that if cameras fail, radar and lidar can't read signs, see red/yellow/green lights, see flashing emergency lights, etc.

The idea of adding radar and lidar is to improve the overall MTBF of the system. There will still be cases, like catastrophic failures, where the car will need to pull over; if the front cameras all shut down, the car would likely need to pull over. Radar and lidar will not eliminate all perception failures, but they will reduce many of them, such as when vision misjudges an object's distance or velocity, or fails to detect an object because nothing like it was in the training set.
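As a rough illustration of that compensation, here is a hypothetical late-fusion cross-check. None of these names, fields, or rules come from any real stack; it is just a sketch of the "take the more conservative estimate" idea:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical late fusion: a vision track's distance and closing speed are
# sanity-checked against a matched radar/lidar return, so a single-modality
# error does not propagate straight into the planner.

@dataclass
class Track:
    distance_m: float
    closing_mps: float  # positive = approaching

def fuse(vision: Optional[Track], ranging: Optional[Track]) -> Optional[Track]:
    """Prefer agreement; on disagreement, take the more conservative
    (closer / faster-closing) estimate rather than trusting vision alone."""
    if vision is None:
        return ranging  # vision missed it (e.g. too dark): radar/lidar still brakes
    if ranging is None:
        return vision   # ranging missed it (e.g. clutter filtered out)
    return Track(
        distance_m=min(vision.distance_m, ranging.distance_m),
        closing_mps=max(vision.closing_mps, ranging.closing_mps),
    )

# Vision underestimates closing speed; radar's Doppler measurement wins:
print(fuse(Track(40.0, 2.0), Track(38.5, 9.0)))
# -> Track(distance_m=38.5, closing_mps=9.0)
```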
 
Nine years after the first introduction of Autopilot in 2014, Autosteer can still abruptly veer off course.

Unintentional deceleration is still problematic. It can be caused by something as simple as a mirage on a hot, straight, empty highway.

What needs to be done for Tesla to pass the collision-avoidance test from Dan O'Dowd?
 
I can't wait for radar and lidar to see color. The problem with them now is that if cameras fail, radar and lidar can't read signs, see red/yellow/green lights, see flashing emergency lights, etc.

Can FSD Beta drive on city streets without the cameras?
You are anthropomorphizing color: it is our brains assigning a "look" to a certain narrow band of electromagnetic wavelengths. For a computer there is no such thing as color perception, only wavelength measurements. Radar operates at a much longer wavelength, so signs designed to reflect the shorter wavelengths that fall within human vision can't be "read" by radar, nor could any emergency vehicle lights be "seen".
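The wavelength gap is easy to quantify (standard physics, nothing vendor-specific here):

```python
# Wavelength = speed of light / frequency. Automotive radar operates around
# 77 GHz; visible light is roughly 430-750 THz. The ~4 mm radar wavelength
# is about 10,000x longer than visible light, which is why radar returns
# carry geometry and velocity but nothing resembling "red" or "green".
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1000  # millimetres

print(f"77 GHz radar: {wavelength_mm(77e9):.2f} mm")                  # ~3.89 mm
print(f"green light (~545 THz): {wavelength_mm(545e12) * 1e6:.0f} nm")  # ~550 nm
```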


 
The idea of adding radar and lidar is to improve the overall MTBF of the system. There will still be cases, like catastrophic failures, where the car will need to pull over; if the front cameras all shut down, the car would likely need to pull over. Radar and lidar will not eliminate all perception failures, but they will reduce many of them, such as when vision misjudges an object's distance or velocity, or fails to detect an object because nothing like it was in the training set.
I totally agree. I was reacting to your comment about redundancy. Without cameras, the system can't drive at all:

With vision-only, if the cameras fail, the entire system fails or has to pull over. But with cameras, radar, and lidar, the system is less likely to fail when the cameras fail. I think having extra sensors as back-up will really help reach that extra reliability.
 
I totally agree. I was reacting to your comment about redundancy. Without cameras, the system can't drive at all:

You could have smart infrastructure where traffic lights and road signs communicate info to the car by radio signal; a camera-less system could drive that way. Or you could have a camera-less car that only drives in areas without traffic lights or signs, but that would not be practical. In practice, you would never deploy an autonomous car without cameras, since you would want it to visually detect traffic lights and signs.

I guess I should clarify. My point was that with vision-only, the system is entirely dependent on vision being accurate. If vision makes any type of mistake, there is nothing to compensate for it, and if that mistake is safety-critical, the car could crash. But with cameras+lidar+radar, you have extra sensor types that can compensate for many vision mistakes. For example, maybe vision did not detect an obstacle because it was too dark, but radar and lidar detected it, so the car did not crash. By adding radar and lidar, you can reduce many failures and crashes. In fact, one reason many newer consumer cars have a front lidar or front radar is that they are very good at detecting obstacles and therefore reducing collisions.
 
You could have smart infrastructure where traffic lights and road signs communicate info to the car by radio signal; a camera-less system could drive that way. Or you could have a camera-less car that only drives in areas without traffic lights or signs, but that would not be practical. In practice, you would never deploy an autonomous car without cameras, since you would want it to visually detect traffic lights and signs.

I guess I should clarify. My point was that with vision-only, the system is entirely dependent on vision being accurate. If vision makes any type of mistake, there is nothing to compensate for it, and if that mistake is safety-critical, the car could crash. But with cameras+lidar+radar, you have extra sensor types that can compensate for many vision mistakes. For example, maybe vision did not detect an obstacle because it was too dark, but radar and lidar detected it, so the car did not crash. By adding radar and lidar, you can reduce many failures and crashes. In fact, one reason many newer consumer cars have a front lidar or front radar is that they are very good at detecting obstacles and therefore reducing collisions.
Elon said the reason they dropped radar was that it was much less accurate and kept confusing the vision system.
 
Elon said the reason they dropped radar was that it was much less accurate and kept confusing the vision system.

Yes, but that was only because the radar Tesla was using was low-resolution. A poor-quality radar will be less accurate and confuse vision. AV companies use much higher-grade imaging radar that is very accurate and does not confuse the vision system.
 
Yes, but that was only because the radar Tesla was using was low-resolution. A poor-quality radar will be less accurate and confuse vision. AV companies use much higher-grade imaging radar that is very accurate and does not confuse the vision system.
Elon also said this before they dropped radar. He said HD radar would be beneficial, back in 2019 or 2020.
 
My 2014 Tesla Model S had Mobileye. No false alarms, and the car drove like it was on rails. Some features were actually better than they are today, nine years of Tesla engineers dicking around with FSD later. It was amazing. Musk got into a spat with the Mobileye president, and that was the end of that party. We all lost. I don't expect to see anything worthwhile for at least another five years. Those who pay for it, and even compete to get it, must enjoy throwing away money and living in a dream.
 
My 2014 Tesla Model S had Mobileye. No false alarms, and the car drove like it was on rails. Some features were actually better than they are today, nine years of Tesla engineers dicking around with FSD later. It was amazing. Musk got into a spat with the Mobileye president, and that was the end of that party. We all lost. I don't expect to see anything worthwhile for at least another five years. Those who pay for it, and even compete to get it, must enjoy throwing away money and living in a dream.
Same here. My 2015 AP was excellent.
My 2022 AP is useless.
 