victorplus
Member
I bet anything that Tesla already has test vehicles out there running FSD that can meet or exceed what Mobileye has been showing off with their system in this video, but that they're simply waiting to "flip the switch" to release it to all cars with FSD once it's ready for prime time. I was losing faith in Tesla's ability to get FSD to work with current hardware until I saw this. After all, as humans we have 2 eyes, pointing in one direction (typically straight ahead), with no radar or ultrasonic sensors, and we generally do just fine. Why can't 8 cameras all around + forward long-range radar + 12 ultrasonic sensors do the same or better with excellent software/processing behind them? I'm optimistic now!
Mobileye is the most advanced on the path to vision-only self-driving, and Tesla is taking the same road (even if their HD mapping is less clear and less advanced, they should be able to get there). But if you follow Mobileye's development closely, they are also aiming for an independent lidar-only self-driving system running on their EyeQ5, to get the redundancy required for true Level 4/5.
Vision only can work very well, but whether it's good enough to cover all the very complex edge cases required to reach a high reliability level is less than certain. Humans don't have 8 cameras and radar, but they can understand context. A simple shadow can be very tricky even for the best NN vision system, which may not be sure whether there is actually something there or not.