The current thinking by Elon & co is no.
I started this thread because I thought that even if LIDAR is not strictly needed, it might still be good to have for redundancy. But after watching the presentation, I think Elon and Karpathy made a compelling case for not using LIDAR.
I thought that was the least compelling part after Elon’s jokes. Why even discuss Lidar at all? Just show your greatness. Dissing on Lidar didn’t go down well in the media either. It was not believable. They should have just focused on what makes their solution great instead.
Sure, Mobileye and Waymo are ahead in terms of tech, but Tesla is catching up fast. Yesterday, Tesla proved that they have excellent hardware, very good camera vision, and fleet learning, and that they don't need LIDAR. And Tesla can deploy their software to more cars faster.
By the way, your signature is super annoying and obnoxious.
It troubles me that you feel you need to keep your location confidential.
It's troubling that you think location matters. Isn't Tesla trying to accomplish autonomy without regard to specific locations?
Level 5 (no steering wheel required) robotaxi.
To actually be Level 5, does the car have to handle things like a police officer coming to the window and instructing the car to turn around and drive the wrong way on the highway because the road is blocked ahead?
My perspective on this. When I did my master's thesis, we could choose between doing Lidar SLAM or camera SLAM. We chose Lidar because we felt it suited us better, compared to the other team, who were more suited to the camera project.
The camera team had it somewhat easier because they could pretty much just download ORBSLAM and have a fancy demo running without too much work. We as the Lidar team had to struggle with many of the steps ourselves, such as feature point extraction, which back then was far from trivial. But we had an easier time with particle filters etc. for positioning.
It seems that we have two fields converging:
- Probabilistic robotics, Sebastian Thrun et al.: particle filters, graphSLAM and classical hand-crafted tools, trying a little bit of machine learning
- Computer vision, Andrej Karpathy et al.: CNNs and other computer-science tools, trying a little bit of robotics
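For anyone who hasn't met the probabilistic-robotics side, here is a toy sketch of the particle filter idea in one dimension. This is a hypothetical minimal example, not code from any real robotics stack, and every number in it is made up for illustration: a robot moves right along a line and takes noisy range readings of its own position.

```python
import math
import random

random.seed(42)

MOVE, STEPS, N = 1.0, 10, 1000   # step size, number of steps, particle count
MOTION_NOISE, SENSE_NOISE = 0.1, 0.5

def likelihood(z, x, sigma=SENSE_NOISE):
    # Unnormalised Gaussian measurement likelihood of reading z at position x.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

true_x = 5.0
particles = [random.uniform(0.0, 20.0) for _ in range(N)]  # no idea where we are
for _ in range(STEPS):
    true_x += MOVE                                   # the robot actually moves
    z = true_x + random.gauss(0.0, SENSE_NOISE)      # noisy position reading
    # Predict: propagate each particle through the motion model.
    particles = [p + MOVE + random.gauss(0.0, MOTION_NOISE) for p in particles]
    # Update: weight particles by how well they explain the measurement.
    weights = [likelihood(z, p) for p in particles]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N   # should end up close to true_x = 15.0
```

The same predict/update/resample loop, with a Lidar scan matched against a map as the measurement, is the classical localization workhorse the Thrun school is built on.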
The probabilistic robotics guys love their Lidars: Lidar works in the same bird's-eye framework in which they see the world. The computer vision guys love their cameras: the input comes in a nice structured matrix, the same way they see the world.
We are now seeing deep learning produce great depth maps from camera images, and we are seeing classical point clouds built from camera images yield great object detections. The first runs great on GPUs/TPUs; the latter complicates the processing pipeline a lot. But the main takeaway is that the two fields are starting to overlap. A very interesting fusion of domains that will confuse a lot of people in both camps.
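To make the "point clouds from camera images" step concrete: once a network has produced a depth map, back-projecting it into 3-D points is just the pinhole camera model. A minimal sketch, with the intrinsics (fx, fy, cx, cy) made up for illustration; a real system would use calibrated values:

```python
import numpy as np

# Assumed pinhole intrinsics for a 640x480 image (illustrative, not calibrated).
fx = fy = 500.0
cx, cy = 320.0, 240.0

def depth_to_points(depth):
    """depth: (H, W) array of metric depths. Returns (H*W, 3) XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # back-project through the pinhole model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 m away fills the whole image.
cloud = depth_to_points(np.full((480, 640), 2.0))
```

The resulting cloud can then be fed to the same bird's-eye-view object detectors the Lidar people already have, which is exactly where the two domains meet.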
We are at a time when we have a lot more computer scientists coding than roboticists coding, but more roboticists building vehicles than computer scientists building vehicles. Cameras are cheaper, and they are passive sensors. Lidars are getting cheaper fast, but there will likely always be a difference of some orders of magnitude. Lidars also rely less on intelligence: if you don't get a reading in front of you, you can be pretty certain that there is free space in front of you. But with some clever software and a gigantic amount of data, the camera is catching up. Thus the price and power benefits start to favor the camera.
Imo at this point, cameras are easier to work with but hard to do well, while Lidars are hard to work with but easier to do well. I think Tesla's approach will turn out to be the right one, and I am very impressed by Elon's ability to come to this conclusion much earlier than most other experts. I was wrong on this.
Wow! Did you even watch the presentation? The opinions presented on LIDAR were in direct response to analysts' questions about LIDAR. They would have looked really foolish saying "We're not going to discuss LIDAR today."
Yes, including the part (on which I commented on the Autonomy Investor Day thread) where Karpathy went off-character and added a clearly agreed-upon diss on Lidar to his otherwise stellar presentation. It was a strawman and not believable — and neither were Elon’s answers and negative comments on Lidar.
This is my opinion; you have yours, and I respect that.
I don't know how Musk could answer the question about why they don't use LIDAR without explaining the (negative) reasons why they won't be using LIDAR!
There is nothing wrong with speaking negatively about the lack of abilities of a technology, it's not like LIDAR has feelings that Musk needs to protect! He was simply explaining why they don't use it and why they have no plans to use it in the future (at least not for autonomous driving).
It's not clear to me why you refuse to disclose your location. What could you possibly have to hide?
I think everyone chooses for themselves what they disclose online, and it seems like normal netiquette to respect that. Secondly, even when people do disclose a location, there is no way for regular readers to verify it anyway.
This ignorance is real for a lot of folks here.
You don't need lidar when you get almost the same results of a point cloud system using imaging.
1. You NEED imaging for autonomy.
2. A point cloud system can be obtained from vision, and then you have to ask why you need lidar...
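As a sketch of that vision-gives-you-a-point-cloud claim: with a calibrated stereo pair, metric depth falls straight out of pixel disparity via Z = f * B / d. The focal length and baseline below are assumed illustrative values, not any real camera's:

```python
# Assumed stereo rig parameters (illustrative only).
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # distance between the two cameras, in metres

def disparity_to_depth(disparity_px: float) -> float:
    """Convert a pixel disparity between the two views to metric depth (Z = f*B/d)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return FOCAL_PX * BASELINE_M / disparity_px

# A feature 42 px apart between the two views sits 2 m away with this rig.
depth_m = disparity_to_depth(42.0)
```

Do this for every matched pixel and you have a dense range image from cameras alone, which is exactly the kind of output people usually reach for lidar to get.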
We all know that Musk has been quite adamant that he is against LIDAR. And a few years back, I think it was somewhat understandable. Back then, LIDAR was clumsy and expensive. There was just no way that Tesla could afford to put LIDAR in every car they sell, not to mention the issue of ruining the aerodynamics and aesthetics of the cars with a LIDAR tower on the roof. So I think back then, it made sense for Tesla to go the camera vision only route. After all, if they could manage to achieve the same result with camera vision only for a fraction of the cost of LIDAR, why not?
But today, these problems with LIDAR are pretty much solved. LIDAR is cheaper and smaller. Tesla could integrate LIDAR into the car at a much more affordable cost, in a way that does not ruin the aerodynamics or the aesthetics. Plus, there is no question that even if Tesla does manage to achieve great things with camera vision alone, LIDAR would offer more redundancy and make true Full Self-Driving much more robust. In other words, even if camera vision works, why not have the extra redundancy of camera vision, radar, and LIDAR to give the car an even fuller picture of the environment and make FSD even better? There is no downside. So I am thinking that Tesla will eventually cave in a few years and add LIDAR to the FSD hardware.
Thoughts?