Autonomous Car Progress

Yes, I saw that. I do like that Wayve has spelled out specific product types that they plan to sell to OEMs. It is a positive step towards commercialization.

And yes, I think they aim to compete with Mobileye. It seems they plan to offer a very similar product and they plan to sell to OEMs like Mobileye. It will be interesting to see if Wayve's e2e with no HD maps approach allows them to train the software faster and overtake Mobileye. Obviously, Mobileye has a huge head start in terms of number of OEM partners.

But considering that Wayve's e2e approach is very similar to Tesla's, I think they are indirectly competing with Tesla as well. After all, if Elon is right that Tesla wants to license FSD to OEMs, then Tesla will also be competing for the same OEMs that might look to Wayve. But one key difference with Tesla is that Wayve is planning to add radar for a L3 product. My guess is that they might add lidar too for a L4 product. I think one of their slides shows cameras, lidar and radar as possible sensors for their e2e. So that puts Wayve closer to Mobileye IMO, as Wayve is also taking a vision-first approach but adding radar/lidar later for extra reliability, unlike Tesla, which is still insisting it can commercialize L4 with vision only. Personally, I think Wayve could have the edge here over Tesla because I think OEMs will like the extra sensor redundancy.
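To illustrate what "extra sensor redundancy" buys you in practice, here is a minimal sketch of a cross-check between independent camera and radar detections of an obstacle ahead. Everything here (the function name, the thresholds, the idea of degrading to a driver takeover request on disagreement) is a hypothetical illustration of the redundancy argument, not anyone's actual stack.

```python
from typing import Optional

def obstacle_decision(cam_range_m: Optional[float],
                      radar_range_m: Optional[float],
                      agree_tol_m: float = 5.0) -> str:
    """Cross-check two independent sensors that should see the same obstacle.

    Returns a (hypothetical) system action:
      - "brake"    when both sensors agree an obstacle is there
      - "takeover" when the sensors disagree, so a human should supervise
      - "cruise"   when neither sensor reports an obstacle
    """
    if cam_range_m is None and radar_range_m is None:
        return "cruise"
    if cam_range_m is not None and radar_range_m is not None:
        if abs(cam_range_m - radar_range_m) <= agree_tol_m:
            return "brake"      # both agree: act on it
        return "takeover"       # both see something but disagree on where it is
    return "takeover"           # only one sensor sees it: don't trust it blindly

# Camera misses a stopped truck (e.g. fooled by an image on its side),
# but radar still returns something metallic at 40 m:
print(obstacle_decision(cam_range_m=None, radar_range_m=40.0))  # "takeover"
# Both sensors see a stopped car at roughly the same range:
print(obstacle_decision(45.0, 44.0))                            # "brake"
```

With vision only there is nothing independent to cross-check against, which is part of why that redundancy matters to OEMs.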

Personally, I don't think any OEM is going to go for Tesla's FSD. I think it is a matter of liability. Tesla's ODD is too big so the risk is too high. OEMs are not going to want that level of liability with a vision-only system in such a wide ODD where anything can happen. I think Tesla would be better off focusing on a more clearly defined product like say "L3 hands-free highway". Tesla has a good foundation in FSD. So take FSD, add radar, limit the ODD, validate safety, add better driver monitoring, and Tesla could sell something to OEMs that I think they would go for.
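To make the "limit the ODD" idea a bit more concrete, here is a minimal sketch of what an ODD gate for a hypothetical "L3 hands-free highway" feature might look like. The condition names, thresholds, and the `OddState` structure are all made up for illustration; the point is just that the feature is only offered when the vehicle is inside a narrowly defined domain, and falls back to supervised driving otherwise.

```python
from dataclasses import dataclass

@dataclass
class OddState:
    # Hypothetical inputs an L3 highway feature might gate on.
    on_divided_highway: bool
    speed_kph: float
    weather_ok: bool            # e.g. no heavy rain or fog reported by sensors
    driver_monitoring_ok: bool  # driver present and able to take over

def l3_highway_available(s: OddState, max_speed_kph: float = 130.0) -> bool:
    """Offer the hands-free highway feature only when every ODD condition
    is satisfied; otherwise the driver keeps supervising."""
    return (
        s.on_divided_highway
        and s.speed_kph <= max_speed_kph
        and s.weather_ok
        and s.driver_monitoring_ok
    )

# Highway, clear weather, 110 km/h -> feature offered.
print(l3_highway_available(OddState(True, 110.0, True, True)))   # True
# Same drive but heavy rain -> feature withheld.
print(l3_highway_available(OddState(True, 110.0, False, True)))  # False
```

The narrower the gate, the easier the product is to validate and the less liability the OEM has to take on, which is the argument above.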

I think Wayve's 2 product categories make sense. We know the e2e approach is capable of self-driving but requires supervision. So it makes perfect sense to me to package a L2+ product where the car can self-drive on, say, highways but with supervision. Yes, I think the L4 product is aspirational at this point. Wayve hopes to get their tech good enough at some point to remove driver supervision. Some type of L4 in a geofence, where they can validate safety, makes sense to me as a future goal.
I don't count Tesla as a player as of today. No one will do business with Elon, tbh, unless they can prove it works beforehand.

Mike Ramsey, VP of automotive, transportation and cross-manufacturing at Gartner, said pretty much the same thing here recently:
 
I don't count Tesla as a player as of today. No one will do business with Elon, tbh, unless they can prove it works beforehand.

Yes, that is why I said that no OEM will license current FSD because liability is too high. OEMs need assurances of safety. They are not going to license a vision-only system just on Elon's word that it is safer than humans.
 

I wonder why he thinks that cameras and radar will be sufficient in the future if they're not now. Is he expecting an improvement in the software to process camera inputs, an improvement in the quality of radar hardware, or some sort of improvement in sensor fusion?

That would also be a massive blow to innovation if regulators require a certain piece of hardware.
 
I wonder why he thinks that cameras and radar will be sufficient in the future if they're not now. Is he expecting an improvement in the software to process camera inputs, an improvement in the quality of radar hardware, or some sort of improvement in sensor fusion?

That would also be a massive blow to innovation if regulators require a certain piece of hardware.
Progress in computer vision, basically, and in sensor tech in general.

I personally think it will be a while, if ever. Too many weird failure modes (an image of a road on the side of a truck, etc.). I think lidar will drop in price faster than computer vision will progress, now that more OEMs (especially in China) are putting it in their models.

 
I wonder why he thinks that cameras and radar will be sufficient in the future if they're not now. Is he expecting an improvement in the software to process camera inputs, an improvement in the quality of radar hardware, or some sort of improvement in sensor fusion?

Yes, I think he is assuming that there will be improvements to the software, both for camera vision and radar, that will make camera + radar perception good enough to support L3+ autonomous driving. And it is a logical assumption, because the software keeps improving. We have already seen massive improvements in computer vision and radar in recent years.

Years ago, computer vision could only do basic object detection and required stereo vision to do any kind of distance measurement. Now a single camera can extract all the information needed for driving (occupancy networks, road geometry, lanes, road signs, hand gestures, traffic lights, curbs, crosswalks, pedestrians, cyclists, vehicles, trucks, etc.) as well as measure distance and velocity with decent accuracy.

Likewise, we have seen huge improvements in radar. Radar used to be very low resolution. It could be used for basic cruise control to maintain a safe distance from a lead car, but that was about it. And low-res radar returns from stationary objects are typically filtered out as clutter, which is why such systems can collide with stopped vehicles on the road. Now we have imaging radar that is very high resolution and can accurately measure the position and velocity of both stopped and moving objects, and separate objects close to each other, like distinguishing a pedestrian standing next to a stop sign.

It stands to reason that, with camera vision that can handle perception on its own and high-res radar that can accurately detect and measure the position and velocity of static and moving objects in great detail, further improvements to reliability could make camera + radar sufficient for L3+.
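As a toy illustration of why camera + imaging radar is appealing, here is a minimal sketch of one naive fusion step, assuming the camera gives a noisy monocular range estimate and the radar gives a much more precise one: a simple inverse-variance weighting combines the two. The numbers, the variances, and the `fuse_range` helper are hypothetical; real stacks do full multi-object tracking and association, not a single scalar blend.

```python
def fuse_range(cam_range_m: float, cam_var: float,
               radar_range_m: float, radar_var: float) -> float:
    """Inverse-variance weighted fusion of two independent range estimates.
    The lower-variance sensor (typically radar for range) dominates."""
    w_cam = 1.0 / cam_var
    w_radar = 1.0 / radar_var
    return (w_cam * cam_range_m + w_radar * radar_range_m) / (w_cam + w_radar)

# Hypothetical measurements of the same stopped vehicle ahead:
camera_estimate = 48.0   # metres, from monocular depth (noisier, var ~ 9 m^2)
radar_estimate = 45.2    # metres, from imaging radar (var ~ 0.25 m^2)

fused = fuse_range(camera_estimate, 9.0, radar_estimate, 0.25)
print(f"fused range: {fused:.1f} m")  # ~45.3 m, dominated by the radar
```

The point of the sketch is just the redundancy argument: either sensor alone gives you a range, but the combination is both more accurate and more robust to a single-sensor failure mode.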

That would also be a massive blow to innovation if regulators require a certain piece of hardware.

Yeah, I don't think regulators should mandate a specific tech. Regulations should be tech-agnostic: set safety standards and let manufacturers use whatever tech they want as long as they can meet those standards.