Wiki Consumer AV - Status Tracking Thread

Is this highway-only or urban too, i.e. is this like FSD or NOA?

Point-to-point automated navigation & adaptive cruise control
NZP is now rolling out to city streets, so it is urban too: point-to-point automated navigation on both highways and city streets, like FSD Beta.
no urban yet, still just point-to-point highway but with new enhanced LCC for city streets.
 

Thanks for the correction. The promo video you posted showed point-to-point nav on city streets, and the announcement from Zeekr said NZP (which is point-to-point) was rolling out to city streets, so I assumed it was point-to-point nav on city streets.
 
@EVNow, according to this tweet by Mobileye, "Urban NZP" is undergoing testing and is planned to be released as a beta in the second quarter of the year. "Urban NZP" is different from LCC+. So @Bladerskb is correct: LCC+ is available now, while "Urban NZP" is being tested and will go beta later this year.

 
VW is deepening its collaboration with Mobileye. Audi, Bentley, Lamborghini and Porsche will get Mobileye tech:

The Volkswagen Group is welcoming further strategic collaboration and significantly accelerating its development efforts in the field of automated and autonomous driving. Now, Volkswagen is intensifying its partnership with Mobileye in the domain of automated driving. Together, the two companies will bring new automated driving functions to series production. Mobileye is to provide technologies for partially and highly automated driving based on its Mobileye SuperVision and Mobileye Chauffeur platforms.
In the future, the Volkswagen Group’s Audi, Bentley, Lamborghini and Porsche brands will use these technologies to rapidly introduce new premium-oriented driving functions to their model portfolios across powertrain types. These include advanced assistance systems for highway and urban driving, such as automated overtaking on multilane highways in permitted areas and conditions, as well as automatic stopping at red lights and stop signs, and support in intersections and roundabouts. In addition, Mobileye is set to supply further technology components for automated driving to Volkswagen Commercial Vehicles. In the long term, the Volkswagen Group aims to rely on its own complete in-house system: Partnerships with Bosch and Qualcomm, as well as with Horizon Robotics in China, will be continued with a focus. All driver assistance systems are to be based on the software architectures developed by Volkswagen’s Cariad company.

 
I'm surprised that US delivery of the Polestar 4 is happening so soon (this year), but just as I suspected, the Polestar 4 configurator, which is now open for the US, doesn't include ramp-to-ramp highway or urban features, just basic LCC/ACC with lane change.

Looks like it's going to take them a long time to get this activated, similar to how long it took them with Zeekr in China. Even the China version, after 3 months, only has LCC/ACC.

They are moving so slowly compared to Tesla that I now believe Tesla, even with compromised sensors in HW4, will get to limited-ODD L4 in geofenced areas before Mobileye, and that Mobileye will never reach L4 unless they adopt heavy ML-based prediction and an ML planner.

This isn't to say that Tesla is doing something amazing that other SDC companies aren't doing, but that Tesla will continue to adopt SOTA techniques as they become available, while Mobileye will still be using traditional architectures like CNNs, etc. As long as the SOTA breakthroughs keep happening (outside of Tesla), Tesla will continue to make use of them, to the point where it's enough to get them there. My date, however, hasn't changed: 2030 is the year camera-only L4 solutions become viable/possible. I made this statement back in 2020 and even earlier, and I still believe there are 6 more years of improvements to CV coming from academia and other companies (Google/DeepMind, Waymo, Facebook, Microsoft, etc.) that will make it possible.
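To make the CNN-versus-SOTA distinction concrete, here is a toy PyTorch sketch (the module names, sizes, and overall structure are my own illustrative choices, not Mobileye's or Tesla's actual stacks): the classic design runs a CNN backbone per camera and fuses the results with hand-written logic downstream, while a transformer-style design tokenizes all cameras and lets attention fuse them jointly.

```python
# Toy sketch only -- illustrating "classic CNN per camera" vs. "transformer
# fusing all cameras jointly". Names and sizes are mine, not either company's.
import torch
import torch.nn as nn

class CnnPerCamera(nn.Module):
    """Traditional style: one CNN backbone per camera, fusion hand-written elsewhere."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 16)  # e.g. per-camera detection features

    def forward(self, cams):  # cams: (num_cams, 3, H, W)
        # Each camera is processed independently of the others.
        return self.head(self.backbone(cams))

class TransformerFusion(nn.Module):
    """'SOTA style': tokenize every camera and let attention fuse them jointly."""
    def __init__(self, d=64):
        super().__init__()
        self.patchify = nn.Conv2d(3, d, kernel_size=16, stride=16)  # image -> tokens
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 16)

    def forward(self, cams):  # cams: (num_cams, 3, H, W)
        tokens = self.patchify(cams).flatten(2).transpose(1, 2)   # (cams, T, d)
        joint = tokens.reshape(1, -1, tokens.shape[-1])           # one cross-camera sequence
        return self.head(self.encoder(joint).mean(dim=1))         # fused scene feature

cams = torch.rand(6, 3, 128, 128)   # e.g. six surround cameras
per_cam = CnnPerCamera()(cams)      # (6, 16): independent per-camera outputs
fused = TransformerFusion()(cams)   # (1, 16): jointly attended scene output
```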

Polestar 4 config
 

I agree Mobileye is moving very slowly compared to Tesla. But I don't think you can blame it entirely on the lack of SOTA techniques. Yes, Tesla's use of SOTA techniques gives them an advantage. But the fact is that Mobileye could use the latest SOTA techniques today and they would still be slower than Tesla, IMO. A big reason is that Mobileye is dependent on OEMs, while Tesla controls the deployment cycle and is aggressive in releasing "beta" software.

Mobileye only sells the hardware and software to OEMs who then do their own customization and deploy the software on their own timeline. And many OEMs manufacture car models on a yearly basis, only release OTA updates on a quarterly basis, build their own UI, customize the ADAS via DXP, do their own validation which can take months, and generally shy away from risk. They deliberately release more basic ADAS features in limited ODD first and only release the more advanced ADAS on their premium models. And when Mobileye does improve their SV hardware or software, it has to go to the OEM and it is up to the OEM to install the new hardware and/or push the OTA software update to the cars on their timeline. All this adds up to a much slower, more cautious deployment. That is why I say that even if Mobileye did use SOTA techniques, they would still be slower than Tesla because any software updates would go to the OEMs first who then would push it to their cars on their timeline, after they do their own validation, customization etc... So, I don't think you can ignore the role of OEMs in Mobileye's slower approach. Mobileye could adopt all the SOTA techniques today and I bet you Zeekr and Polestar would still take a slow, incremental approach to deploying SV on city streets.

Case in point, Zeekr could have pushed SV on city streets to their entire fleet 2 years ago as beta if they had wanted to. But it is Zeekr in collaboration with Mobileye that chose the more standard, incremental approach. Same with Polestar. The fact is that Polestar could deploy SV on city streets now as beta if they wanted to. But I suspect Polestar and Mobileye will follow a similar timeline as Zeekr. That is because they believe in the more cautious, incremental approach that fits their business model.

Compare that to Tesla, which does everything in-house: they update their car models continuously, put the hardware in all the car models, train the neural networks, and are willing and able to push the latest software version as "beta" directly to their fleet within weeks. That is going to be much faster than the old legacy-OEM approach. So I think you could make the case that it is not so much that Mobileye is slow per se, but rather that Tesla is faster, because they have adopted a high-risk approach of deploying beta software as quickly as possible, emphasizing innovation and progress over safety.

In conclusion, I think you have to look at the approach itself and the business model, not just the use of SOTA techniques. Yes, the use of SOTA techniques will help Tesla achieve higher MTBF faster than Mobileye and help Tesla reach L4 on consumer cars first. But Mobileye has to deploy via OEMs whereas Tesla can deploy directly to consumer cars. The bottom line is that a cautious, incremental deployment through OEMs is going to be slower than a carmaker like Tesla who is willing and able to deploy beta software directly to their fleet. So I think Tesla's more aggressive deployment approach cannot be ignored. It is not just about Tesla using SOTA techniques and Mobileye sticking (so far) to older ML techniques.

My date, however, hasn't changed: 2030 is the year camera-only L4 solutions become viable/possible.

I am curious. In 6 years, when you think CV will be good enough for L4, what do you think AVs will look like? Do you think Waymo would switch to vision-only then? Or do you think Waymo and others will still include radar and lidar for the added redundancy/safety especially since in 6 years, radar and lidar will likely be a lot cheaper than they are now?

Thanks.
 
But that Tesla will continue to adopt SOTA techniques as they become available.
Yes - the biggest strength of Tesla (and in a way a LOT of Silicon Valley companies) is that they are agile and willing to dump the old and adopt the new. Tesla has already rewritten FSD a few times - how many companies will do that? Too many companies fall into the "sunk cost" trap...

My date however hasn't changed, 2030 is the year camera only L4 solutions would become viable/possible.
We are actually getting close. When I bought FSD in 2019, I commented that we should revisit this in 10 years and see where we are (or something like that).
 
Mobileye only sells the hardware and software to OEMs who then do their own customization and deploy the software on their own timeline. And many OEMs manufacture car models on a yearly basis, only release OTA updates on a quarterly basis, build their own UI, customize the ADAS via DXP, do their own validation which can take months, and generally shy away from risk.
Years ago at MSFT, we used to say the problem with Microsoft's mobile strategy was exactly this - too dependent on OEMs. I don't know how many remember, but Windows Mobile came much earlier than both iPhone and Android - but it went nowhere.
 
I don't think anyone will ever switch to vision-only for autonomy for cars. Why would they, when sensor cost is dropping like a stone?

Moore's Law kicks the sh!t out of "AI progress", so for every year that goes by, the argument for camera-only gets less and less interesting.
I remember seeing a list of lidar-equipped cars in China for MY 2022, and it was like a dozen cars.

I remember reading an article about an auto show in Asia in the last year where 60+ cars had lidar.

You're 120% correct. Sensors are getting cheaper and cheaper.

There are so many 10-year-old cars on the road today with 2 or 3 radar sensors in them, too. Drive past almost any three-generation-old Mercedes C-Class and you'll see the blind-spot light in the side mirror light up... That's 2 radars in the car right there. Possibly 3 if it came with adaptive cruise control.

Camera only is, like you said, getting less and less interesting. Sensor cost is not the bottleneck at all.

I'm not saying that camera-only shouldn't be pursued or studied, just that it's not exactly interesting, and it's certainly not what some people make it out to be: an insurmountable cost advantage. Just look at all the Audis in the last 5+ years that have shipped with a lidar in the grille that's basically not being used for much... yet Audi is still selling well, i.e. not priced beyond one's means, and Audi is still profitable?
 
Even vacuums have front/rear dual solid-state lidar...

 
I am curious. In 6 years, when you think CV will be good enough for L4, what do you think AVs will look like? Do you think Waymo would switch to vision-only then? Or do you think Waymo and others will still include radar and lidar for the added redundancy/safety especially since in 6 years, radar and lidar will likely be a lot cheaper than they are now?

Thanks.
No, because a combination of camera / ultra-res imaging radar / lidar would still be orders of magnitude better and safer, and have unlimited ODD at that point, while vision-only would be limited to sunny conditions, light rain, light fog, light snow, etc.
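As a back-of-the-envelope illustration of that "orders of magnitude" claim (the per-modality miss probabilities below are made up, and full independence between sensor failures is assumed, which is generous to fusion):

```python
# Back-of-the-envelope only: the per-modality miss probabilities are made up,
# and full independence between sensor failures is assumed -- in reality
# failures correlate (heavy rain hurts camera and lidar together).
p_miss = {"camera": 1e-3, "radar": 1e-3, "lidar": 1e-3}  # hypothetical, per event

p_all_miss = 1.0
for p in p_miss.values():
    p_all_miss *= p  # the fused stack misses only if every modality misses

print(f"camera-only miss rate: {p_miss['camera']:.0e}")  # 1e-03
print(f"fused miss rate:       {p_all_miss:.0e}")        # 1e-09
# Under these toy numbers fusion is six orders of magnitude better; real
# gains are smaller because failures correlate, but that is the intuition.
```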
 

Thanks. That is what I thought. So even if Tesla does achieve vision-only L4 in 6 years per your prediction, using SOTA techniques, it would still have a limited ODD and be orders of magnitude less safe than what Waymo will likely have in 6 years. Heck, in 6 years, Mobileye will likely adopt SOTA techniques too, and they are willing to use radar and lidar. I would bet that Mobileye will have some L4 as well in 6 years.

Honestly, I expect many companies to deploy L4 in 6 years, not all equal in safety or ODD of course, but still L4. Eventually, I think pretty much everyone will have some form of L4. So I wonder how much of a "first mover" advantage there really is. That's because any advantage from being first to L4 will probably only last a few years until others eventually catch up.
 
As long as the SOTA breakthroughs keep happening (outside of Tesla)

So I have a question about how Waymo is using SOTA that maybe you can shed some light on. Recently, in his fireside chat with Sebastian Thrun, Dolgov talked about the evolution of AI as it relates to Waymo. Here is the relevant quote:

"So we really leveraged that technology of transformers for behavior prediction, for decision making, for semantic understanding. And then since then, the models have been getting bigger. We've had big breakthroughs in GPT and Gemini and other LLMs, VLMs. So, this thing is kind of like the fundamental, the point I'm trying to make is that the fundamental evolution of the core basic technology is something that we've benefited in our domain, right? But in parallel, it affected language models and visual language models. Now, most recently, we've been doing work to kind of combine the two. And they're very nicely complementary. So, like this ML, the backbone of our driver, that has a very good understanding of the spatial reasoning, understanding of the task of driving, where all that work that we put in and evaluating and assuring the high performance of safety, it's very nicely complementary with the world knowledge and some of the common sense that you get from VLMs. So that's been the latest evolution of the system, it's kind of nearing the two and getting the best of both worlds."

Those are the parts that piqued my interest. He talks about combining LLMs and VLMs. Can you speculate on what Waymo is doing? Are they moving towards end-to-end? Are they looking to incorporate LLMs and VLMs into specific parts, like the ML-first planner?

I know he talks about MotionLM, which models behavior prediction as tokens, like LLMs, so that might be one example of how they are incorporating LLM techniques into the behavior-prediction stack. I know Anguelov recently gave some examples of how VLMs can understand a scene (you type a prompt and the AI explains what is in the image), so maybe they are working to incorporate that into the planner to give the Waymo Driver more common sense about what is happening around the car, like a construction zone. Anguelov also gave an example of their simulation where they can type a scenario and the AI generates the simulation from the prompt, so maybe they are incorporating LLMs into their simulation to make it easier to generate scenarios.
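To make the MotionLM idea concrete, here is a minimal sketch of "motion as language tokens" (the discretization, vocabulary, and model sizes are my own toy choices, not Waymo's actual design): per-step (dx, dy) displacements are quantized into a small vocabulary, and a causal transformer predicts the next motion token the same way an LLM predicts the next word.

```python
# Toy sketch of MotionLM-style "motion as language tokens" -- my own
# discretization and sizes, not Waymo's actual vocabulary or model.
import torch
import torch.nn as nn

N_BINS = 13                  # quantize each of dx, dy into 13 bins
VOCAB = N_BINS * N_BINS      # one token per (dx_bin, dy_bin) pair

def motion_to_tokens(deltas, max_step=3.0):
    """deltas: (T, 2) per-step displacements in meters -> (T,) token ids."""
    bins = ((deltas.clamp(-max_step, max_step) + max_step)
            / (2 * max_step) * (N_BINS - 1)).round().long()
    return bins[:, 0] * N_BINS + bins[:, 1]

class TinyMotionLM(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d, VOCAB)

    def forward(self, tokens):  # tokens: (B, T) int64
        T = tokens.shape[1]
        # Causal mask so each step only attends to the past, as in an LLM.
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.backbone(self.embed(tokens), mask=causal)
        return self.lm_head(h)  # (B, T, VOCAB): next-token logits

# Usage: tokenize an agent's observed motion, get a distribution over its
# next displacement token; multi-agent rollouts would sample step by step.
past = torch.tensor([[0.0, 1.0], [0.1, 1.2], [0.2, 1.3]])
tokens = motion_to_tokens(past).unsqueeze(0)   # (1, 3)
logits = TinyMotionLM()(tokens)
next_step_probs = logits[0, -1].softmax(-1)    # distribution over 169 tokens
```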

Thoughts?

Thanks