The only distinction between L2/L3/L4 is what the company claims it is, just like I said how many posts ago? This whole thing is you blindly agreeing with Waymo / Cruise over terms, then turning around and saying those terms don't apply to Tesla.
I never said those terms don't apply to Tesla. L2/L3/L4 have clear engineering definitions in SAE J3016; they are not whatever the company claims they are. I am not blindly agreeing with Waymo or Cruise, I am using the official industry definitions of the levels. Waymo is not L4 because I say so or because Waymo says so; it is L4 because the system meets the technical, engineering definition of L4. And Tesla is not L2 because I say so, but because FSD Beta meets the technical, engineering definition of L2.
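To make that concrete, here is a toy Python sketch of the classification logic. This is my own simplification of J3016 (the field names are mine, and I'm omitting L3, where the human is a fallback on request rather than a constant supervisor), but it shows the point: the level follows from who does what, not from marketing.

```python
from dataclasses import dataclass

@dataclass
class DrivingSystem:
    # Does the system perform the full dynamic driving task
    # (steering AND acceleration/braking) when engaged?
    system_performs_full_ddt: bool
    # Must a human supervise at all times and be ready to intervene?
    human_must_supervise: bool
    # Is operation limited to a defined operational design domain (ODD)?
    limited_odd: bool

def sae_level(s: DrivingSystem) -> str:
    """Simplified J3016 classification (L3 omitted for brevity).
    The level depends on the engineering design, not on claims."""
    if not s.system_performs_full_ddt:
        return "L0-L2 assistance: human does part of the driving"
    if s.human_must_supervise:
        # System handles steering and speed, but the human is the
        # fallback at all times -> L2, whatever the brand name says.
        return "L2: human must supervise and take over at any moment"
    if s.limited_odd:
        return "L4: full autonomy within a defined ODD, no human fallback"
    return "L5: full autonomy everywhere"

# FSD Beta: performs the driving task, but the human must supervise -> L2.
print(sae_level(DrivingSystem(True, True, True)))
# Waymo's robotaxi service: no human fallback, but geofenced -> L4.
print(sae_level(DrivingSystem(True, False, True)))
```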
If FSD, instead of disengaging when confused, simply showed 3 options on the screen and did nothing until you selected one of them as the "appropriate route," would that be L4, since you never disengaged it and the driver never had to touch the steering wheel or pedals? No. But that's what you're claiming is okay for Waymo/Cruise.
You are twisting my words. Yes, Tesla would be L4 in that scenario. If the system performs the entire dynamic driving task and the human only provides routing guidance, it is L4. That is true for Tesla or Waymo. If FSD performed all driving tasks, the human never touched the controls, and it offered 3 routing options, then yes, it would be L4. But that is not how FSD Beta works right now. FSD Beta requires the human to supervise at all times and sometimes take over the driving task, and having the human as the fallback is exactly what makes a system L2 rather than L4.
Hey look, conflation again! How fun.
Waymo/Cruise were both designed to never be disengaged; they instead stall and ping for someone to choose for them... that's completely different from a disengagement, though, so it doesn't count as one for the reports. That's why all the others look so bad, but Waymo and Cruise are the golden children who have solved autonomy. Hey look, I figured out how to make disengagement-free robotaxis: just never disengage them.
What?! I never said that Waymo has solved autonomy, only that I believe they are ahead. Being ahead and solving autonomy are two different things. I have acknowledged that Waymo still has issues to solve.
If the system stalls and pings for help, that can be very bad. Just look at Cruise. So yes, a robotaxi that never disengages can still be bad autonomy. Just because it never disengages does not mean autonomy is solved.
Basically, you can have two types of autonomy:
1) autonomy with a safety driver who disengages the system when there is a problem.
2) driverless autonomy that uses remote assistance when there is a problem.
Both can be bad autonomy. It depends on what the problems are and how often they happen. You could have an autonomous car with a safety driver that has only 1 disengagement every 1M miles; I would say that is better than a driverless robotaxi that requires remote assistance every mile. You could also have an autonomous car with a safety driver that requires a disengagement every mile; I would say that is inferior to a driverless robotaxi that only requires remote assistance every 1M miles.
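As a back-of-the-envelope illustration, using the hypothetical numbers above (not real data from either company):

```python
def miles_per_intervention(miles_driven: float, interventions: int) -> float:
    """Higher is better: how far the system goes between failures."""
    return miles_driven / max(interventions, 1)

# Hypothetical fleets from the examples above, not real data:
safety_driver_car = miles_per_intervention(1_000_000, 1)          # 1 disengagement per 1M mi
driverless_taxi   = miles_per_intervention(1_000_000, 1_000_000)  # 1 remote assist per mile

print(f"Safety-driver car:   {safety_driver_car:,.0f} mi per intervention")
print(f"Driverless robotaxi: {driverless_taxi:,.0f} mi per intervention")
# The safety-driver car wins here despite never being "driverless":
# the rate of failures matters more than the label.
```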
We also need to understand that safety drivers and remote assistance are usually used for different types of interventions. If the autonomous driving has safety-critical failures, like drifting into the oncoming lane or heading toward a collision, you probably want a safety driver in the car who can intervene in a split second to prevent an accident. But if the autonomous driving is safe enough, and its failures are things like routing mistakes or "stalls" that you deem not a collision risk, you can use remote assistance, since the operator can guide the car without needing to react in a split second.

That is why driverless with remote assistance is generally seen as "better": it usually indicates the company has more confidence in safety, and that the remaining issues are not safety-critical. But that is not a hard rule. As we saw with Cruise, driverless with remote assistance can still be bad and cause safety concerns. Driverless is not automatically better than a safety driver.
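A toy model of that split (the categories are my own, not any company's actual triage policy):

```python
def intervention_channel(failure: str) -> str:
    """Route a failure type to the right kind of human help.
    Toy categories; real systems have far finer-grained triage."""
    SAFETY_CRITICAL = {"wrong-way driving", "imminent collision", "red light run"}
    NON_CRITICAL = {"bad routing", "stalled at intersection", "blocked lane"}

    if failure in SAFETY_CRITICAL:
        # Needs split-second physical intervention -> in-car safety driver.
        return "safety driver takes over immediately"
    if failure in NON_CRITICAL:
        # Car is stopped or otherwise safe; seconds of latency are fine.
        return "remote assistance gives guidance"
    return "unknown failure type: treat as safety-critical"

print(intervention_channel("stalled at intersection"))
print(intervention_channel("imminent collision"))
```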
Ultimately, what really matters is how often the autonomous driving fails and what type of failure it is, for a given operational design domain (ODD). That is what determines how good the autonomous driving is in that ODD. Companies assess the safety and reliability of their systems and decide whether removing the safety driver is an acceptable risk for a given ODD.