
Waymo

Ok - let us just say levels are as useful in autonomy as leeches are in modern medicine ;)
Your lack of understanding of a subject does not inherently make it stupid. The levels are a useful taxonomy to describe the role of the driver/passenger. You might not personally like or understand it, as is evident, but it is what the entire industry is currently using to define their level of automation. In a similar vein, leeches are used in modern medicine post-surgery because they secrete compounds that prevent clotting in a localized area. Just because you don't understand something, as is evident here, does not make it stupid.
 
An autonomous car that stops a lot is still autonomous. But another criterion should be whether the autonomous car can traverse a given route within some percentage of the time that it takes a human driver to do the same. If a typical route from point A to point B takes one hour, when does the trip time become financially impractical for an autonomous car following the same route? One hour and ten minutes? 1:20? More?
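A rough back-of-the-envelope sketch of that threshold question (the 20% overhead cutoff and the numbers are made up purely for illustration):

```python
# Hypothetical sketch: when does a robotaxi's longer trip time become impractical?
# The 20% overhead threshold is an invented figure, not a real industry number.

def trip_is_practical(human_minutes: float, robotaxi_minutes: float,
                      max_overhead: float = 0.20) -> bool:
    """True if the robotaxi trip is within the allowed overhead vs. a human driver."""
    overhead = (robotaxi_minutes - human_minutes) / human_minutes
    return overhead <= max_overhead

# A one-hour human trip: 70 minutes passes a 20% bar, 80 minutes does not.
print(trip_is_practical(60, 70))  # True  (~17% slower)
print(trip_is_practical(60, 80))  # False (~33% slower)
```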
 
That's nonsense. Don't go by the stupid levels - use common sense.

If the car stops every 100 ft for 1 1/2 minutes, it's *way* less autonomous than a car that stops every 100k miles.
As Bitdepth correctly pointed out, if the car stops every 100 ft but is still in autonomous mode, then it is bad autonomy, but still autonomy. It is not less autonomy, it is just bad autonomy that would not be practical for a commercial product.
So what I'm hearing is

1. Tesla should just deploy FSD beta as a robotaxi today.
2. Make each car owner watch a video on how to appropriately respond to the car.
3. Decrease its appetite for anything risky and just have it ping the vehicle owner's phone whenever it's unsure, while not disabling FSD RoboTaxi.

Clearly that would just fall into L4 automation and be as good as Waymo or Cruise in the real world, right? ;)

I'd sit at home on my couch all day responding to pings from the car while making money instead of having an actual job. I mean, who wouldn't?
 
Apparently you hear without listening ;)
Nah, just trying to point out the absurdity of statements like "if the car stops every 100 ft but is still in autonomous mode, then it is bad autonomy, but still autonomy."

If that were true and what we want to judge by, Tesla could easily do exactly what I said. It would be a bad look for them perception-wise, but it would technically (according to some members here) be an L4 deployment of robotaxis.
 
Nah, just trying to point out the absurdity of statements like "if the car stops every 100 ft but is still in autonomous mode, then it is bad autonomy, but still autonomy."

If that were true and what we want to judge by, Tesla could easily do exactly what I said. It would be a bad look for them perception-wise, but it would technically (according to some members here) be an L4 deployment of robotaxis.
An autonomous car that stops every 100 ft might make for a great postal delivery vehicle.
 
Nah, just trying to point out the absurdity of statements like "if the car stops every 100 ft but is still in autonomous mode, then it is bad autonomy, but still autonomy."

The industry makes a distinction between the design of the system and the reliability of the system. If the system is designed to perform driving tasks itself without a human, then it is an autonomous driving system. This is in contrast to a non-autonomous driving system where a human performs all the dynamic driving tasks. And that is why we have the SAE levels: to clearly distinguish between systems that are designed to perform only some of the driving tasks (partial autonomy) and systems designed to perform all of the driving tasks in a given ODD (full autonomy). Whether it is reliable enough to be deployed to the public is a separate question. So yes, if the car has to stop a lot to figure things out, but the car is performing all dynamic driving tasks without a human, then it is autonomous, just bad autonomy.
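To make the design-intent point concrete, here is a minimal sketch (the names and structure are mine, not taken from the SAE J3016 text): the level describes what the system is designed to do in its ODD, while reliability is a separate measurement.

```python
# Minimal sketch of the "design intent vs. reliability" distinction.
# Class and field names are hypothetical, not SAE J3016's own terminology.
from dataclasses import dataclass
from enum import IntEnum

class SAELevel(IntEnum):
    L2 = 2  # driver must supervise and remains responsible for the driving task
    L3 = 3  # system drives within its ODD; driver is the fallback when requested
    L4 = 4  # system drives and handles its own fallback within its ODD
    L5 = 5  # L4 behavior with an unrestricted ODD

@dataclass
class DrivingSystem:
    design_level: SAELevel          # determined by design intent, not performance
    miles_per_intervention: float   # measured reliability; independent of the level

# Two systems designed as L4: one unreliable prototype, one mature deployment.
# Both are still L4 -- reliability does not change the classification.
prototype = DrivingSystem(SAELevel.L4, miles_per_intervention=0.5)
mature    = DrivingSystem(SAELevel.L4, miles_per_intervention=30_000)
assert prototype.design_level == mature.design_level
```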

In fact, we see plenty of autonomous driving systems that are not ready to be deployed to the public but they are still considered autonomous. Look at the CA DMV disengagement reports. There are plenty of companies who have registered L4 with the CA DMV but report terrible disengagement rates. They have horrible disengagement rates but the systems are still L4 since they are designed to perform all dynamic driving tasks in a limited ODD. They are just still in the testing phase of L4, not the deployment phase of L4. So it is not absurd: It is possible to have autonomous driving systems that are at various stages of development or deployment readiness. They are still autonomous driving systems.
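For reference, the metric those DMV reports boil down to is just autonomous miles per reported disengagement; a hypothetical helper with invented numbers:

```python
# Hypothetical helper for the kind of metric the CA DMV reports imply:
# autonomous miles driven divided by the number of reported disengagements.
def miles_per_disengagement(autonomous_miles: float, disengagements: int) -> float:
    if disengagements == 0:
        return float("inf")  # no disengagements reported in the period
    return autonomous_miles / disengagements

# Invented example numbers: a fleet registered as L4 can still post a poor rate.
print(miles_per_disengagement(5_000, 2_500))   # 2.0 miles per disengagement
print(miles_per_disengagement(1_000_000, 20))  # 50,000.0 miles per disengagement
```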

If that were true and what we want to judge by, Tesla could easily do exactly what I said. It would be a bad look for them perception-wise, but it would technically (according to some members here) be an L4 deployment of robotaxis.

If FSD beta were L4, then yes, Tesla could deploy it even if it was bad and it would still be L4. But it is only L2. So it is not capable of being a robotaxi.
 
Nah, just trying to point out the absurdity of statements like "if the car stops every 100 ft but is still in autonomous mode, then it is bad autonomy, but still autonomy."
The statement is not absurd. How good something is has no bearing on what level of automation it is assigned. It all boils down to design intent.
If that were true and what we want to judge by, Tesla could easily do exactly what I said. It would be a bad look for them perception-wise, but it would technically (according to some members here) be an L4 deployment of robotaxis.
Tesla released Summon software that can't navigate a parking lot at 5 mph without hitting a curb. Tesla released Autopilot, which has been responsible for several fatalities. Tesla released FSD beta, which does stupid sh*t like drive into oncoming lanes. Tesla is not afraid of releasing software that is not ready into consumer hands. The difference is, at L3 and above the liability falls on them, so they won't declare it as L3 or L4. They will release software into consumer hands and call it L2 because they do not have software capable of driving more than 100 miles without an incident and are not ready to take liability. So no, Tesla couldn't easily do exactly that. It's been 9 years now and several software rewrites; if it were that easy they would release the software and call it L4. Looking at Uber and Cruise should tell you that releasing software that is not ready at L4 is a recipe for disaster.
 
If FSD beta were L4, then yes, Tesla could deploy it even if it was bad and it would still be L4. But it is only L2. So it is not capable of being a robotaxi.
The only distinction between L2/L3/L4 is what the company claims it is, just like I said how many posts ago? This whole thing is you blindly agreeing with Waymo / Cruise over terms, then turning around and saying those terms aren't right against Tesla.

How about another example.

If FSD, instead of disengaging when confused, simply showed 3 options on the screen and did nothing until you selected one of them as the "appropriate route," would that be L4, since you never disengaged it and the driver didn't have to touch the steering wheel or pedals? No. But that's what you're claiming is okay for Waymo/Cruise.

In fact, we see plenty of autonomous driving systems that are not ready to be deployed to the public but they are still considered autonomous. Look at the CA DMV disengagement reports. There are plenty of companies who have registered L4 with the CA DMV but report terrible disengagement rates. They have horrible disengagement rates but the systems are still L4 since they are designed to perform all dynamic driving tasks in a limited ODD. They are just still in the testing phase of L4, not the deployment phase of L4. So it is not absurd: It is possible to have autonomous driving systems that are at various stages of development or deployment readiness. They are still autonomous driving systems.
Hey look, conflation again! How fun.

Waymo/Cruise were both designed to never be disengaged and would instead stall and ping for someone to choose for them... that's completely different from a disengagement, though, and therefore it doesn't count as one for the reports. That's why all those others are so bad, but Waymo and Cruise are the golden children who have solved autonomy. Hey look, I figured out how to make disengagement-free robotaxis: just never disengage it.



According to your own words, Omar's videos are examples of fully disengagement-free drives because he never needs to disengage FSD; who cares if it stops in the middle of an intersection or turns into oncoming traffic. It never needs to be disengaged, so it's clearly autonomous.
 
The only distinction between L2/L3/L4 is what the company claims it is, just like I said how many posts ago? This whole thing is you blindly agreeing with Waymo / Cruise over terms, then turning around and saying those terms aren't right against Tesla.

I never said those terms don't apply to Tesla. L2/L3/L4 have clear engineering definitions. They are not whatever the company claims they are. I am not blindly agreeing with Waymo or Cruise. I am using the official industry definitions of the levels. Waymo is not L4 because I say so or because Waymo says so; it is L4 because the system meets the technical, engineering definition of L4. And Tesla is not L2 because I say so, but because Tesla FSD beta meets the technical, engineering definition of L2.

If FSD, instead of disengaging when confused, simply showed 3 options on the screen and did nothing until you selected one of them as the "appropriate route," would that be L4, since you never disengaged it and the driver didn't have to touch the steering wheel or pedals? No. But that's what you're claiming is okay for Waymo/Cruise.

You are twisting my words. Yes, Tesla would be L4 in that scenario. If the system performs all dynamic driving tasks and the human only provides routing guidance, it is L4. That is true for Tesla or Waymo. If Tesla FSD performed all driving tasks and the human never touched the controls, and it gave 3 options for routing, yes, it would be L4. But that is not how FSD beta works right now. Tesla FSD beta does require the human to sometimes perform driving tasks.

Hey look, conflation again! How fun.

Waymo/Cruise were both designed to never be disengaged and would instead stall and ping for someone to choose for them... that's completely different from a disengagement, though, and therefore it doesn't count as one for the reports. That's why all those others are so bad, but Waymo and Cruise are the golden children who have solved autonomy. Hey look, I figured out how to make disengagement-free robotaxis: just never disengage it.

What?! I never said that Waymo has solved autonomy, only that I believe they are ahead. Being ahead and solving autonomy are two different things. I have acknowledged that Waymo still has issues to solve.

If the system stalls and pings for help, that can be very bad. Just look at Cruise. So yes, a robotaxi that never disengages can still be bad autonomy. Just because it never disengages, does not mean that it is solved autonomy.

Basically, you can have two types of autonomy:
1) autonomy with a safety driver that disengages when there is a problem.
2) autonomy that is driverless so it uses remote assistance when there is a problem.

Both can be bad autonomy. It depends on the problems and how often it happens. You could have an autonomous car that has a safety driver but only has 1 disengagement every 1M miles. I would say that it is better than a driverless robotaxi that requires remote assistance every 1 mile. You could also have an autonomous car with a safety driver that requires a disengagement every 1 mile. I would say that it is inferior to a driverless robotaxi that only requires remote assistance every 1M miles.
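Putting those side by side (these are just the made-up rates from the example above, not real data):

```python
# Sketch of the comparison above: what matters is the intervention rate,
# not whether the intervention is a disengagement or a remote assist.
# All numbers are the hypothetical ones from the example, not real data.
def interventions_per_million_miles(miles_between_interventions: float) -> float:
    return 1_000_000 / miles_between_interventions

supervised_good = interventions_per_million_miles(1_000_000)  # safety driver, 1 per 1M miles
driverless_bad  = interventions_per_million_miles(1)          # remote assist every mile
supervised_bad  = interventions_per_million_miles(1)          # disengagement every mile
driverless_good = interventions_per_million_miles(1_000_000)  # remote assist, 1 per 1M miles

print(supervised_good < driverless_bad)   # True: the supervised car wins this matchup
print(driverless_good < supervised_bad)   # True: the driverless car wins this one
```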

We also need to understand that safety drivers and remote assistance are usually used for different types of "interventions". If the autonomous driving has safety-critical issues like driving in the oncoming lane or crashing into another car, you probably want a safety driver who is in the car and can intervene in a split second to prevent an accident. But if the autonomous driving is safe enough but might have routing issues or might "stall" in ways you deem not a collision risk, you might just use remote assistance, since they can guide the car but don't need to intervene in a split second. That is why we generally see driverless with remote assistance as "better": it usually indicates the company has more confidence in safety and that the issues of the autonomous driving are not safety-critical. But that is not a hard rule. As we saw with Cruise, driverless with remote assistance can still be bad and cause safety concerns. Driverless is not necessarily better than a safety driver.
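As a toy decision rule for that last point (my own framing of the argument, not any company's actual policy):

```python
# Toy decision rule: safety-critical failure modes argue for an in-car safety driver;
# non-critical stalls or routing confusion can be handled by remote assistance.
# This is my own framing of the argument above, not any company's actual policy.
def fallback_strategy(failure_is_safety_critical: bool) -> str:
    if failure_is_safety_critical:
        return "in-car safety driver (split-second intervention needed)"
    return "remote assistance (guidance only, no split-second control)"

print(fallback_strategy(True))   # e.g. drifting toward an oncoming lane
print(fallback_strategy(False))  # e.g. stalled at a stop sign, unsure of the route
```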

Ultimately, what really matters is how often the autonomous driving has a failure and what type of failure it is, for a given ODD. That is what determines how good the autonomous driving is in that ODD. Companies will make determinations about the safety and reliability of the autonomous driving and decide if removing the safety driver is acceptable risk or not for a given ODD.
 

0:00 Start
0:06 Moving the car at pickup
0:35 Pull out
1:02 Long winded discussion about queueing system
2:56 Green light launch on 40 mph road
4:28 Awkward trajectory for lane split
7:41 Unprotected right turn
8:07 Yielding to reverser
8:30 Machine gun turn signal & indecisive route
9:06 Stuck at stop sign for unknown with remote assist
10:06 Unprotected left turn onto multilane road
11:21 Slowing for unknown
11:43 Unprotected left turn
15:16 Tracking a lot of pedestrians
16:10 Nudging for vehicle with open door
16:29 Yielding to multi-point turn vehicle
17:15 Nudging for parallel parking car
17:37 Nudging for pedestrian in street
21:04 Entering parking lot
25:40 Smart key battery low
26:51 Slowing for vehicle with lights on
30:26 Two aborted pullovers
31:37 Coco sidewalk delivery robot
31:49 Unprotected left turn violating oncoming’s right of way
32:08 Pull over
 

I think Brian Wilt makes a good point here. Waymo will likely improve safety over time. So as good as these numbers look, they are actually the worst Waymo will ever be, meaning that over time, Waymo will prevent even more accidents.

The wildest part about Waymo's safety statistics is that not only are they much safer than human drivers today, but that *this is the most dangerous they're ever going to be*.

 