Y'alls pretend like Tesla has no data advantage at all

Waymo's approach is basically the hydrogen of cars: dead end

I'm not sure I'd call it a "dead end". The Waymo approach is "not scalable" due to the expensive (and insufficient) LIDAR sensor suite, and dependence upon HD maps which will never be realtime (not to mention the dataset size problems). To drive from SFO to NYC you'd need a megabit SL connection. So I'd call Waymo "local use only" and "limited autonomy". Kinda like the city buses and trains we already have... :p

The ultimate limits of FSD, however, are much more like human drivers: weather and road conditions, natural disasters, etc., but without the fatigue and inattention issues which plague human drivers. I still like the combo of Human+AI in the future, and I'm glad I got my Tesla while they still have a steering wheel and brake pedal.

Some day, I do hope to be able to "nap-in-the-back" while my Model Y drives between Supercharger sites, but you can be quite certain that I'll check the news and weather before I hop out of the Driver's seat. Oh, and I want a wake-up from the AI if *stuff* happens... and they'll get better at that too, not just the driving. :D

Cheers!
 
According to federal stats, there are 4.2 million miles of roads in the US. FSD can drive on all of them. (I've skipped Canada.)

Waymo are geofenced to SF, Phoenix, LA and Austin. Let's be generous and assume they can drive anywhere in these cities. That's roughly 10,000 miles of roads that Waymo can drive.

So, FSD can drive on approx 400 times more roads than Waymo. Therefore FSD is 400x as good!! Of course, this is absurd, but it's no less absurd than your silly "Waymo is 10x or 2000x better" exclamations based on nothing but made-up nonsense.
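(Worked out: 4,200,000 miles ÷ ~10,000 miles ≈ 420, which rounds to the "approx 400x" above.)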
See stats and arguments below.
And anytime anyone backs up an argument just with "this is self evident" you know to examine the argument VERY carefully:
Feel free to examine the sh!t out of my responses.
-- Cameras, perception, object and gesture detection. These are simply means to an end. What counts isn't how many gizmos are gathering data or how many pixels you have. That's just specmanship. What counts is the end result. Can the car drive safely and predictably? Therefore this argument has zero relevance.
Okay, but they are an order of magnitude better in all or most areas (FoV, range, self-cleaning, higher resolution, better tolerance of cold/hot weather, etc.). They are clearly more expensive too, but it turns out you get better stuff if you pay for it.

Is it needed? Until Tesla or someone else can drive driverless with cameras only, I'd say yes to both the passive camera setup on Waymo and the added modalities. It's up to Tesla to prove that they can actually move the CV and ML fields forward by a lot if they are to succeed with that hardware setup.
-- Planner. Same argument as previous. I don't care if the planner uses goblins, as long as it works correctly.
But Tesla's doesn't have any correctness guarantees. It randomly drives into oncoming traffic. Reliability matters in unsupervised autonomy.
-- Safety record. Provide statistics: crashes categorized by severity, miles driven daily etc. You do realize that ANY Tesla crash that MIGHT involve FSD is jumped on by the press, right? Where are they? How many are there compared to FSD miles driven?
Waymo has millions of driverless miles, Tesla has zero. Again, self evident.
Also Waymo has actual statistical proof that it's safer than human in driverless mode:

https://arxiv.org/pdf/2309.01206.pdf

The latest FSDb release has 5 miles between disengagements, or 91 miles per critical disengagement, in a comparable ODD (city driving) according to the FSD tracker.

-- Actual performance. Waymo drives well in the small number of locations it can drive. Tesla drives pretty well anywhere, though not (yet) as well as Waymo. Not clear how you argue Waymo is "better" based on this (see above).
You have no idea where Waymo could drive if they operated without safety guarantees. Again, Tesla gives none. My guess is that Tesla will never give any form of guarantee on performance, and will stay at L2, for the currently available HW3/4. Also see above.

What do you think Waymo has been doing for the last 2-3 years? Twiddling their thumbs? No, they work hard on increasing the system's capability, reliability/MTBF and rider comfort/experience.
-- Waymo was driving 10-15 years ago. So what? I've been walking for decades, my son for 12 years, yet he's every bit as good at it as me (better, in some ways).

Waymo provide an interesting service, though their business model is risky (and Alphabet a brooding parent at best), and I've nothing against them or their approach, but jumping up and down claiming it's wonderful and FSD is garbage is just plain silly.
FSDb is at present a good L2 with an interesting ODD. From a robotaxi perspective, FSD is undeployable garbage and Waymo is deployed. Tesla's capability might change over the remaining years of this decade, but not realistically with v12 and not likely on existing hardware.

The head start clearly matters. Your analogy is a strawman, at best.
 
Waymo was driving 10-15 years ago. So what? I've been walking for decades, my son for 12 years, yet he's every bit as good at it as me (better, in some ways).

The head start clearly matters. Your analogy is a strawman, at best.
(I'll only reply to this point, as your other arguments simply rehash what I've already addressed.)

In fact, you have this particular argument backwards. As has been shown time and time again in the tech space, the early player(s) rarely if ever emerge as the dominant long-term leader. There are various reasons for this, including the drag that a large legacy code base causes ("we would LIKE to re-write it, but we don't have the budget/time/management will"), an entrenched technological approach that is politically difficult to change (HD maps and cost-prohibitive LIDAR), and the ability of newer entrants to bypass the mistakes and dead-ends of the earlier players. This is why dominant tech players rise and then fall as more nimble newcomers embrace new capabilities not available to the earlier generation of products.

Waymo in fact set out to do exactly what Tesla are now trying to do, but backed off (so they claim) when they became concerned about the risks of an L2/L3 system. The robotaxis were essentially forced on them because they had to show SOME real-world progress to their masters (Google, now Alphabet). The problem is, Waymo are now massively entrenched in HD maps and all that this implies, and I don't see any way they can easily carry over their core model to a non-mapped design without a virtual re-boot, something that, for the reasons I noted above, is unlikely to happen. So no, this is not a straw man argument.
 
I hate this wait so much.
How long have you been on FSD Beta, and are you an Early Access tester? Your join date is less than a year ago, which indicates you may not be an Early Access tester. If you have been on Early Access then you would know the drill. Hurry up and wait is what we do every time there is a new FSD Beta release. 🤪

Also if you are not an Early Access tester and Tesla goes back to that model for V12 (which it is starting to look like) then you will have an even longer wait. :oops:

 
According to federal stats, there are 4.2 million miles of roads in the US. FSD can drive on all of them. (I've skipped Canada.)

Again though- FSD can drive on zero of them.

It can assist a human driver on all of them.

The refusal to admit or understand the massive difference there is leading to a lot of pointless discussion.

FSDb still lacks an OEDR that would ever enable driverless operation- as Tesla themselves have admitted in the CA DMV stuff. (Sure, those docs are "old" now, but nobody has shown any change in that fact in actual on-car behavior- it still demonstrates a lack of a full OEDR constantly.)

FSDb is not an L4 system they're making safer

It's an L2 system that lacks fundamental things needed for >L2 operation.

MAYBE those things eventually get added in V12- or maybe they never do- but they aren't there now (and sure aren't there in anything pre 12).
 
FSDb still lacks an OEDR that would ever enable driverless operation- as Tesla themselves have admitted in the CA DMV stuff. (Sure, those docs are "old" now, but nobody has shown any change in that fact in actual on-car behavior- it still demonstrates a lack of a full OEDR constantly.)

FSDb is not an L4 system they're making safer

It's an L2 system that lacks fundamental things needed for >L2 operation.

MAYBE those things eventually get added in V12- or maybe they never do- but they aren't there now (and sure aren't there in anything pre 12).

This is where I think reliance on an email to the California DMV about Autosteer on City Streets is leading you to draw incorrect conclusions.

For V11 and prior, you would be correct that there are multiple objects and events that FSDb was not programmed to recognize and respond to. It does not respond to school buses, school zones, yield signs, etc., because it relied on explicit coding that would first allow the system to recognize these items, and second allow the system to respond appropriately.

But for V12+, there is no longer a distinction between the objects and events that FSDb can recognize and respond to, and the objects and events within the corpus of the training data. If there is sufficient training data covering a type of object or a type of event, V12 will eventually learn to correlate the sensor data encompassing those items with an appropriate response. The difficulty now is in collecting a broad enough set of training data such that the system is able to learn to generalize the broad categories of every conceivable object or event a vehicle could encounter on the road, and designing an efficient enough network that is able to condense that knowledge down into a size that can run with a reasonable framerate on the FSD computers.
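To make that concrete, here's a toy behavior-cloning sketch (PyTorch, made-up shapes and names, obviously not Tesla's actual unpublished stack): a network maps raw camera frames straight to control outputs and is trained only on what human drivers did in those frames.

```python
# Toy behavior-cloning sketch (PyTorch). Purely illustrative: made-up
# shapes and names, NOT Tesla's actual (unpublished) V12 architecture.
import torch
import torch.nn as nn

class ToyEndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Vision backbone: raw pixels in, learned features out.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head: features straight to controls (steering, accel/brake),
        # with no hand-coded object labels in between.
        self.head = nn.Linear(32, 2)

    def forward(self, frames):
        return self.head(self.encoder(frames))

model = ToyEndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in "training data": camera frames plus the controls a human
# applied at that moment. If enough clips show braking whenever a
# school-bus stop sign is out, that mapping can be learned without
# anyone coding a "school bus" class explicitly.
frames = torch.randn(8, 3, 64, 64)    # batch of fake camera frames
human_controls = torch.randn(8, 2)    # recorded steering / pedal values

optimizer.zero_grad()
loss = loss_fn(model(frames), human_controls)
loss.backward()
optimizer.step()
```

The point is just that nothing in this loop ever names a "school bus"; whatever correlations exist in the training clips are what the weights end up encoding.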
 
This is where I think reliance on an email to the California DMV about Autosteer on City Streets is leading you to draw incorrect conclusions.

For V11 and prior, you would be correct that there are multiple objects and events that FSDb was not programmed to recognize and respond to. It does not respond to school buses, school zones, yield signs, etc., because it relied on explicit coding that would first allow the system to recognize these items, and second allow the system to respond appropriately.

This is about half accurate---

Tesla has been using NNs for perception for YEARS. Karpathy and others have shown tons of videos and given presentations on how they've refined that over the years, but it's been the case since well BEFORE those DMV emails.

It was the planning and execution that was hard coded and is moved to NNs in V12.



So there's no reason that, if it couldn't recognize a school bus in V11, it should magically be able to in V12- it's using NNs in both cases to do so. There may be differences in how it responds to one--- but the fact they never added "stop for a school bus with its stop sign out" as a hard rule suggests perception was... less than ideal... at recognizing that, and again nothing has changed in that regard for V12 (other than arguably HW4, where the cameras are higher res).



But for V12+, there is no longer a distinction between the objects and events that FSDb can recognize and respond to, and the objects and events within the corpus of the training data. If there is sufficient training data covering a type of object or a type of event, V12 will eventually learn to correlate the sensor data encompassing those items with an appropriate response.

That is certainly the theory.

But we've already seen examples from V12 videos of it still not having a complete OEDR.

And it was also the theory a couple of years ago, when they did the "total rewrite" that changed the perception NNs from using still frames to using video, that they'd have perception 'solved' back then- and it still isn't.

As I suggest, it's possible they will be able to produce one eventually with V12. Or they might not, either due to HW limits or other issues.

But today the OEDR remains incomplete, so it's incapable of >L2 operation-- still exactly in line with what Tesla wrote in the DMV emails.
 
V12 is still teething, with many of the same FSDb issues along with some new ones - suboptimal camera locations, marginal computer hardware, increased demand on what was previously already-challenged training - and in general it is much less safe than an average attentive human driver. Few customers and no OEMs are interested in buying/licensing FSDb as is.

Waymo has done everything they wanted to do. The current design isn't intended for OEMs and they certainly wouldn't market it as such. But I wouldn't underestimate Waymo's ability to reduce their system cost and provide an OEM compatible L2 design.
 
This is about half accurate---

Tesla has been using NNs for perception for YEARS. Karpathy and others have shown tons of videos and given presentations on how they've refined that over the years, but it's been the case since well BEFORE those DMV emails.

It was the planning and execution that was hard coded and is moved to NNs in V12.



So there's no reason that, if it couldn't recognize a school bus in V11, it should magically be able to in V12- it's using NNs in both cases to do so. There may be differences in how it responds to one--- but the fact they never added "stop for a school bus with its stop sign out" as a hard rule suggests perception was... less than ideal... at recognizing that, and again nothing has changed in that regard for V12 (other than arguably HW4, where the cameras are higher res).

Even if Tesla is using the exact same perception architecture as V11, the critical change is that in V11, Tesla needed to collapse what is most likely a very high-dimensional perception output down into a human-understandable space (there is a car at X, Y) for the planning and execution code to act upon. They were essentially throwing away all the bits that they had not yet anticipated to be applicable to the hand-coded planning stack. I very much doubt that Tesla is still reducing the dimensionality of the perception module of the V12 stack in the context of an end-to-end network.
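As a toy illustration of that bottleneck point (hypothetical names and shapes, not anyone's real code): in a modular stack the hand-written planner only ever sees the small object list the perception features were collapsed into, while an end-to-end planner can consume the full feature vector.

```python
# Toy contrast between a modular stack and an end-to-end one (PyTorch).
# Hypothetical names and shapes; not Tesla's or anyone's real code.
import torch
import torch.nn as nn

# Shared perception network: raw frames -> a feature vector.
perception_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 16-dim features
)

def decode_to_object_list(features):
    # Stand-in for the hand-designed bottleneck: keep only a couple of
    # human-readable numbers (say, "nearest car at x, y") and throw the
    # rest of the feature vector away.
    return features[:, :2]

def hand_coded_planner(objects):
    # Stand-in for explicit planning rules that act on the object list.
    return torch.tanh(objects)

# Learned planner that consumes the *entire* feature vector instead.
planning_net = nn.Linear(16, 2)

frames = torch.randn(1, 3, 64, 64)

# "V11-style": perception collapsed to a tiny object list, rules act on it.
v11_controls = hand_coded_planner(decode_to_object_list(perception_net(frames)))

# "V12-style": the learned planner sees all 16 features, so nothing has
# to be discarded at a hand-designed interface.
v12_controls = planning_net(perception_net(frames))
```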

The example you dismiss as magical is absolutely what V12 is theoretically capable of. Where before, the perception stack would be responsible for identifying a school bus, identifying the stop-sign off the side of it, and identifying when that sign is active, in V12 it no longer needs any of that abstraction. As long as there are enough examples in the training data of the brake being applied when a school bus stop sign is deployed, and the accelerator being applied when it's folded, then the system will learn to do the same.

Take V12 responding to these road markings for example. We know that the V11 perception stack had not previously been programmed to read "Keep Clear" road markings. And the V11 planning/execution stack definitely was incapable of responding to them. And yet V12 has already learned the appropriate behavior upon seeing "Keep Clear" printed on the road in an intersection.

The high-level theory of V12 is that it will be capable of compressing the entirety of knowledge required to operate a car down into the weights of the network. Whether that's actually feasible given the fidelity of the cameras, the placement of the cameras, and the on-board processing power remains to be seen.

EDIT: I just wanted to add some additional thoughts about neural networks as compression algorithms. In the example above, V12 probably is not actually reading the words on the road. At a basic level of training, it's learned to correlate those rough shapes of the entire words with that behavior. But the demands of the limited number of parameters mean that it will move toward the most efficient way of storing that knowledge. And it might not be possible to store all possible different street-marking shapes, so it might eventually, and with no prompting by the human engineers, begin to learn how to read.
 
Even if Tesla is using the exact same perception architecture as V11, the critical change is that in V11, Tesla needed to collapse what is most likely a very high-dimensional perception output down into a human-understandable space (there is a car at X, Y) for the planning and execution code to act upon. They were essentially throwing away all the bits that they had not yet anticipated to be applicable to the hand-coded planning stack. I very much doubt that Tesla is still reducing the dimensionality of the perception module of the V12 stack in the context of an end-to-end network.

Without us having more detail about the nature of end-to-end (and we don't, and it's already been covered that the phrase can mean several, quite different, things) we can't be sure they're retaining more perception info than before. It's certainly possible they are, though; I don't disagree with that.

But they're not currently USING all that additionally-retained data if true. (not currently != can't ever of course)


The example you dismiss as magical is absolutely what V12 is theoretically capable of. Where before, the perception stack would be responsible for identifying a school bus, identifying the stop-sign off the side of it, and identifying when that sign is active, in V12 it no longer needs any of that abstraction. As long as there are enough examples in the training data of the brake being applied when a school bus stop sign is deployed, and the accelerator being applied when it's folded, then the system will learn to do the same.

Theoretically? Sure. I even said so.

Today? No. Which is why it continues to be silly to compare TODAYs system to Waymo. They're fundamentally different things with fundamentally different capabilities and scopes of use.

FSD today is not an L4 (or even L3) system that works anywhere that Tesla is just calling L2 for liability reasons

It remains inherently an L2 system that lacks required capabilities to operate higher.

It's POSSIBLE that V12 will eventually change that-- when it does then it's worth discussing vs Waymos system.

Today it instead leads to nonsense like trying to discuss what roads FSD can "drive itself" on when that # is still 0.




The high-level theory of V12 is that it will be capable of compressing the entirety of knowledge required to operate a car down into the weights of the network. Whether that's actually feasible given the fidelity of the cameras, the placement of the cameras, and the on-board processing power remains to be seen.


Again- no disagreement with any of that.

But the Waymo comparisons recently kept pretending all that is actual, not theoretical, and if Elon says "no longer beta" tomorrow then magically we have robotaxis on 4.2 million miles of road or whatever.... and that just ain't so.
 
Waymo can be driverless and still be worse at driving than Tesla FSD L2

It's all semantics


It's fundamentally not (moderator edit)

FSD L2 can not drive at all-- that is inherent to the definition of L2

It can only assist a human who is themselves driving. As Tesla themselves tells you, even in the most current version.

That is not semantics.
 
MAYBE those things eventually get added in V12- or maybe they never do- but they aren't there now (and sure aren't there in anything pre 12).
Tesla told regulators that FSD Beta would continue to be incapable of recognizing and responding to construction zones, emergency vehicles, adverse weather, etc., and that's why they are not subject to regulations for autonomous features. It seems like 12.x already has some of that capability, so at what point will regulators decide that FSD Beta now actually is capable? Could Tesla still argue that they did not explicitly design it to handle certain situations but happens to do the right thing in many cases?

Or perhaps something like the current "poor weather detected" message is good enough to keep it as a driver assist to avoid reporting requirements? Even 11.x happens to drive decently well when that message shows up, and presumably 12.x will do even better with more training.
 
When did I say anything about the SAE levels?

Literally in your last post. Bold added.

Waymo can be driverless and still be worse at driving than Tesla FSD L2

L2 is an SAE level.

It defines, in part, who and what is driving the car (spoiler: it's the human)

So is it you don't recall what's in your own posts, or you don't understand the words you yourself use?

Drive, as in who/what is driving, also has legal definitions that also are not semantics.





Tesla told regulators that FSD Beta would continue to be incapable of recognizing and responding to construction zones, emergency vehicles, adverse weather, etc., and that's why they are not subject to regulations for autonomous features. It seems like 12.x already has some of that capability, so at what point will regulators decide that FSD Beta now actually is capable? Could Tesla still argue that they did not explicitly design it to handle certain situations but happens to do the right thing in many cases?

Or perhaps something like the current "poor weather detected" message is good enough to keep it as a driver assist to avoid reporting requirements? Even 11.x happens to drive decently well when that message shows up, and presumably 12.x will do even better with more training.

An incomplete OEDR means L2, which is not subject to autonomous-vehicle regulation by definition- so adding "better but still incomplete" OEDR doesn't change anything.

A complete OEDR with limits on when it can operate (like weather) defines an ODD-- which does NOT restrict you to lower SAE levels-- Waymo is L4, which is autonomous driving within a specific ODD where a human is never required.

Again, FSD does not ever drive the car, it assists a human driver. This is not semantics, it's a fundamental difference in capability and design intent (and the actual law).


Forbes has a nice bit on OEDR and the levels here for anyone actually interested in understanding this instead of dismissing fundamentally important differences:

 
Tesla told regulators that FSD Beta would continue to be incapable of recognizing and responding to construction zones, emergency vehicles, adverse weather, etc., and that's why they are not subject to regulations for autonomous features. It seems like 12.x already has some of that capability, so at what point will regulators decide that FSD Beta now actually is capable? Could Tesla still argue that they did not explicitly design it to handle certain situations but happens to do the right thing in many cases?

Or perhaps something like the current "poor weather detected" message is good enough to keep it as a driver assist to avoid reporting requirements? Even 11.x happens to drive decently well when that message shows up, and presumably 12.x will do even better with more training.

Tesla will say or do whatever they can to avoid being more regulated (without lying). It's not surprising at all.
 
Literally in your last post. Bold added.

L2 is an SAE level.

It defines, in part, who and what is driving the car (spoiler: it's the human)

So is it you don't recall what's in your own posts, or you don't understand the words you yourself use?

Drive, as in who/what is driving, also has legal definitions that also are not semantics.
All right. Semantics. Let's get down to basics.

One poster said, "My car can drive around the block. A Waymo can't."

You said, or you're attempting to say, "The car didn't drive around the block. It's not even Level 2! You drove around the block!"

My dear departed mother, who had a serious brain on her, had this habit, when she was losing an argument (which, granted, didn't happen often) of changing the definitions of things in the middle of the argument so that Her Argument Would Win.

And whether she was consciously aware of what she was doing or was just being sneaky, once she got away with it, unrolling the argument to where she pulled this stunt was often impossible (what with her trying to impede the rollback and all), leaving other people infuriated and her smug. Living in that household, us five kids got really good at calling out this stunt when it would appear.

You, sir, have done that stunt. Yeah, legally, when one is sitting in a Tesla, no matter what's turned on or turned off, it's the person in the driver's seat who's the legally responsible party for making the car go wherever it's going to go. And black is white and white is black and we'll all get killed at the next zebra crossing.

Because when FSD-b is turned on and I'm hanging onto the steering wheel with my foot off the pedals, in a completely English sense of the word, I'm not driving. The car is. If the car does something stupid, which happens, then it's my job to take over the driving.

In responding to this rhetorical trick of yours I do feel like I've lost a few brain cells. I don't mind your debating things, but please don't do that? It just infuriates people.