10.8 FSD

$12,000 for FSD ($7,000 for me in 2017) is a lot of money. Considering how poorly Enhanced Autopilot (EAP) performed lane keeping at the time, few of us believed Level 3 was ever going to be possible.

The leap to FSD was aspirational; it was based on hope, not certainty. We knew the gamble.

The Tesla FSD dream also includes the unimaginable possibility that the hardware and software could be upgraded (some of it for free) after buying the car, for four years and counting... mind blowing.

I fail to understand how a class action lawsuit about a perceived promise of something we know is aspirational could possibly fly. If I wanted perfect performance out of all 58 features I checked, and nothing more, I would go buy a BMW.

Anything above Level 2 is a fantasy. FSD Beta lets me live in that fantasy. For many, that is worth $12,000.
 
"Reliability far in excess of human drivers"<-------- This is not L2. Not even close

Actually, it can totally be L2. The thing about these levels of automation is that an L2 system can outperform an L3,4,5 system in reliability and/or capability, but still be L2 because the manufacturer stipulates that the driver needs to pay attention and assume responsibility. SAE doesn't say WHY the driver needs to pay attention. It's not tied to capability or reliability.

Mercedes has an L3 system that only works on highways and under 37 mph. Basically traffic jams. I would hardly call this system capable, given its narrow scope. Tesla's been pretty flawless in highway traffic jams since forever and could likely go the same path as Mercedes and declare L3 in that limited scope. But for what purpose? Reminds me of the camera megapixel wars of the last decade. My L3 is bigger than your L2.

Waymo could become a highly reliable L4 system: full autonomy but geofenced to urban centers. But for a suburban driver, a less reliable but more capable (not geofenced) system might be more desirable. Another instance of L2 > L4 depending on use case.

The SAE levels also don't really quantify reliability, AFAIK. A "feature complete" system basically won't ever give up and stop, because it should know how to operate in any situation. This is capability. By SAE's own terms for L5: "this feature can drive the vehicle under all conditions." But that does not describe how WELL the car performs those capabilities. Take rotaries with FSD Beta. The car CAN drive through them. But today, without interventions, it would likely wreck quite a bit or curb the tires, etc. By my interpretation, this would still pass as L5. Just a very poorly-performing L5.

I think this is the big disconnect between what Musk says and what people think. Musk is talking about capability. Most consumers think about capability but also assume high reliability. Where a company draws the line between human and car is somewhat arbitrary on the reliability scale. Even the best systems will still get into accidents, even if they have "reliability far in excess of human drivers," so you can always slap a "human must pay attention" disclaimer on your system, limiting it to L2.
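To put a point on that, here's a minimal sketch (illustrative only; the dataclass fields, numbers, and classification rule are my own framing, not SAE J3016 text) of how the level follows the manufacturer's declared driver role rather than measured reliability:

```python
# Illustrative only: the SAE level follows the manufacturer's declared
# driver role, not measured reliability. Numbers here are invented.
from dataclasses import dataclass

@dataclass
class DrivingFeature:
    name: str
    crashes_per_million_miles: float  # hypothetical measured reliability
    driver_must_supervise: bool       # what the manufacturer stipulates

def sae_bucket(f: DrivingFeature) -> str:
    # J3016 assigns levels by design intent / role assignment: if the
    # manufacturer says the driver must supervise, it's L2 or below,
    # no matter how well the feature performs.
    if f.driver_must_supervise:
        return "L2 (driver supervises)"
    return "L3+ (system responsible within its ODD)"

supervised = DrivingFeature("broad-scope, supervised", 0.2, True)
unsupervised = DrivingFeature("narrow traffic-jam pilot", 0.5, False)

for f in (supervised, unsupervised):
    print(f"{f.name}: {sae_bucket(f)}")
# The supervised feature is *more* reliable on paper here, yet still L2.
```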

Since FSD still gets paralyzed doing turns and can't handle dead ends, 3-point turns, parking lots, driveways, etc., it is not yet feature complete and therefore cannot be L4/5. If Musk says L4/5 in a year, then that likely means feature complete (capability), and from there there's still a lot of work to improve reliability (the march of 9s).


[Image: SAE J3016 levels of driving automation infographic]
 
And were they provided? And free of charge? I haven't looked through the forum yet.

Edit: Did a little reading but didn't come across which FSD feature(s) one camera type supported vs. the other.




If what we were getting was L2, they could have stopped after the first sentence:

"The currently enabled features require active driver supervision and do not make the vehicle autonomous.: <------This is L2.

"Reliability far in excess of human drivers"<-------- This is not L2. Not even close

Sure it is.

Check out the actual real life stats on accident rates.

L2 autopilot has a much lower accident rate than just human drivers.

Tesla publishes these numbers quarterly.


Reliability has nothing to do with SAE level of course- as Novox already covered quite well.



If Tesla didn't intend to lower the SAE level of function they were promising during the sale of FSD, they wouldn't have literally changed the wording to lower the level of function being promised during the sale of FSD.

Again, this doesn't mean they don't still WANT to deliver L4+ to everyone; it just means they're legally obligated to refund a lot fewer people if it turns out they can't.
 
They didn't give a timeline, so when would they EVER be required to refund monies?


Because they did promise to eventually deliver a product.

Speaking as not-legal-advice: there's a reasonable person standard that would be applied, as in most civil cases like this, and "delay in excess of the useful life of the product" at a minimum would be a likely reasonable standard, after which they'd need to refund you if they've failed to deliver.

What the useful life of the product is would be for the jury to decide. 12.1 years is roughly the average age of used cars in the US, so one could argue for that. Average length of new car ownership is more like 7 years, so one could argue for that too.


At only a bit over 5 years since introduction (late Nov 2016), we're not there yet. But we might be in the long run.


At which point, what they'd legally owe pre- and post-3/19 buyers would differ substantially. If they deliver the current city streets functionality at L2, legally that's all post-3/19 buyers were ever actually promised during their purchase process.

(as I say, I fully believe Tesla intended to deliver MORE than they promised them, if they're able to actually get it working... but that's really a different line of discussion than if it turns out they can't).

Whereas pre-3/19 buyers would not have had their obligation from Tesla met at all by an L2 city streets system.
 
I bought when EAP and FSD were separate products, so for $3000 I added FSD to the car, not to get an autonomous car, but to push for the day when there will be one, putting my money where my mouth is. But getting the upgraded computer put in, and now the beta, has been the best.

Even the day I got my TM3, I thought, "If Tesla goes away today, I'll still have this great car that I can still charge at home." But it's been a straight line from there to here, and I am a very happy customer.
 
So am I... best car I've ever owned... though the beta has given me a deeper idea of how far away L4 really is (esp. with the current sensor suite and computer).

The fact that Tesla promised to upgrade HW for free for FSD owners as needed, to maintain the "all the hardware for FSD" promise, has been great, and it's one of the reasons I, like you, paid $3000 for it on top of EAP (they'd publicly stated this prior even to my purchase, back when the existence of HW2.5 became known at the launch of the Model 3).

My own assessment at the time (and my opinion hasn't really changed; if anything, the beta has reinforced it) is that the best they're gonna do without significant sensor upgrades (no, not lidar, but more and better cameras, in better locations and with more resistance to weather blindness) is L3 highway-only. I would personally be fine having only received that for my 3k.

None of that changes the fact that if they fail to deliver L4 by the end of the useful life of these vehicles, they'd likely have a legal obligation to owe $3000 refunds to such folks, since they'd failed to deliver what they sold them.

And none of it changes the fact that there's a reason they changed that promise to only explicitly promise an L2 version of city streets back in March 2019, thus drastically limiting the amount of future liability if they ever realize they can't reasonably deliver more on this platform.
 
Sure it is.

Check out the actual real life stats on accident rates.

L2 autopilot has a much lower accident rate than just human drivers.

Tesla publishes these numbers quarterly.


Reliability has nothing to do with SAE level of course- as Novox already covered quite well.



If Tesla didn't intend to lower the SAE level of function they were promising during the sale of FSD, they wouldn't have literally changed the wording to lower the level of function being promised during the sale of FSD.

Again, this doesn't mean they don't still WANT to deliver L4+ to everyone; it just means they're legally obligated to refund a lot fewer people if it turns out they can't.
You're grasping at straws here. Is autopilot's reliability far in excess of a human driver? Objectively not. Failing to stop for objects in the way, phantom braking, missing or taking the wrong exit, taking too long to change lanes, or changing lanes too aggressively, etc. are all issues that still exist and are not that difficult to replicate.

I don't use it when the freeway is crowded (except stop-n-go). Nor do I use the auto lane change feature on the route to work. I simply cannot trust it with the aggressive drivers we have down here, nor can I rely on it to never phantom brake. I'm sure there are other cities, roads, and freeways where it works quite well, but it should work well everywhere and in every typical situation if it wants to claim better-than-human reliability.

I've used it on road trips on roads with average to light traffic and have been happy with it. I've also used it in the city and have been impressed at times and let down at times. You'll see previous posts from me saying how great it is, but that wears off with time. I need it to be reliable. L3 or L4 can come when they come, if ever. Give me reliability.

And nowhere in that description did they mention freeway only or a specific circumstance. It also sits under "Autosteer on city streets". Are you saying that FSD in its current form is safer than a human on city streets?

The last time I got in an accident that was my fault was about 12 years ago (knock on wood). I'm not outright doubting their accident data, but I would need to scrutinize it. I'm sure autopilot could help prevent an accident in the case of a distracted, sleepy, drunk, or otherwise incapacitated driver, but it has proven that it's not always reliable even in these situations. And these are fairly low bars to meet.



Actually, it can totally be L2. The thing about these levels of automation is that an L2 system can outperform an L3,4,5 system in reliability and/or capability, but still be L2 because the manufacturer stipulates that the driver needs to pay attention and assume responsibility. SAE doesn't say WHY the driver needs to pay attention. It's not tied to capability or reliability.

The SAE levels are directly tied to capability and reliability. If a car is capable of reliably driving cross country (without a driver behind the wheel), except maybe stopping safely and letting the driver know that it can't continue (maybe it can't find another route or a way around an object), that system is demonstrating L4 automation at the very least. It meets the SAE definition regardless of what the manufacturer stipulates. Just like if a manufacturer claims their cars can do L4 (even geofenced) but in reality a driver is needed to intervene, then it's simply not L4, whether they claim it or not.

Autopilot / FSD, as it stands right now, is not yet capable of L3 or L4 automation, period, per the SAE definition. Even on the freeway. That's why a driver is needed.
 
None of that changes the fact that if they fail to deliver L4 by the end of the useful life of these vehicles, they'd likely have a legal obligation to owe $3000 refunds to such folks, since they'd failed to deliver what they sold them.

And none of it changes the fact that there's a reason they changed that promise to only explicitly promise an L2 version of city streets back in March 2019, thus drastically limiting the amount of future liability if they ever realize they can't reasonably deliver more on this platform.
As I wrote before, they can face a class action claiming deceptive and misleading advertising, with the plaintiffs having a good chance (IMO) of prevailing given the heap of evidence (the website and Elon's own words) that exists. Remember Robotaxis?

I doubt any of this will happen, as they've clearly demonstrated they're moving forward with updates and improvements and not just sitting around. I like their current pace, but I'm a little disappointed in the current robustness of the system.

Still, even with FSD and its faults, this car is a blast to drive and I'd buy another one tomorrow. Even with the EV market heating up, it would take a lot for me to jump ship.
 
Recent FSD fun. I have been wondering whether FSD recognizes and treats stopped school buses correctly. I finally got behind a school bus one morning. The bus stopped in the middle of a residential road with its lights flashing. FSD immediately cranked the wheel and started trying to pass the bus on the right!

As I was prepared for this scenario, I immediately hit the brake as soon as the car began accelerating, so no danger. Another FSD limitation to be prepared for.
 
He didn't say L4; he said the trend of improvement in intervention numbers shows FSD will likely be better than humans this year. L4 was the question Lex asked, but Elon answered in his own way.
Thanks for the correction. My bad for getting my information from someone who claimed he said "Level 4." That interview was long, but I really need to take the time to listen through the whole thing. I should know better than to blindly trust what people say on these forums. Anyway, I'll still be happy with Level 3.

On another more imminent claim: I sure hope that when they go to a single stack, Tesla doesn't screw up what we now have with FSD NOA on the highways. I am very pleased with how well that works now. I can go for hours in AP and only have to keep some tension on the steering wheel. Truly worth what I paid for the FSD upgrade to AP.
 
Most consumers think about capability but also assume high reliability.
Didn't Musk claim his idea of reliability is "the march of 9s"? I believe I did hear him define that as "better than 99,999 miles out of 100,000 miles of perfect FSD execution." Current FSD Beta can't go 5 miles on my test drive route without requiring me to take control 10-50 times. Some days it just can't do anything right, and other days the same route is near flawless. Yet I can do a perfect 5 miles down US1 with no turns, handling a dozen traffic lights, and it does the drive 100%.
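For scale, a rough back-of-the-envelope on those numbers (hedged: the 100,000-mile target is as quoted above; the 10-50 takeovers per 5 miles is just my anecdotal range):

```python
# Gap between the "march of 9s" target quoted above and an anecdotal
# intervention rate on a 5-mile test route. Illustrative arithmetic only.
target = 100_000        # miles per intervention implied by 99,999 / 100,000
good_day = 5 / 10       # 0.5 miles per intervention (10 takeovers in 5 mi)
bad_day = 5 / 50        # 0.1 miles per intervention (50 takeovers in 5 mi)

print(f"target   : {target:,} miles per intervention")
print(f"observed : {bad_day:.1f} to {good_day:.1f} miles per intervention")
print(f"shortfall: {target / good_day:,.0f}x to {target / bad_day:,.0f}x to go")
```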
 
Didn't Musk claim his idea of reliability is "the march of 9s"? I believe I did hear him define that as "better than 99,999 miles out of 100,000 miles of perfect FSD execution." Current FSD Beta can't go 5 miles on my test drive route without requiring me to take control 10-50 times. Some days it just can't do anything right, and other days the same route is near flawless. Yet I can do a perfect 5 miles down US1 with no turns, handling a dozen traffic lights, and it does the drive 100%.

Yep, I mentioned that in my post. We haven't hit feature complete yet, so logically, we haven't really started improving the long tail of edge cases yet.
 
The SAE levels are directly tied to capability and reliability. If a car is capable of reliably driving cross country (without a driver behind the wheel), except maybe stopping safely and letting the driver know that it can't continue (maybe it can't find another route or a way around an object), that system is demonstrating L4 automation at the very least. It meets the SAE definition regardless of what the manufacturer stipulates. Just like if a manufacturer claims their cars can do L4 (even geofenced) but in reality a driver is needed to intervene, then it's simply not L4, whether they claim it or not.

Autopilot / FSD, as it stands right now, is not yet capable of L3 or L4 automation, period, per the SAE definition. Even on the freeway. That's why a driver is needed.

Your assumption here is that a car that has the capability to drive will 100% never get into an accident, which is an impossibility. An L4/5 system might have a very low rate of accidents, way below the rate of humans, but if the manufacturer does not take responsibility for those accidents, they just need to declare that a human is responsible and must be ready to intervene. My point is that the SAE levels do not quantitatively define what is considered reliable enough; in other words, X number of accidents per Y autonomous miles driven. As with my Mercedes example, if their L3 cars get into accidents on the highway below 37 mph more frequently than classic AP, it is still L3, just not as competent as a competing L2 system that has a lower rate of accidents and can handle more situations.

Your declaration that FSD cannot be L3 or L4 now is correct but for the wrong reason. There will be a day when the number of interventions is small, but not zero. Is it then L3/4? Where do you draw the line when SAE didn't define this?

Unless there's a more explicit definition of the SAE levels somewhere, I don't see much utility in using them to judge an AV system's capability or reliability. As proof, there are countless other sources citing the SAE levels, and they each seem to have to apply some interpretation of what the levels actually mean. This forced the SAE to recently create the infographic I posted to try to remove confusion. But it's still very general and not quantitative.
 
You're grasping at straws here. Is autopilot's reliability far in excess of a human driver? Objectively not.


Objectively yes


Tesla said:
we recorded one crash for every 4.41 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.2 million miles driven

So using autopilot is almost 4 times safer than just having the human drive.
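Working that out from the figures quoted above:

```python
# Ratio implied by Tesla's quarterly numbers quoted above.
autopilot = 4.41e6  # miles per recorded crash, Autopilot engaged
manual = 1.2e6      # miles per recorded crash, no Autopilot

print(f"{autopilot / manual:.2f}x more miles per crash")  # ~3.68x
```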

You appear to be the straw guy in this example, I'm afraid.




The SAE levels are directly tied to capability and reliability.

No, they explicitly are not tied to reliability.

They are tied to design intent and capability. NOT reliability.

I think you need to go back and re-read the actual J3016 doc.

But if you're convinced there's some imaginary "X times more reliable than a human" thing in there- please quote it to us.


The "X better than a human" thing isn't in the SAE docs at all- it's entirely a Tesla term.

The capability measure that is in there is specifically what Tesla addressed in their CA DMV emails about city streets (FSDBeta), pointing out that the design intent was an L2 system, and that the system's OEDR lacked the capability for higher levels of driving.

The nearest SAE gets to "needs to be reliable" for L3 or higher is that they require the feature to be able to perform the dynamic driving task for a "sustained" period. Which they define as:

J3016 said:
3.26 SUSTAINED [OPERATION OF A VEHICLE]
Performance of part or all of the DDT both between and across external events, including responding to external events and continuing performance of part or all of the DDT in the absence of external events.
NOTE 1: External events are situations in the driving environment that necessitate a response by a driver or driving automation system (e.g., other vehicles, lane markings, traffic signs


So it needs to be able to react to something external that a human would need to react to, to qualify as "sustained."

That does not mean it needs to always and reliably respond to every situation perfectly every time.


They go on to give examples of systems that aren't "sustained": dumb cruise control, because it doesn't respond to outside things (so it's Level 0), and ABS, since it only does momentary intervention in a specific direction (lateral or longitudinal vehicle control, but not any part of the actual dynamic driving task).


What keeps FSDBeta from being higher than L2, again as Tesla themselves describe in the CA DMV stuff, is that it lacks the OEDR capability to do so.

Among the things required to do the DDT (and L3 or higher must be able to do the DDT at least SOME of the time):


SAE J3016 said:
Monitoring the driving environment via object and event detection, recognition, classification, and response preparation


Go re-read the CA DMV stuff where Tesla explains its OEDR is not capable of this, and that there's no INTENT to make it capable of this in the city streets code.


There'll be some future software that IS designed with this intent, but FSDBeta ain't it.




Autopilot / FSD, as it stands right now, is not yet capable of L3 or L4 automation, period, per the SAE definition. Even on the freeway. That's why a driver is needed.

Yes- and Tesla explicitly told us that in the CA DMV emails. The current system design and intent is L2.

They also tell us that explicitly in the description of FSD during purchase so I'm unsure what you're even arguing about here.


In contrast, the pre-March-2019 description of FSD during the purchase told us they were selling us a system that'd eventually be L4.


So again, they changed the description of the product from one that'd offer L4 eventually to one that promised no more than L2 back in 2019.


I believe they still WANT to deliver L4 to everyone, but thanks to that change they've greatly limited their legal liability if it turns out they can't.
 
I don't find the Autopilot accident stats any more compelling than GM's claim that Super Cruise had zero accidents in 5.7 million miles of use, and I'm sure people who work in the industry recognize that far more granularity is required to do a real assessment.

There are so many potential factors and influences that a single number with no context or detail seems all but meaningless, at least when we're talking about specific brands using specific systems. It makes more sense when you're looking at all miles driven across a country.
 
Objectively yes
So using autopilot is almost 4 times safer than just having the human drive.
People use autopilot more on the freeway than in the city, where more accidents tend to happen, so the data needs more scrutiny. And autopilot certainly is not more reliable in the city.
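To illustrate the confounder (entirely hypothetical numbers of my own, not Tesla data): if crash rates differ by road type but not by who's driving, a freeway-heavy Autopilot mileage mix alone can produce a lopsided aggregate comparison:

```python
# Hypothetical mileage-mix confounding. Per-road crash rates are the
# same for Autopilot and manual driving; only the road mix differs.
rates = {"freeway": 0.2, "city": 1.5}  # crashes per million miles (invented)

ap_mix = {"freeway": 0.95, "city": 0.05}      # Autopilot miles skew freeway
manual_mix = {"freeway": 0.40, "city": 0.60}  # manual miles skew city

def aggregate(mix):
    # Mileage-weighted average crash rate across road types.
    return sum(share * rates[road] for road, share in mix.items())

ap, manual = aggregate(ap_mix), aggregate(manual_mix)
print(f"Autopilot aggregate: {ap:.3f} crashes/M miles")     # 0.265
print(f"Manual aggregate   : {manual:.3f} crashes/M miles")  # 0.980
print(f"Apparent advantage : {manual / ap:.1f}x, despite identical per-road rates")
```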
So it needs to be able to react to something external that a human would need to react to, to qualify as "sustained."

That does not mean it needs to always and reliably respond to every situation perfectly every time.
No one said it has to be perfect every time, but it has to always respond in an L4 or L5 system. That's what I meant by reliable. In other words, the passenger(s) in an L4 or L5 system have to rely on the system to respond to every situation. If it can't, then it's not sustained and doesn't qualify, regardless of intent.


Yes- and Tesla explicitly told us that in the CA DMV emails. The current system design and intent is L2.

They also tell us that explicitly in the description of FSD during purchase so I'm unsure what you're even arguing about here
The intent in the description is "achieving reliability far in excess of human drivers", and Elon continually expounds on that by talking about L4, L5, and services like Robotaxis. Now, if you can show me a Robotaxi that's L2, I'd love to see it.
 
Your assumption here is that a car that has the capability to drive will 100% never get into an accident, which is an impossibility. An L4/5 system might have a very low rate of accidents, way below the rate of humans, but if the manufacturer does not take responsibility for those accidents, they just need to declare that a human is responsible and must be ready to intervene. My point is that the SAE levels do not quantitatively define what is considered reliable enough; in other words, X number of accidents per Y autonomous miles driven. As with my Mercedes example, if their L3 cars get into accidents on the highway below 37 mph more frequently than classic AP, it is still L3, just not as competent as a competing L2 system that has a lower rate of accidents and can handle more situations.

Your declaration that FSD cannot be L3 or L4 now is correct but for the wrong reason. There will be a day when the number of interventions is small, but not zero. Is it then L3/4? Where do you draw the line when SAE didn't define this?

Unless there's a more explicit definition of the SAE levels somewhere, I don't see much utility in using them to judge an AV system's capability or reliability. As proof, there are countless other sources citing the SAE levels, and they each seem to have to apply some interpretation of what the levels actually mean. This forced the SAE to recently create the infographic I posted to try to remove confusion. But it's still very general and not quantitative.
There's a misunderstanding here, and it's partly my fault. When I said 'reliable' with regards to the SAE levels, I did not mean 'perfect' or 'without incident'. What I meant was operating as it should, or as expected. In other words, a passenger should expect an L4 or L5 system to respond to a typical driving situation every time. It's reliably doing what it's supposed to: respond to external events as a driver does. And a driving situation could simply mean it can't continue, as in the case of L4, where I would expect it to stop safely.

Safety is somewhat of a different matter, although I believe a certain level of safety is understood and expected. So, IMO, if an L4 or 5 system is behaving reliably, it should be safer than if it's not. As for Tesla, they expect reliability 'far in excess of a human driver'. Here I believe they're conflating safety with reliability.
 
People use autopilot more on the freeway than in the city where more accidents tend to happen, so the data needs more scrutiny. And autopilot certainly is not more reliable in the city.

Autopilot is only intended to be used on the highway in the first place.

So within its operational domain, it makes the car safer than a human alone.


No one said it has to be perfect every time, but it has to always respond in an L4 or L5 system. That's what I meant by reliable. In other words, the passenger(s) in an L4 or L5 system have to rely on the system to respond to every situation. If it can't, then it's not sustained and doesn't qualify, regardless of intent.

Can you cite the part of the SAE doc where it says it ALWAYS has to respond? Because "always" and "sustained" seem pretty far apart by my reading of the doc, but perhaps I'm missing some text you can point out in J3016 for us?



The intent in the description is "achieving reliability far in excess of human drivers"

Which isn't at all an SAE definition or term

So bringing it up in relation to SAE levels again suggests you haven't read J3016 at all.


You appear to be confusing Tesla marketing terms with the actual definitions by the SAE.
 
Autopilot is only intended to be used on the highway in the first place.

So within its operational domain, it makes the car safer than a human alone.
Per Tesla's methodology section (Autopilot safety data site): "In practice, this correlates to nearly any crash at about 12 mph (20 kph) or above, depending on the crash forces generated."

So, it's pretty clear they're counting crashes that can occur almost anywhere based on that minimum speed, even in places people tend not to use autopilot.

Can you cite the part of the SAE doc where it says it ALWAYS has to respond? Because "always" and "sustained" seem pretty far apart by my reading of the doc, but perhaps I'm missing some text you can point out in J3016 for us
Per the SAE definition: "these automated driving features will not require you to take over driving", referring to L4 and L5. So operation of the vehicle by the automation system has to be sustained from start to end, and it cannot do that without responding to every typical driving situation. A response to a situation doesn't necessarily mean that it "solves" or "figures out" the situation. For example, if it comes upon a construction zone and doesn't see a way to get around it (as with Waymo), it can ask for help from an assistance team and then proceed on its own. Another example is hand gestures. If it doesn't understand a cop's gestures, for example, it can ask for help. However, it is still doing the driving.
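A minimal sketch of that fallback flow as described (the states and the remote-assistance step are my framing of the Waymo-style approach, not SAE wording):

```python
# Toy decision flow: an L4 system "responds" to every situation even
# when it can't solve one itself. Illustrative only.
def respond(understands_scene: bool, assistance_available: bool) -> str:
    if understands_scene:
        return "continue driving (normal DDT)"
    if assistance_available:
        # e.g., confusing construction zone or a cop's hand gestures:
        # a remote team advises, but the vehicle keeps doing the driving
        return "request remote guidance, then proceed"
    return "stop safely (minimal-risk condition)"

for case in [(True, True), (False, True), (False, False)]:
    print(case, "->", respond(*case))
```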

Which isn't at all an SAE definition or term

So bringing it up in relation to SAE levels again suggests you haven't read J3016 at all.


You appear to be confusing Tesla marketing terms with the actual definitions by the SAE.
If Elon, the man himself, thinks they'll solve Level 4 by the end of this year, what more clarity is needed?