
The Major Problem With FSD That Tesla Won't Acknowledge


TL;DR - Apple's team wanted the iPhone to be plastic because it would be dropped often and would break if made of glass. But plastic picks up micro-scratches over time. Steve Jobs said, "If it's plastic and gets scratched, it's our fault. If it's glass and breaks, it's their fault. They will accept that more."

Now imagine where “fault” can mean someone dies.

Elon talks about how FSD is x times safer than a human driver. Even if you take away all of the controversy surrounding what constitutes those miles, that fundamentally doesn't matter in the end. What matters is "will Tesla feel confident enough to take responsibility?" When you drive a car and get into a crash, whether you're distracted or drunk or something, that's your fault. If you die because of something wrong with a driverless car, that's the car's fault. It takes control and culpability away from the customer, which brings a whole different level of scrutiny.

For comparison, you have a higher statistical chance of dying while walking on a sidewalk than while flying on an airplane.

For years Tesla talked about vision as the core problem. Now it has openly admitted to moving decision making into neural nets. Eventually the talk will be about the march of 9s. But at the end of the road, the question is: "Is Tesla willing to take legal liability for a robotaxi?"

It's hard to imagine someone surrendering their car to a robotaxi network if a crash can occur every 50,000 miles, let alone every 50 or so miles it manages today without an intervention. Imagine if the owner had to take legal liability for a crash. I wouldn't sign my Model 3 up for that. So Tesla has to assume it.

The day Tesla has that discussion, I will know there's a serious timetable we can put on FSD. Until then, it's at least three years away, in perpetuity.

Counterargument: people die in Ubers every year. True, and I'm sure there are legal preparations for those scenarios, but those deaths are still attributable to a human being at direct fault who can be painted as the villain. With an AI machine owned by a polarizing CEO, the media will take a much different view.

Just throwing this out there because it's not mentioned enough here, but on any non-Tesla self-driving group it's one of the most important milestones.
 
I consider this absolutely ridiculous given that you are still in control of the vehicle and that it requires a driver at all times. As others have said, it's a moot point given that this is truly semi-autonomous driving, not autonomous driving. The people behind the Super Bowl commercial are a bunch of scumbags simply trying to damage Tesla; they bring nothing useful to the conversation because they are anti-self-driving in any capacity. The assumption that any Level 2 system will perform in such a manner that it will not require any driver intervention at all is utterly ridiculous.
Semi-autonomous? Did you buy Semi Self Driving?
I’m not a scumbag trying to damage Tesla.
I bought FSD knowing it wasn't ready and I'm using it as-is. That doesn't change the definition of "Full".
Tesla’s FSD marketing has been deliberately misleading.
 
Actually, what I realistically bought was a driving system; I didn't care whether or not it was full self-driving, nor was I naïve enough to think that it would be.

And I don't care whether it's "full self driving" in the AI autonomous sense…

All that mattered to me was that it drove better than the EyeSight system in the Subaru Outback I previously owned, and I had done my research to know what I was getting into.
 
Why call it self-driving and then double down with FULL?
Call it an assistant or something else.
It's easy to automate 80% of routine driving; the remaining 20% of exceptions and edge cases will take forever, since it takes time to learn and tweak.
Branding the product on a hypothetical future end state rather than what it is, and burying the disclaimers across pages and pages, is going to catch up with them at some point.
 
Has anyone figured out how FSD is going to operate on snow-covered roads?
I doubt it. Here's a quote from the attached YT video showing 11.3.1 challenged by light/moderate rain.

wreckinball11 1 day ago

It rains 9 months of the year in Washington state and I live in a rural area with no street lamps. FSD only allows me to use it 60% of the time. Elon said beta can see really well in the dark but no effort has been put into dark and rain in my 18 months of testing.

 
You don't apply the SAE levels subjectively based on how well the car would typically do if you hypothetically stopped paying attention. FSD requires drivers to pay attention; by definition, it's L2. There are no wishy-washy fractional levels like L3.4 or whatever.

FSD can be L3 tomorrow if Tesla decides on an ODD (operational design domain) where the driver can tune out. Whether they'll ever do that is unknowable. I personally consider it unlikely. They will keep FSD L2 while constantly improving its capabilities.
The conventional scale of autonomy can be applied relatively. A Tesla with FSDb may rank L3 tomorrow while falling short of L4; the next day, it may fail both. Again, it's relative. While we agree that the convention of autonomous driving is not a sliding scale, as a measure of actual performance and a way to gauge the program's overall success we can apply an estimated L3.4 or whatever your opinion is. (My opinion is that it is at L3.4.)

I also don't lend a lot of credence to that convention because it is flawed. It calls a vehicle with no dumb cruise control, etc., L0. Really? L0??? A car that has zero autonomy technology shouldn't rank on the conventional scale at all, so the scale should start at L1, not L0. Also, it should be a measure of an automobile's ability to navigate and drive anywhere fully autonomously versus the degree to which a human driver needs to intervene. For example, by convention L4 is considered fully autonomous (albeit within a defined limited zone). Well, even if Waymo can drive autonomously around parts of Phoenix without human intervention, that is not fully autonomous, because the vehicle cannot drive anywhere. Autonomy means two things: a self-driven vehicle that requires ZERO human intervention AND a vehicle that can self-navigate (and self-drive) anywhere.
 


FSD software needs to crash less than once every 200,000 miles driven to be safer than a human. To be 10x safer (Elon's goal), FSD needs to go about 2 million miles between accidents.
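A quick back-of-the-envelope check of those thresholds (the one-crash-per-200,000-miles human baseline is the figure cited above; the multipliers are just an illustrative sketch):

```python
# Back-of-the-envelope: how far FSD must go between crashes to beat a human
# baseline of one crash per ~200,000 miles (the figure cited above).
HUMAN_MILES_PER_CRASH = 200_000  # assumed human baseline from this thread

for safety_multiple in (1, 2, 10):
    required_miles = HUMAN_MILES_PER_CRASH * safety_multiple
    print(f"{safety_multiple:>2}x safer -> at most one crash per {required_miles:,} miles")

# Output:
#  1x safer -> at most one crash per 200,000 miles
#  2x safer -> at most one crash per 400,000 miles
# 10x safer -> at most one crash per 2,000,000 miles
```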
 
It probably does that already. Autopilot is at one crash every 4.8 million miles driven, and FSD is probably racking up quite a few. Of course, the sting is in asking how many crashes would take place if the system weren't being closely monitored and the human driver didn't intervene.

Really, to be worth anything, that statistic needs to be limited to intervention-free drives, and ideally disengagement-free drives, because turning FSD off before you enter a situation it can't handle is a cheat.
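As a rough sketch of why intervention-free miles matter: if even a small share of disengagements would otherwise have become crashes, the effective rate collapses. Every number below is an invented placeholder, not Tesla data:

```python
# Hypothetical adjustment of a headline crash rate for driver interventions.
# Every input here is a made-up placeholder, NOT a real Tesla figure.
total_miles = 4_800_000          # miles driven with the system engaged
reported_crashes = 1             # headline "one crash per 4.8M miles" style
disengagements = 50_000          # hypothetical takeovers over those miles
would_have_crashed = 0.01        # hypothetical: 1% of takeovers prevented a crash

effective_crashes = reported_crashes + disengagements * would_have_crashed
print(f"Headline rate:  one crash per {total_miles / reported_crashes:,.0f} miles")
print(f"Adjusted rate:  one crash per {total_miles / effective_crashes:,.0f} miles")
# With these placeholders the adjusted rate is roughly one crash per ~9,600 miles.
```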
 
Interesting point.

The wife unit says that all Tesla needs to do is change the name from FSD to something else, and the problem goes away. Especially given that Tesla is well beyond anyone out there.

The fact remains that it is semi-autonomous driving.
 
Yeah, a lot of this issue stems from the fact that Tesla sold something that they (along with everyone else) hadn't built yet, on the expectation that they could deliver it, and then it turned out to be way harder than expected. If they had been closer to a working product and had marketed it based on what it could actually do, I think there'd be a lot less grousing, since it is still the best example of the technology anyone has produced.
 
How do we know that 4.8-million-mile figure is comparing apples to apples?

We have many millions of miles of driving, of which only 400k are on FSD. You can't compare that small sample to the total. Also, how many of those FSD miles are in the same complex environments as the millions of human drivers? Tesla's basic Autopilot is only used on highways or major roads.

Tesla is generalizing its limited findings to the rest.

We need to know where and in what conditions Tesla vehicles drove, and compare that to manual drivers in the same locale and conditions. There could have been 1,000 accidents in snow or rain driven manually, while Tesla FSD/Autopilot may not be driving in those conditions at all, so it's an invalid comparison.

Tesla is throwing *sugar* on the table, and somehow the burden of proof is not on them but on others.
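To illustrate the mileage-mix problem, here is a small sketch in which the assisted fleet is assumed to be exactly as good as humans on every road type, yet still looks far safer overall simply because of where its miles accumulate. The per-road crash rates and mile splits are invented for illustration only:

```python
# Simpson's-paradox-style illustration of the mileage-mix problem.
# Crash rates per road type and the mile splits are invented, not real data.
crashes_per_mile = {"highway": 1 / 1_000_000, "city": 1 / 300_000}

def fleet_crash_rate(mile_share):
    """Overall crashes per mile for a fleet with the given mix of miles."""
    return sum(share * crashes_per_mile[road] for road, share in mile_share.items())

assisted_mix = {"highway": 0.9, "city": 0.1}  # system mostly engaged on highways
human_mix    = {"highway": 0.3, "city": 0.7}  # humans drive everywhere

for name, mix in (("assisted", assisted_mix), ("human", human_mix)):
    rate = fleet_crash_rate(mix)
    print(f"{name:>8}: one crash per {1 / rate:,.0f} miles")

# Even with identical skill on each road type, the assisted fleet shows roughly
# one crash per ~811,000 miles vs ~380,000 for the human mix, purely because of
# where the miles are driven.
```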
 

At some point there was a statistic specifically for FSD, and that figure could actually be worse than the overall average and still be on par with the general population, since FSD miles are specifically non-highway driving. Of course there are factors not being taken into account to fully analyze the situation, but you will never be able to control for all of them, and that does not mean the statistic is useless.

Anyway, I'm being stupid getting drawn into this conversation, as statistics is a very complex topic and even very knowledgeable and experienced people make mistakes, so there's no way to have an exhaustive discussion on this here... I'll try to refrain from further comments...
 
Is that 4.8-million-mile figure apples to apples? Autopilot is used primarily on highways, where accidents are significantly less common, not in suburbs or cities. The overall data is all miles driven, I believe, which includes urban areas. I'm not saying Teslas aren't safer; they may be. But the number Tesla publishes isn't directly comparable to broad IIHS or DOT data (see Fatality Facts 2020: Urban/rural comparison).
 
Exactly. While it can never be apples to apples unless it's controlled testing... there has to be some meaningful comparison...
 

There are several ways for Tesla to address the liability concern raised above:

  1. Statistically speaking, FSD will be much safer than human drivers. If your robotaxi is in an accident and is not at fault, then the other person's insurance pays. That one is easy.
  2. The accidents FSD could possibly get into when the car is at fault would likely cause very minimal harm to the occupants. The car is not going to get into an accident while driving over the speed limit, and the car has the safest crash-test rating. A crash would most likely happen at normal driving speeds, and the majority of crashes at normal speeds are easily survivable.
  3. If the data proves that the computer is much safer, and the accidents that do happen are mild and cause very little collateral damage, then any insurance company will insure the FSD computer. It doesn't have to be just Tesla.

If your car is used for robotaxi purposes and the rider is a paying customer, I'd imagine each occupant would be required to sign a waiver stating that they understand that, in the event of an accident where the vehicle is at fault, they are only entitled to up to X amount.

Keep in mind that if one occupant even takes a seatbelt off, the car would likely be programmed to slow down and pull over safely, or it won't even start the journey. How many of us, when a passenger isn't wearing a seatbelt, go to that extra length and pull over?
 
And then controlled testing is criticized because it doesn't represent reality lol

Nonetheless, we have had both (Tesla simulations + Tesla reports + NHTSA reports). Tesla leads on survivability rates and ranks pretty well on crashes per thousand miles, if my memory doesn't fail me.
(But before you annoy me with factual details, I didn't care to research again prior to writing this. So, in advance, “good for you!”)

… yet people criticize the driving behavior because it's a premium car. Oh well ¯\_(ツ)_/¯