I think a key motivation for the free trial isn't widely appreciated:
I understand what you're saying, but I find it implausible that the current prominent shortcomings of FSD are due to a scarcity of data. Once FSD stops making its current set of very common mistakes (like trying to go straight from a left-turn-only lane), and starts getting into the long tail of very unusual and weird situations, then I'll understand the need to gather 10x or 100x more data. But not yet. The danger of potential customers being alienated by FSD's current common and obvious mistakes far outweighs the benefit to Tesla of gathering a little bit more data for the long tail at this point.
 
There are people on this forum who regularly disengage FSD Beta because it has a different driving style than theirs. Even when they classify it as a safety disengagement, it's their subjective assessment of what counts as safety.
Agreed that there's a catch-22 involved; if FSD starts to make a potential mistake, it's not ethical to continue letting it make that mistake if there's any perceived safety risk, so we can't know for sure what it would have actually done. In 90% or 99% of these cases, FSD would probably correct itself before actually causing damage, but that doesn't mean it's not a safety disengagement. Agreed that e.g. "driving slower than I would" is not typically a safety risk, and shouldn't count as a safety disengagement.

I mentally separate disengagements into "elective" and "necessary" interventions. But even some of the latter are not safety risks per se, but involve things like breaking traffic laws; e.g. driving on the shoulder (thinking it's a lane), or violating a "No Right Turn on Red" sign, or trying to go straight from a left-turn lane, even with no other drivers around to be at actual risk. It's important that FSD gets these sorts of things right. The more it can get them right, the more predictably (to other drivers) it will behave, and that alone increases safety.
 
Once FSD stops making its current set of very common mistakes
Yeah it is weird to do the wide release (except to lessen the sting of the quarter I guess).
Picking the correct turn lane is next-level, though. It seems fine to release with that limitation.
I'm just mystified that they released FSD in this state:
1) Doesn't even attempt to go the speed desired by the user.
2) Won't respond in a normal way to red traffic lights.
3) Frequently can't handle stop signs in a remotely normal way.
4) Cuts it super close to curbs on many turns.
5) Simply stops in lanes of traffic in the middle of a maneuver.

It's WAY better than v10/v11 in most other ways though. (And a couple of these were limitations on v11 as well.)
If they could just fix 1 through 4 it would be incredibly awesome and I could totally understand a release.

I expect it will be years before these items are fixed. Then they can start the march of 9s.

They can do free months at any time; people have short memories. I doubt alienation will be much of an issue if people are only subjected to it for a month at a time. The vast majority of people will likely turn it on, say "WTF?" after about 2 minutes, and then turn it off and forget about it.
 
I expect it will be years before these items are fixed. Then they can start the march of 9s.
I actually disagree. I fully expect that v12.4 or v12.5 will make huge improvements on these basic issues, on a timeframe of months. That makes it all the more mysterious to me why they're doing this very wide free-trial release now instead of, say, summertime (when the weather would be better anyway, reducing another common cause of FSD failures/disengagements). The subsequent march of 9's necessary to reach L4 / Robotaxi will still take many years, of course.
 
I actually disagree. I fully expect that v12.4 or v12.5 will make huge improvements on these basic issues, on a timeframe of months.
We'll see. I would be pleasantly surprised if all of the first four on the list are actually solved in a year. I definitely hope I'm wrong. It seems like they should be able to address these things "easily," but we've been here before (admittedly, with lines of code - hopefully training makes the difference!).

Meanwhile, tons of disengagements, primarily for these things.

Though on red-light stops, I've learned that in the cases where it is actually responding, I can press the accelerator gently and carefully feather it to keep the car out of friction braking and keep things smooth. It still comes to a stop just fine, of course (it starts stopping early and too aggressively); the profile is such that my typical accelerator input works. Then there are the other cases where there is no evidence of a response and I have to disengage preventatively and coast to a stop.
 
We'll see. I would be pleasantly surprised if all of the first four on the list are actually solved in a year.
To be clear, I don't expect them to be _perfectly_ solved in months. Just improved to the point where the degree of failure doesn't seem so inexplicable, and is more in line with the rates of disengagement for other issues on which FSD generally "does well".
I definitely hope I'm wrong. It seems like they should be able to address these things "easily," but we've been here before (admittedly, with lines of code - hopefully training makes the difference!).
I think it can, and will. Some of Tesla's current approach with v12 feels like a very meta version of "prompt engineering", where the "prompt" is the training set, and fiddling with the training set can affect the ultimate behavior, though it's not always in an obvious or straightforward way. But as they gain experience with this, they will probably gain more insight and control over the process. Once they figure out how to strongly improve e.g. the stop-sign behavior, the insights they gain from that would presumably make it easier for them to solve e.g. red-light behavior and too-tight cornering as well. I expect that solving the much rarer long-tail failure cases may be a different beast altogether, due to such cases requiring far more general intelligence and reasoning.
 
I understand what you're saying, but I find it very difficult to believe that the current prominent shortcomings of FSD are due to a scarcity of data. Once FSD stops making its current set of very common mistakes (like trying to go straight from a left-turn-only lane), and starts getting into the long tail of very unusual and weird situations, then I'll understand the need to gather 10x or 100x more data. But not yet. The danger of potential customers being alienated by FSD's current common and obvious mistakes far outweighs the benefit to Tesla of gathering a little bit more data at this point.
Defending or debating this is admittedly not very easy from the outside. By that I mean two things, really: that we are outside of Tesla, and also, for most of us including myself, outside of the expertise of this rapidly developing field of applied AI technology.

We really don't know how much data and what kinds of data Tesla needs at this stage. It seems like they're past the initial phase of simply needing positive examples of good driving, in various environments and traffic scenarios. If that positive reinforcement method were all they had, then "unlearning" bad behavior would be primarily accomplished by carefully curating and removing any unintended bad examples in the training data, combined with massive quantities of additional positive-reinforcement example training data. In fact, that's kind of what Elon suggested when he and Ashok published that 2023 live stream video in which the car did fail at a turn-arrow traffic light.

In that prior context, it wasn't clear what role the disengagement data would play, i.e. how these negative training examples could be used to teach what not to do. I haven't been searching this much, so I'm probably lacking in my knowledge of popular explanatory links that may be available right now - but AFAIK the current rumors are that Tesla now wants disengagement data. So something has changed in that respect, since the live stream video demonstration and commentary.

If all they really wanted for further progress was a continuing huge source of human driving data, drawn from all over North America and beyond, well that's available already from the majority of Tesla cars that don't run FSD - and in that case the nearly-universal free trial would then be entirely counterproductive to progress! Unless of course, we come back to the theory that Elon actually thinks it's time to make big bucks from people so convinced of FSD greatness, that they will subscribe or purchase in huge numbers...

No one here believes that kind of take rate and financial boon is really likely for FSD (Supervised) in today's form. Nor is the theory of an attempted EOQ stock bump at all convincing; Elon just doesn't think that way, no matter how much people like to claim it. And even for those who bear ill feelings towards Elon, I think it's a very uncompelling argument that Elon is dumber and less informed about this than all the rest of us.

Taking all the reasoning above, the simplest explanation that makes sense to me is that
a) Tesla has figured out the ways to use negative reinforcement from disengagement data, and has decided this is now more important than an emphasis on yet more human manual driving data from the customer Fleet.
b) The most clever way to maximize the flow of disengagement data is to give FSD to anyone and everyone for at least a few weeks. It remains to be seen whether a large enough fraction of these previously non-FSD users will keep it active long enough and often enough to make this a successful strategy - and we probably won't be told one way or the other.

BTW even though I don't believe it's really the immediate main purpose of the trial program, I have no doubt that there will be a non-trivial uptick in the subscription take rate, because we do see some pretty positive commentary from people who never much liked it before.

There will definitely also be those who are disappointed and may be unlikely to try it again very soon, and unfortunately also those who get damaged wheels or worse because they're not ready to handle this arguably too-early release. We can count on the news media to let us know all about these, with as much exaggeration and sensationalism as possible.
 
I actually disagree. I fully expect that v12.4 or v12.5 will make huge improvements on these basic issues, on a timeframe of months. That makes it all the more mysterious to me why they're doing this very wide free-trial release now instead of, say, summertime (when the weather would be better anyway, reducing another common cause of FSD failures/disengagements). The subsequent march of 9's necessary to reach L4 / Robotaxi will still take many years, of course.
What on earth makes you think that? I've been hearing some version of "it's going to be awesome in 3-6 months" for several years now, and then it's not, and then it's all about the next big release or platform change (HW3! DOJO!!! Nothing but nets!!!).

Meanwhile, 12.3.3 can't distinguish between a 4-way and a 2-way stop, and the first chance it got, 200 ft from my house, it pulled out in front of a car with right-of-way coming from the right (I had a stop, they did not), stuttered in the middle of the left turn, and I had to punch it to not get hit.

That was my second drive on 12.3.3.

It needs to go several lifetimes with no mistakes, statistically. I have 600k (conservatively, maybe as much as 750k) lifetime miles since 1982, and have 1 at-fault fender bender: 3 mph in reverse in a parking lot (it would not even be counted as a crash by Tesla). It was only reported because my flatbed truck obliterated the bumper, grille, and headlight of a newer Camry in 1995.

The average human goes about 200k miles between accidents. If Tesla releases FSD as soon as it's only marginally better than the average human (51% vs. 49%), then it will be roughly 1/3 as good as me. I refuse to use something that could triple my accident rate.
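Spelling that arithmetic out with round numbers (both figures are rough estimates):

Average driver: ~1 accident per 200,000 miles.
Me: ~1 accident per 600,000 miles, i.e. about 3x better than average.
FSD released at "barely better than average": still roughly 1 accident per 200,000 miles.
600,000 / 200,000 = 3, so handing my driving over to barely-better-than-average FSD would roughly triple my expected accident rate.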

What the actual F--- are y'all smoking? This is going to seriously hurt or kill someone, and needs to be shut down.

I could go on about all the other places in the drive where it failed, but one failure is enough. (It took off aggressively from a blind corner, crossing a lane with no stop coming from the left, with no time to check for traffic; it failed to check for traffic at a blind merge on the right a block later; and it refused to attempt a UPL 3 blocks after that, disengaging with no warning while rolling slowly between 2 lanes, so I had to hit the brakes.)

It needs to effectively never fail or it's useless. What am I supposed to do if I think it's making a mistake? If I can't see a reason for it pulling off the road into the ditch, it could be one of two things:

1. It's saving my life from an imminent collision I can't see.
2. It's about to crash into the ditch and kill me.

What do I do?

So this was a fun test and now I'm done.

Be careful, people; don't get complacent.
 
This is going to seriously hurt or kill someone, and needs to be shut down.

It needs to effectively never fail or it's useless.

Admittedly I am far from sure what the situation is, and it's been argued over and over again that we have no data (we have no data), but:

If releasing this Supervised version of FSD lowers accident rates for those using it (in a true side-by-side comparison of using vs. not - which we have no data for, and particularly for this brand new year-old v12, even Tesla does not have that data), then it could well be worth it.

I do have concerns that with the wider release someone will get hurt. But as a committed concern-monger, I can say that I have been wrong every time to date on this point. There have been accidents caused by FSD, but they've luckily not led to someone running straight into a tree (was close though!).

It's even possible that it has lowered accident rates. We just have no idea, since no data with the relevant comparison has ever been published by Tesla on this, on a quarterly basis or otherwise, ever.

But just as a thought experiment, it can fail frequently as long as it is Supervised, and it may improve safety.

Or it may make things worse the better it gets.

What we can be sure of is there won't be any shut down until someone is seriously hurt or killed, and probably not even then (though it depends on what happens).
 
Well, that's the right attitude, and that's exactly why Elon wanted to push it wide using the free trial program. Far too many people on TMC have missed this point, thinking that it's all about a revenue pump and that Elon expects the take rate to skyrocket overnight. This timing is all about getting disengagement data and real-world video plus telemetry.
My guess is it's probably a mix of all of these things.
 
... Some of Tesla's current approach with v12 feels like a very meta version of "prompt engineering", where the "prompt" is the training set, and fiddling with the training set can affect the ultimate behavior, though it's not always in an obvious or straightforward way. But as they gain experience with this, they will probably gain more insight and control over the process.
I'm not disagreeing with your overall point or conclusions, but as a funny aside, I'd argue that technically the part that's analogous to the "prompt" for a language model would be the situation you're driving in, so prompt engineering would be analogous to driving on better roads that FSD can more easily understand. Which is technically one valid solution, I suppose: just rearchitect all our roads into a layout that FSD understands 😜
 
I do have concerns that with the wider release someone will get hurt. But as a committed concern-monger, I can say that I have been wrong every time to date on this point. There have been accidents caused by FSD, but they've luckily not led to someone running straight into a tree (was close though!).

It's even possible that it has lowered accident rates. We just have no idea, since no data with the relevant comparison has ever been published by Tesla on this, on a quarterly basis or otherwise, ever.
The basic problem, of course, is that FSD IS supervised. Even if we had data from Tesla, what can we say? That the combination of FSD and a driver has such-and-such an accident rate. But how much of that is the human disengaging or intervening? Or drivers simply cherry-picking which drives to do "by hand" since they have enough experience to know FSD won't handle it well (I know I've done that in the past). Even Tesla cannot see THAT data. It's the usual problem of a biased sample set.
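A purely illustrative example of how that bias works (made-up numbers, and assuming FSD adds no safety benefit at all): suppose a driver averages 1 crash per 100,000 city miles and 1 crash per 500,000 highway miles, and only turns FSD on for highway driving. The miles logged "on FSD" then show about 0.2 crashes per 100k miles, while the miles "off FSD" show about 1.0 per 100k miles, so a naive comparison makes FSD look roughly 5x safer even though, by assumption, it changed nothing. The entire difference comes from which drives were selected for it.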

Pretty much the only way you could really find out would be to totally let the car take over and NEVER intervene and see how the car does, even to the point of letting it crash. Which of course raises huge ethical, legal and financial problems.

For my part, I tend to think that, humans being what they are, the car is probably doing pretty well. There are a LOT of beta testers out there now (and even more since the recent trial rollout), and regardless of all the warnings Tesla gives them, a significant percentage are going to just stare into the middle distance while "driving" and expect FSD not to mess up. Yet there really haven't been many major FSD incidents that I'm aware of, and I don't buy that this is because every tester is being super-vigilant all the time. People are just too complacent and lazy, especially now that the car is good enough to let that happen.

So the next few months are going to be interesting. I think testers/users of FSD are going to let their guard down somewhat, and we will either see a sudden spike in FSD related accidents (verifying the worst fears of the pessimists/naysayers) or nothing much will happen, which tends to suggest that perhaps, yes, the car is getting kinda good at driving.
 
Even if we had data from Tesla, what can we say? That the combination of FSD and a driver has such-and-such an accident rate. But how much of that is the human disengaging or intervening? Or drivers simply cherry-picking which drives to do "by hand" since they have enough experience to know FSD won't handle it well (I know I've done that in the past). Even Tesla cannot see THAT data. It's the usual problem of a biased sample set.
That is the issue, but with the information at their disposal, I think Tesla could attempt to compare these situations. It's tricky, but they could at least try - or provide the raw data to a willing party to look at.

There are so many people driving with and without FSD that Tesla can do way better than their silly breakdown to date. Not making an attempt is a short-sighted cop-out.
 
So, I had a few interesting experiences with 12.3.3.

I was on a 4-lane road approaching a group of cars stopped at a red light, in the right lane approximately 4-5 cars back from the intersection. A woman had stopped short in the left lane to allow someone to cross blind in front of her/me into a parking lot. The cameras saw the gap and the car slowed down… then it saw her and stopped to allow her to cross, and then continued on its way once she cleared the street. Definitely dangerous on their part, but I was impressed with FSD.

The second thing I thought was noteworthy and unexpected was passing a car on the right, on a single-lane highway.

I was following in a train of 3-4 cars, just letting FSD do its thing, when the car at the front of the line stopped to make a left turn. It's not exactly legal, but people often pass on the shoulder on this stretch of highway, and this time was no different… with zero hesitation, my Model 3 followed the 2 cars in front of me that passed said car on the shoulder and got right back into the appropriate lane.

I was actually quite shocked it would allow itself to cross the white line like that, but I assumed it was a "when in Rome" type of decision, because I can't see the car doing that again if it hadn't observed another vehicle doing it first. Very cool stuff either way.
 
At least it still works great on the freeway. I would guess that Tesla will now prioritize “fixing” the freeway problem.
I bet you (we) have at least a couple extra months now of v11 on the highway, since it is likely v12 progress has been slowed in favor of implementing the "toilet Tweet" promise to offer an FSD trial to all "next week". They're still not even close to meeting this, since they still need to integrate 24.8.x, leaving probably 30% still to get it.
 
Coming out of a small side street today, I had to disengage since FSD continues to ignore the No Left Turn sign. Has anyone had FSD respond correctly to this type of sign?

[Attached photo of the sign]
Sign reading is very limited - mostly just standard stop signs and speed limit signs. No Left Turn (like No Right on Red) is based on map data, which is often inaccurate.
 
no, just all the cool people
JulienW, would you come inside and COOL off so you can get 12.3.3?