FSD getting things right

Hi all -

I've been using FSD for the last 8 months. And while I experience a fair amount of screw-ups and disengagements, I've also noticed how comfortable I'm getting using FSD on every drive. It occurred to me that we (rightly) put a heavy focus on where FSD needs to improve. But I think this can make us lose sight of the progress it's made over time. I take for granted all the situations it now handles routinely. Things that created disengagements in the past. And capabilities that - for the most part - are unknown to the average driver (maybe even the average Tesla driver!).

So over the last week, I saved and labeled clips of all the things FSD is doing right. I don't have a GoPro setup in my car, so these are just dashcam clips. You'll have to take my word that FSD is engaged in each clip and that I didn't disengage at all during these videos.

Let me know what you think. Despite its imperfections, FSD now handles many everyday driving situations really well.

Video:

 
I generally agree with you. Many of us tend to focus mostly on what it does wrong and we tend to forget all the things it does do right.

I think FSDb suffers from a case of overpromising and underdelivering. If a certain someone could temper people's expectations, I think most people would be more impressed and amazed but if you promise the moon and fail to deliver, people will get upset.
 
Totally agree. Overpromising is annoying because FSD's capabilities that exist *now* are actually really impressive. And I don't think people can fully appreciate how much it can do until they get behind the wheel with it.

But if you haven't driven with it, most of what you see are examples of things it's doing wrong. Like the several regular disengagements on my daily drive (often due to mapping imo).

I think most non-Tesla drivers would be really surprised by how much it can already do.
 
Preface: the OP says FSD, but I'm going to assume they mean FSDb. My comments below directly concern FSDb, though they don't necessarily rule out the same possibilities for FSD.

While I agree that there is a lot the car does great, the problem is that with every update you have to go through a re-trusting phase, because you don't know what may have regressed or what new errors might pop up. For example, my normal daily commute used to be pretty uneventful on FSDb except for a couple of phantom-braking issues and lane-change-for-route errors... But I think it was with 11.3.6 that lane changes onto exit ramp lanes turned semi-violent; I believe the car actually touches the right lane line. In addition, it is now incorrectly trying to take HOV ramps, also semi-violently.

So just remember: with every update, it is a good idea to re-check your trust of FSDb on your normal commutes and in places where it previously behaved properly, because its behavior may have changed.
 
Yes of course. The video is labeled FSD Beta. I definitely experience the same on updates, having to be more cautious to understand potential new behaviors. Part of the beta-testing experience, I suppose. And I experience similar issues driving around northern VA: HOV lanes/ramps, toll booths, some highway exits at high speed, etc. On highway exits, I've found v11.4.7 to be an improvement over previous versions (but not perfect).

My point was that these known issues are discussed repeatedly here and elsewhere. They should be prioritized by Tesla and fixed.

But what we often *don't* discuss enough is the number of routine situations that FSDb now handles really well.
 
So over the last week, I saved and labeled clips of all the things FSD is doing right. ... Let me know what you think. Despite its imperfections, FSD now handles many everyday driving situations really well.
That's a great comp! Yeah, FSDb still isn't perfect but it does things that amaze me on each drive (with some cringey moments as well).

Here's how it handled a red light runner on a recent drive in my Model Y:

 
FSDb is both incredible and mildly terrible sometimes.

Occasionally I step back and think about how my car mostly drives me around. It's amazing! You really get used to it when you use it all the time. The real sign of progress for me is how often I want to use FSDb because it's easier (or better, or more relaxing, or feels safer) versus using it just to test how well it does.

And 99% of people have never experienced this and/or have no idea it exists! When you're on the road, do you ever wonder whether any of the drivers around you have any notion the car is driving itself?

I wonder how today will look in hindsight 5 years from now.
 
Eagerly awaiting the opportunity to use it one day in the UK. But I suspect the big issue is roundabouts.

We even have ‘magic roundabouts’ near me: multiple linked mini-roundabouts that make up a giant bi-directional roundabout (and not all humans tackle them very well).

But nice to see a positive post about FSD.
 
FSDb is both incredible and mildly terrible sometimes. ... When you're on the road, do you ever wonder whether any of the drivers around you have any notion the car is driving itself?
Feel the same way. I think about that regularly - most other drivers barely know what Autopilot does, much less what FSDb is capable of. Tesla certainly doesn't market it. And until it's out of beta, I don't see why they would.

Is FSDb offered to test during demo drives? Sometimes I want to offer it to friends just so they can try it for themselves.
 
I traded in a 2018 Model S 100D for a new Model S Long Range to take advantage of the FSD transfer deal (not to mention the price drop).

It took a few days to get a software update, but I finally have 11.4.4 in the new car.

It's fascinating to see the new car behaving exactly the way the old one did, in the same places. It does the same good things and the same crazy things. I know what it does well and what it does badly, and when to take over. It was interesting and reassuring to see it behaving deterministically.

One of the things I remind myself is that the car doesn't remember anything about the drives it takes. It sees every intersection and situation for the first time, every time. Which is probably how it should be -- behavior carved into firmware.

I also know the V11 branch isn't going to get much attention from here on, and I'm looking forward to v12.
 
One of the things I remind myself is that the car doesn't remember anything about the drives it takes. It sees every intersection and situation for the first time, every time. Which is probably how it should be -- behavior carved into firmware.

That's the catch with programming "human-like" behavior, isn't it? Without memory/recall, it can't learn from mistakes and avoid them on future attempts. I thought the FSDb feedback loop would eventually help solve these things, but for the most part it hasn't.

The good news is that it's getting better at handling routine driving "for the first time" when it encounters these scenarios. The frustrating part is that the same things continue to be a problem and have not been fixed.

Perhaps v12 will allow a more "human-like" feedback loop to learn/improve given the training it will have from human driving. Looking forward to it as well.
 
Without memory/recall, it can't learn from mistakes and avoid them on future attempts.
Without someone to explain why an outcome is a mistake and what to do about it, you'd end up with a mess. That's the job of the FSD engineers at Tesla - to decide what is a mistake and how to deal with it. The car can't do anything by itself because it's not sentient. As I understand self-directed learning, it involves lots of experimentation, and that doesn't sound very practical for a consumer autonomy system.

The loop you want is there in V11, but it involves interventions by drivers to generate data that goes to Tesla, which is then reviewed and considered. Once the engineers decide that a given behavior is a mistake, they get to work on the heuristics of the control system to stop it from making that mistake. The bottleneck there is the hand-crafting of those heuristics.

The loop should be tighter with V12. It'll still start out the same way, but instead of hand-crafting heuristics, they'll come up with training data that shows how to properly deal with the problem situation. They'll train the system on that data and, hopefully, the problem is fixed.
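
To make the contrast concrete, here's a toy sketch in Python. It's purely illustrative: every class, field, and threshold below is something I made up, not anything from Tesla's actual stack.

Code:
# Toy sketch of the two feedback loops described above.
# Purely illustrative: all names and thresholds are invented, not Tesla's code.
from dataclasses import dataclass

@dataclass
class Scene:
    gap_to_lead_m: float  # distance to the car ahead, meters
    speed_mps: float      # our speed, meters/second

# V11-style: a human-written rule, re-patched whenever engineers
# review intervention data and decide a behavior was a mistake.
def v11_planner(scene: Scene) -> str:
    if scene.gap_to_lead_m < 2.0 * scene.speed_mps:  # under a 2-second gap
        return "brake"
    return "cruise"

# V12-style: interventions become labeled examples instead of rule edits.
@dataclass
class Example:
    scene: Scene
    human_action: str  # what the driver actually did after taking over

def v12_policy(scene: Scene, training_set: list) -> str:
    # Stand-in for a trained network: imitate the most similar stored example.
    nearest = min(
        training_set,
        key=lambda ex: abs(ex.scene.gap_to_lead_m - scene.gap_to_lead_m)
        + abs(ex.scene.speed_mps - scene.speed_mps),
    )
    return nearest.human_action

# "Fixing" a mistake in V12 means adding curated clips, not editing rules:
training_set = [Example(Scene(40.0, 25.0), "cruise"), Example(Scene(15.0, 25.0), "brake")]
print(v12_policy(Scene(18.0, 24.0), training_set))  # -> "brake"

The real thing is a neural network, of course, but the point is the same: the fix lives in the data, not in the if-statements.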
 
Perhaps v12 will allow a more "human-like" feedback loop to learn/improve given the training it will have from human driving. Looking forward to it as well.
I'd be very surprised if they made the neural net malleable in the next decade or so. It would mean allowing the car to learn bad behaviors as well as good ones.

It would make for an interesting experiment, to have a learning car. Maybe we'll see some company offer one, with the appropriate liability releases. :)
 
Got a loaner with it. City driving is basically useless in semi traffic; I mean, it can drive and figure things out, but it takes its sweet time, slows down, brakes constantly, and takes up two lanes while it’s trying to figure out what in the actual eff it’s trying to do. Then when it figures it out and decides to go, it slams on the power and makes you take a turn at 20-25 mph lol.

It’s def cool, but so are a lot of faddy, unusable things. But does Tesla need to be applauded for writing basically hello world and some basic features 5 years ago?

I don’t foresee this getting astronomically better than its current state anytime soon, even after feeding it a bunch of data.

It seems the fault is at the logic level, because even with no data, all it has to do is make the split-second decision that the average driver makes every day, and act.

Or it could be the hardware chosen; maybe just using cameras means it can’t calculate the things it needs to make a proper decision quickly enough, idk.

Maybe the unit processing everything isn’t strong enough to run the whole car plus all the self-driving calculations quickly enough.

Another thing I noticed: with a pedestrian crossing to your right, the car stops as if the pedestrian were right in front of it, whereas a normal driver would pull up to the turn and wait, not stop basically two car lengths back.

And it constantly does weird things like this… it just seems confused and unsure of itself 90% of the time.
 
I'd be very surprised if they made the neural net malleable in the next decade or so. ... It would make for an interesting experiment, to have a learning car.
That seems like a disaster in the making: the neural net doesn’t care about what we deem morally good, and it gets rewarded for morally questionable actions that lead to deaths lol.
 
That's the catch with programming "human-like" behavior, isn't it? Without memory/recall, it can't learn from mistakes and avoid them on future attempts. I thought the FSDb feedback loop would eventually help solve these things, but for the most part it hasn't.

I think it has, personally. It's just a sloooooow cycle!

FSDb is a lot better than it was two years ago, thanks in large part to all the testing done and the telemetry Tesla gets back. It's easy to forget, but based on the YouTube videos (I watched a lot of them), FSDb was pretty terrible not very long ago.

Obviously we'd all like FSDb progress to be going faster, but personally I think Tesla's core philosophy and strategy for testing at scale with a large fleet of vehicles and collecting intervention & mistake telemetry back is the right one.

The system architecture we're all driving today in V11 also has no capability of "learning" from mistakes directly. The driving decisions are hand-coded by software engineers, so fixing an intervention probably means a team painstakingly diagnosing what's going on and writing new driving-decision code.

V12 "end to end" AI can in theory start to change that. Whenever an intervention happens Tesla could capture the telemetry + video snippets of what the driver actually did and feed that into the training set. That directly connects interventions -> training the car to do something different.

I bet in practice this is a lot more complicated than it sounds. You don't want to train on snippets of drivers doing unsafe things; snippets where the driver disengaged FSD may be poor training examples because the driver was correcting an already-bad situation (vs driving well throughout); maybe it's better to identify a general type of situation that works poorly and poll the whole fleet for human-driven snippets (these could come from non-FSD cars too!); etc.

And dealing with that sheer amount of data must be incredibly complicated. I guess the big challenges for the next few years may be around data curation, polling for useful training data efficiently, filtering out bad driving data, and all the technical challenges of operating at that scale (more datacenters!).
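
As a toy example of the curation problem, a first-pass filter over candidate clips might look something like this. Every field and threshold here is made up for illustration; it's just a sketch of the idea, not anyone's real pipeline.

Code:
# Toy sketch of clip curation; all fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Clip:
    followed_disengagement: bool  # did the driver just take over from FSD?
    max_decel_mps2: float         # harshest braking in the clip
    duration_s: float

def keep_for_training(clip: Clip) -> bool:
    # A driver recovering from an already-bad situation is a poor
    # example of "driving well throughout".
    if clip.followed_disengagement and clip.max_decel_mps2 > 4.0:
        return False
    # Very short clips carry too little context about the situation.
    if clip.duration_s < 5.0:
        return False
    return True

clips = [Clip(False, 1.5, 20.0), Clip(True, 6.0, 12.0), Clip(False, 2.0, 3.0)]
training_set = [c for c in clips if keep_for_training(c)]  # keeps only the first clip

Multiply that by millions of clips and you can see why the data pipeline itself becomes the hard engineering problem.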
 
 
AI training is just another Musk promise. Yes, it will help, but it will just create new problems, or it will still mess up something life-threatening.

I’ve used machine learning countless times; it works until it doesn’t, and then it learns some crazy thing all on its own.

If you have proper input hardware, processing hardware, and some logic programmed, it shouldn’t be rocket science, no pun intended…

How can a Tesla smash head-on into things? Doesn’t their forward collision braking work?

If they are using the correct input hardware, how can it possibly collide with anything?

I guess maybe a lot of the problems come from his opting to only use cameras instead of actual LiDAR, radar, and the various types of sensors proven over the decades.


If they can’t get the basics down… well…
 