Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview, we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
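Of these, the planner item is the most algorithmically concrete. As a rough illustration only (this is a generic textbook MCTS skeleton, not Tesla's implementation; the `expand` and `rollout` callables are placeholders for a maneuver generator and a learned value estimate):

```python
import math

class Node:
    """One node in a Monte Carlo search tree over candidate plans."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb1(node, c=1.4):
    # Upper Confidence Bound: trade off exploiting high-value branches
    # against exploring rarely visited ones.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root, expand, rollout, iterations=1000):
    for _ in range(iterations):
        # 1. Selection: descend via UCB1 until a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: add successor states (candidate maneuvers).
        if node.visits > 0:
            node.children = [Node(s, node) for s in expand(node.state)]
            if node.children:
                node = node.children[0]
        # 3. Simulation: cheaply estimate the branch's value.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Most-visited child is the chosen action.
    return max(root.children, key=lambda n: n.visits) if root.children else root
```

In the AI Day framing, the NN part would replace the hand-written `rollout` with a learned value function so far fewer branches need to be simulated.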

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma, an interview recorded after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post posing a few questions to Tesla about AI Day. The most useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
The definition of FSD is actually quite simple: hands off / eyes off the road. Point to point.

Maybe not all over the world, or interstate, or ... but it should at least work in large geographic areas.

Re: "Eyes-off / Hands-off"
I really like Mobileye's charts for making it clearer how they see it. They break that (eyes-off / hands-off) down into these three categories (salmon-colored rectangles):
  • Highway
  • Arterial & Rural
  • Urban
[Image: Mobileye CES 2023 Product Portfolio]

[Image: Mobileye CES 2023 Product-Oriented Taxonomy]
 
I found this "Dirty Tesla" FSD Beta video extra insightful, as it offers a perspective from outside the early adopters who want to push the limits of FSD Beta 11.


I wonder if many others with initial hesitance about FSD Beta will be similarly delighted:

To now where every time I get a chance to turn it on, it's this big relief.
I can relax a little bit. The car is keeping me in my lane. That alone is huge.
I can't wait till I get to this next road so that I can put beta on, and then I can relax again.
It makes me so much more observant. I can see so much more.
When I actually have to drive the car, it feels like so much more work.
I'm at a point where I would be really sad if I didn't have beta.
 
How did you escape FSDj? I’ve tried everything, to no avail.
I created a service appointment, per this post:

 
I created a service appointment, per this post:

Tried that twice; both times, two different SCs said they could not override FSDj.
 
New release notes for FSD Beta v11.4.8:

- Added option to activate Autopilot with a single stalk depression, instead of two, to help simplify activation and disengagement.

- Introduced a new efficient video module to the vehicle detection, semantics, velocity, and attributes networks that allowed for increased performance at lower latency. This was achieved by creating a multi-layered, hierarchical video module that caches intermediate computations to dramatically reduce the amount of compute that happens at any particular time.

- Improved distant crossing object detections by an additional 6%, and improved the precision of vehicle detection by refreshing old datasets with better autolabeling and introducing the new video module.

- Improved the precision of cut-in vehicle detection by 15%, with additional data and the changes to the video architecture that improve performance and latency.

- Reduced vehicle velocity error by 3%, and reduced vehicle acceleration error by 10%, by improving autolabeled datasets, introducing the new video module, and aligning model training and inference more closely.

- Reduced the latency of the vehicle semantics network by 15% with the new video module architecture, at no cost to performance.

- Reduced the error of pedestrian and bicycle rotation by over 8% by leveraging object kinematics more extensively when jointly optimizing pedestrian and bicycle tracks in autolabeled datasets.

- Improved geometric accuracy of Vision Park Assist predictions by 16%, by leveraging 10x more HW4 data, tripling resolution, and increasing overall stability of measurements.

- Improved path blockage lane change accuracy by 10% due to updates to static object detection networks.
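The "caches intermediate computations" bullet describes a standard incremental-inference pattern: keep per-frame embeddings in a rolling cache so the expensive backbone runs once per new frame, while the temporal fusion step reuses the cached embeddings of earlier frames. A toy sketch (all class and method names, shapes, and the averaging fusion are invented for illustration, not Tesla's architecture):

```python
from collections import deque
import numpy as np

class CachedVideoModule:
    """Toy sketch of a temporal module with a rolling feature cache.

    The heavy per-frame backbone runs exactly once per new frame; the
    temporal fusion step reuses the cached embeddings of earlier frames
    instead of recomputing them for the whole window.
    """

    def __init__(self, window=8):
        self.cache = deque(maxlen=window)  # cached per-frame embeddings
        self.backbone_calls = 0            # instrumentation for the sketch

    def backbone(self, frame):
        # Stand-in for an expensive per-frame network; here just a
        # per-channel mean over the image.
        self.backbone_calls += 1
        return frame.mean(axis=(0, 1))

    def __call__(self, frame):
        # Incremental step: embed only the newest frame...
        self.cache.append(self.backbone(frame))
        # ...then fuse the cached window (placeholder: simple average).
        return np.stack(list(self.cache)).mean(axis=0)
```

Without the cache, each inference over an 8-frame window would pay for 8 backbone passes; with it, only 1, which is the kind of compute reduction the note claims.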
 
New release notes for FSD Beta v11.4.8:

Does this mean park assist, summon, ASS are not available for cars with HW3?
 
New release notes for FSD Beta v11.4.8:

Equivalent to 2023.28.x?

 
- Added option to activate Autopilot with a single stalk depression, instead of two, to help simplify activation and disengagement.
This is so overdue. I can't count how many times I've tried to engage FSD but only engaged cruise control; sometimes we only realize plain AP is engaged when the car doesn't steer as expected.

In fact, what I'd like is a configuration option that disables CC.
 
New release notes for FSD Beta v11.4.8:

Net: worse than the prior version in real-world situations, if history can be used as a guide.
 
Statements like these are always confusing to me. I’m glad it’s better, but what does ‘6%’ mean?
I've interpreted them to mean an improvement in "recall" without making "precision" worse; that is, it will correctly identify 6% more distant object crossings. They don't define what a distant object crossing is, but I take it to mean figuring out whether a distant object is going to be relevant to planning.

This might be helpful:

Precision and recall - Wikipedia
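One way to make the "recall without hurting precision" reading concrete, using made-up numbers (the counts below are illustrative, not Tesla's):

```python
def precision_recall(tp, fp, fn):
    """Precision: of predicted detections, the fraction that were real.
    Recall: of real objects, the fraction that were detected."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical baseline: 1000 true distant crossing objects,
# 850 detected (recall 0.85), 50 false positives.
p_old, r_old = precision_recall(tp=850, fp=50, fn=150)

# A 6% *relative* recall improvement: 0.85 * 1.06 = 0.901,
# i.e. 901 of the 1000 objects now detected, false positives unchanged.
p_new, r_new = precision_recall(tp=901, fp=50, fn=99)
```

Note the ambiguity the release notes leave open: a 6% relative gain (0.85 to 0.901 here) is much smaller than a 6-percentage-point absolute gain (0.85 to 0.91 would be a ~7% relative gain).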
 
Statements like these are always confusing to me. I’m glad it’s better, but what does ‘6%’ mean?
That stood out to me as well. At single digits, they are likely getting close to max performance. My WAG is that FSD detects distant crossing objects with some probability, and that probability is expected to improve by 6%.

It would be nice to know how 'distant' is defined. For me, it's the poor kinematics of closer (3-5 seconds away) crossing objects that fail, resulting in needless excessive braking even in city driving.