Redundancy is a part of how Elon's companies quickly iterate. For example, while SpaceX was launching Starship 11, Starship 12 and 13 were already built and in the high bay.

It takes long enough to build a Starship that waiting for 11 to launch before building 12 and 13 would have been wasted time. I'm sure it's the same with FSD.
This makes a lot of sense when you learn a significant amount from each iteration.


  • Low visibility requiring significant creep so the B-pillar can see
  • Fast cross traffic
  • Multiple lane traffic going in both directions
  • Limited median space to hang out, requiring careful stoppage to avoid having your rear end hit
  • Requires fast acceleration
Item 4 isn't needed, and it's part of the reason for failure; it makes things very complicated. I crossed off item 5 since it's not a challenge.

I agree (and I made this point originally) that the turn provides the opportunity for development of key required capabilities needed in many parts of driving (see elsewhere). But assuming those are in place (which they would need to be for basic function elsewhere!), it should be pretty easy.

Would you please share with us what you consider a complex task?
Predictive tasks:
Anticipate when a driver is going to change lanes without signaling, before they even start to do so.
Understand what other drivers on the road want to do, and make room for them or avoid them as needed.
Anticipate a light is going to change based on traffic patterns and general knowledge of timing, and slow down/continue in advance.
Analyzing traffic ahead, stopped at lights, etc., and selecting the right lane for minimal time of travel by getting past as much stopped traffic as possible. Avoiding any stopped lanes well in advance to avoid last minute lane changes or lane changes from a stopped position.
Analyzing and predicting whether a last minute lane change can be conducted safely with plenty of margin without hindering traffic flow, to avoid having to sit in a long line of turning traffic, or whether sitting in traffic is required.
Etc.
Perception tasks:
Monitoring wheel position of turning vehicles and change in wheel position and responding accordingly.
Monitoring other subtle signs for prediction of future behavior (tied to prediction).
Monitor underneath parked cars for feet or animals.
Etc.
Defensive driving behaviors not above:
Keeping buffer zones as much as possible.
Etc.
Stopping smoothly and consistently, apparently? That must be very complex, since it has to wait for "fit and finish" to be completed.

Whatever issues they are fixing need real-world verification that only a public release can give. I'm sure a lot of things work well in simulation but not in the real world.
Then they should definitely fix the simulation! It's not much good if it doesn't work.

the team agrees that FSD can't perform basic maneuvers.
I bet their disengagement stats show otherwise. The Pareto must be nuts.
 
"Simplest task". Really?
  • Low visibility requiring significant creep so the B-pillar can see
  • Fast cross traffic
  • Multiple lane traffic going in both directions
  • Limited median space to hang out, requiring careful stoppage to avoid having your rear end hit
  • Requires fast acceleration
There is a reason some locals intentionally ignore this UPL.

Would you please share with us what you consider a complex task? :)
It’s just a physics problem. With good perception you could probably do it 99% of the time with a simple planner that assumes that all vehicles continue traveling with their current speed and trajectory. There are no pedestrians, the cameras can see everything, and you don’t have a problem of other vehicles obscuring your view.
I think this is much harder to get to 99%:
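To make the "simple planner" idea above concrete, here's a toy gap-acceptance check for an unprotected left that assumes every approaching vehicle holds its current speed and trajectory. All the numbers and function names are illustrative assumptions, not anything from Tesla's stack:

```python
# Toy gap-acceptance check for an unprotected left, assuming every
# approaching vehicle continues at its current speed on a straight
# trajectory (the "simple planner" described above). All constants
# are illustrative, not Tesla's.

CROSSING_TIME_S = 4.0   # assumed time our car needs to clear the near lanes
SAFETY_MARGIN_S = 2.0   # extra buffer on top of the crossing time

def time_to_conflict(distance_m: float, speed_mps: float) -> float:
    """Seconds until an approaching car reaches the conflict zone,
    assuming constant speed and a straight path."""
    if speed_mps <= 0:
        return float("inf")  # stopped or receding: never conflicts
    return distance_m / speed_mps

def gap_is_acceptable(approaching: list[tuple[float, float]]) -> bool:
    """approaching = [(distance_m, speed_mps), ...] for each cross car."""
    needed = CROSSING_TIME_S + SAFETY_MARGIN_S
    return all(time_to_conflict(d, v) > needed for d, v in approaching)

# One car 200 m out at 25 m/s is 8 s away: go.
print(gap_is_acceptable([(200.0, 25.0)]))              # True (8 s > 6 s)
# Add a car 40 m out at 20 m/s (2 s away): wait.
print(gap_is_acceptable([(200.0, 25.0), (40.0, 20.0)]))  # False
```

Of course, the constant-velocity assumption is exactly what breaks down with cars that change lanes or accelerate, which is the objection raised below.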
 
I could see this if you had a stable product and were just tweaking it or adding new features. Just doesn't seem to match the inchoate nature of FSD at the moment.
Because when a release hits the field, All Sorts Of Things, Both Good And Bad, Happen. The feedback from those events drives, if not the next release, then certainly the one after that.

As I'm sure you'd be the first to point out, FSDS isn't fully baked yet. Testing and feedback are the equivalent of putting another minute on the timer for a baked good that's never been made before, where nobody knows how long it's going to take. And we're all a bunch of toothpicks being shoved in to see if the batter on the bottom is cooked yet or not.

Wow.. an analogy that doesn't involve cars, directly. 😁
 
It’s just a physics problem. With good perception you could probably do it 99% of the time with a simple planner that assumes that all vehicles continue traveling with their current speed and trajectory. There are no pedestrians, the cameras can see everything, and you don’t have a problem of other vehicles obscuring your view.
I think this is much harder to get to 99%:
"Assumes that all vehicles continue traveling with their current speed." That's a big assumption, since we know cars change lanes and speeds. Not to mention they are sometimes going at high speeds, where the room for error is smaller.

Comparatively speaking this UPL is certainly not a simple intersection. Another reason why some locals avoid it altogether. Physics indeed!

For those who feel this is a simple intersection: why hasn't Tesla solved it?
 
Chuck’s UPL.

some locals intentionally ignore this UPL.
OMG
There is no word that starts with P in here - Unprotected Left Turn
For a bunch of folks who can spend 50% of a thread discussing what "full" means or what "driving" means, or the semantics of where to stop at a stop sign, we are safely able to ignore poor old "Unprotected" being bifurcated into "Un Protected" for the sake of an appalling TLA 🤣🤣

OK - I'll let myself out now, thank you. Let the abuse of "unprotected" continue.
 
"Assumes that all vehicles continue traveling with their current speed." That's a big assumption, since we know cars change lanes and speeds. Not to mention they are sometimes going at high speeds, where the room for error is smaller.

Comparatively speaking this UPL is certainly not a simple intersection. Another reason why some locals avoid it altogether. Physics indeed!
I’m not talking about Robotaxis, where 99% would be horrendous. Locals are trying to get way higher than 99% success. I bet they could easily do the turn 99 times in a row.

It’s almost the simplest possible left turn from a side street on to a highway.
I’ve never seen it have pedestrians.
It doesn’t have opposing traffic.
The road is straight.
There’s a median.
 
12.3.4, 12.3.5, and 12.3.6 still have the same bug for me: the car changes lanes to the right to enter the freeway south instead of going straight to enter the freeway north. Previous versions didn't have this problem.

12.3.6 seems to fix the problem of not slowing down for speed bumps.
 
If you want to work while traveling, upgrade to a Model S. It is larger and more comfortable for laptop-type work. I do it all the time and it is great.

I don’t agree with the statement that was made about FSD being “incompetent co-driver to keep in check”

Many of us use FSD on city streets every day quite successfully.

Just think about it: via AI and the expansion of the neural net, using what will be billions of visual examples of what a competent driver looks like, FSD will make driving even safer.

I am hopeful we can get to that goal as it will be good for all of us.
Robotaxi is going to have a hard time competing with Uber for speed. I had to use Uber today, and both drivers were rolling through stops at 15, though the second was really good about driving exactly the limit around schools.

FSD plus regen in general have “trained” me to slow down more at stops which isn’t a bad thing, though there’s still no way I’m locking up the wheels if it’s not necessary.
 
OMG
There is no word that starts with P in here - Unprotected Left Turn
For a bunch of folks who can spend 50% of a thread discussing what "full" means or what "driving" means, or the semantics of where to stop at a stop sign, we are safely able to ignore poor old "Unprotected" being bifurcated into "Un Protected" for the sake of an appalling TLA 🤣🤣

OK - I'll let myself out now, thank you. Let the abuse of "unprotected" continue.
Hah, it's ULT.
 
Whether it is 12.3.4 or 12.3.6, the car still moves to the extreme left lane when I have to exit in less than a mile, and this was during rush hour in my area. One day it was the 495 beltway in NoVa; the next it was 95S near the Baltimore Washington Pkwy. There was no way it could take the exit with all the traffic.

This was new with 12.3.6: "Tollbooth detected". There is no tollbooth on that road, Greenbelt Road (193 West) going towards Baltimore Washington Pkwy. What is the car seeing?

8758 MD-193
 
I bet their disengagement stats show otherwise. The Pareto must be nuts.
I think people who can't stand the basic functions, like stopping at stop signs, are not going to use FSD and will disengage at every stop light or sign.

More generally, the bigger issue for Tesla is to figure out why so many drop out. Maybe analyze all the disengagements before someone drops out.
 
I think people who can't stand the basic functions, like stopping at stop signs, are not going to use FSD and will disengage at every stop light or sign.

More generally, the bigger issue for Tesla is to figure out why so many drop out. Maybe analyze all the disengagements before someone drops out.

It’s not just the slow stop-sign routine for me. I disengage a lot because it waits too long to start stopping and then has to use the brakes. I prefer a super smooth and efficient stop with regen only. Calculating a perfectly timed regen-only stop seems like something a computer would excel at, but apparently not. Or, in the case of end-to-end AI, it’s probably just mimicking the cruddy stopping behavior of most humans: accelerate as far as possible, then slam on the brakes at the last moment.
 
I consider everything running on a computer a program. A program generated by a few programmers!

You know why FSD hesitates more at empty intersections? I think it's because the "program" has to evaluate more pixels to make sure there is no moving object!
You're not wrong. Every pixel of every frame has to be evaluated through a monolithic neural network, so the reaction time (or lag time) is pretty much constant regardless of what it's reacting to or evaluating. This is somewhat different from the human body, which has extra-fast reflexes for particular situations such as touching a hot stove. (The spinal cord has its own ultra-fast "neural network" that overrides the brain for such things.)

This lag is an unavoidable downside of using pure vision, versus radar or lidar. Vision, even if it can reconstruct a 3D scene perfectly, has an unavoidable and significant lag time due to the sheer amount of signal processing required to do the reconstruction. Lidar bypasses most of this processing by obtaining the 3D map directly, so it can enable near-instant reflexes, a huge advantage in certain situations. (Such as a kid jumping out from behind a car, or a driver running a red light and about to T-bone you.)

That said, with a completely end-to-end v12-style neural network (vision in and control out), there is not necessarily any single point in the network (or "program") that represents the concept of "moving object". All of the logic is scrambled together and interrelated, from start to finish. This is responsible for the incredible power of deep neural networks, but also for its inscrutability. When the network does make a mistake, there's often no way to precisely figure out why, or how to fix it. The entire AI industry is grappling with this double-edged sword, and it's why problems that require near-100% accuracy (such as L4 FSD) are so difficult to solve with neural networks. It's also why I expect city-streets vision-only FSD to remain firmly L2 for at least several more years, and probably end up requiring additional direct sensors (radar/lidar) to solve optimally.
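The "constant reaction time" point is easy to illustrate: a fixed feed-forward network performs the same multiply-adds on every frame, whether the scene is empty or busy. A toy sketch (made-up shapes and weights, nothing like the real network):

```python
# Toy illustration: a fixed feed-forward network does the same amount of
# work on every frame, empty intersection or busy one. Shapes and weights
# are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 16))   # "perception" layer weights
W2 = rng.standard_normal((16, 2))    # "control" layer weights

def forward(frame: np.ndarray) -> np.ndarray:
    """One fixed-cost pass: every pixel flows through every weight."""
    hidden = np.maximum(frame @ W1, 0.0)  # ReLU activation
    return hidden @ W2                    # e.g. steer / accelerate outputs

empty_scene = np.zeros(64)                # nothing moving
busy_scene = rng.standard_normal(64)      # lots happening

# Both frames trigger exactly the same multiply-adds, so the compute
# cost (and hence the lag) is independent of scene content.
flops = 64 * 16 + 16 * 2
print(forward(empty_scene).shape, forward(busy_scene).shape, flops)
```

This is why the lag is flat regardless of input, unlike a lidar return that hands you range directly with far less processing.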
 
But why not just skip v12.4 and hold off on public release of 12.5 until it is ready?

Just seems pointless to take the risk of releasing something which may have dangerous regressions when you know you have an even better version a month or two further down the pipe. What's the benefit?
Because 12.3 is substantially worse than 12.4, so it would be irresponsible of Tesla to withhold 12.4 while they're still setting up 12.5. (And by the time they're approaching 12.5, you'd be able to make the same argument about waiting for 12.6.) The perfect should not be the enemy of the good, and in particular, the perfect shouldn't be the enemy of the better-than-what-we-have-now!

They will test v12.4 enough to shake out any truly dangerous regressions, and despite the occasional minor regression, each new version has been a distinct overall improvement in my experience. 12.5 will go through the same process, and will be at the same risk of regression as 12.4, so again it's not a reason to skip 12.4 in favor of 12.5.
 
It’s not just the slow stop-sign routine for me. I disengage a lot because it waits too long to start stopping and then has to use the brakes. I prefer a super smooth and efficient stop with regen only. Calculating a perfectly timed regen-only stop seems like something a computer would excel at, but apparently not. Or, in the case of end-to-end AI, it’s probably just mimicking the cruddy stopping behavior of most humans: accelerate as far as possible, then slam on the brakes at the last moment.
I don’t know of any driver who stops like FSD does, and I have rarely, if ever, seen another driver on the road stop that way.

It’s some weird training error problem they should be able to simulate and then figure out why it is happening. Or one of their other training inputs or guardrails is overriding the correct control, probably for safety.

They either know what the problem is or they are not bothering to simulate it.
 
I seem to read reports of people having more issues with autopark in parking garages. I think they probably just need more parking garage training data. People parking outside for the most part seem to have good success.
This is what FSD v12.3.6 thinks of my workplace's perfectly ordinary, well-marked, outdoor parking lot. I am frankly terrified to try Autopark here.
[Attachment: FSD_Autopark_Fail.jpg]
 
That said, with a completely end-to-end v12-style neural network (vision in and control out), there is not necessarily any single point in the network (or "program") that represents the concept of "moving object". All of the logic is scrambled together and interrelated, from start to finish. This is responsible for the incredible power of deep neural networks, but also for its inscrutability. When the network does make a mistake, there's often no way to precisely figure out why, or how to fix it. The entire AI industry is grappling with this double-edged sword, and it's why problems that require near-100% accuracy (such as L4 FSD) are so difficult to solve with neural networks. It's also why I expect city-streets vision-only FSD to remain firmly L2 for at least several more years, and probably end up requiring additional direct sensors (radar/lidar) to solve optimally.
I'm worried the E2E approach will get us very far very fast, but will ultimately be a dead end. Machine learning has a degree of randomness, which works fine for text and image generation, but not for making driving decisions. And as you said, diagnosing errors is difficult to impossible with machine learning. I don't think this foundation is solid enough to be trusted for humanless driving.

I think Tesla's previous path was the correct albeit more difficult one. Driving should be solvable with heuristics, supplemented with AI for the parts where there is a very large solution space (e.g. identifying objects from images and path finding). Driving is complex; they just needed more time to pin down all of the rules.

Ironically the problem of autonomous driving would be a lot simpler if all cars had the hardware. Then the cars could communicate their position and intentions precisely with each other. There would still be the issue of outside events to deal with however, such as animals or debris entering the road. Maybe at some point they will start building the hardware into new cars in preparation of an autonomous-only transition date.
 
Robotaxi is going to have a hard time competing with Uber for speed. I had to use Uber today, and both drivers were rolling through stops at 15, though the second was really good about driving exactly the limit around schools.
There's an important point here, which is that real-world driving is quite a lot about having the general intelligence to know when and how to bend the rules, and equally important, when not to. Mechanistic adherence to the rules will always feel un-humanly robotic. There's also a social aspect where you're going to probably give a human Uber driver the benefit of the doubt for doing something slightly out-of-bounds (like doing a 3-point turn using a private driveway), while with a non-human Robotaxi such fudging is more apt to be reported and penalized. Until the car has the ability to literally explain its actions (and I do believe KITT-style conversational FSD will one day be a thing), there's going to be a double standard here.