Navigate on Autopilot is Useless (2018.42.3)

Been trying to drive the non-AP1 cars more the past few days... and man... I just don't think I can do it.

I was on the interstate very early this morning; there was virtually no traffic. I'm going down a relatively straight stretch of road with zero vehicles or obstacles ahead of me for almost a full mile. What does the Model 3 decide to do? Oh, right, it tries to break my neck with a full panic brake from 75 MPH to at least 45 MPH before I react and mash the accelerator to undo it.

That's on top of, while trying to use NoA: trying to pass non-existent vehicles, attempting lane changes into the median, tailgating a semi at more than 20 MPH below my set speed for about a full minute before even considering a lane change, and having no clue whatsoever what to do for lane keeping in a surprisingly well-marked construction zone (AP2 consistently appears to aim for the barrel barricade to the right as the lanes shift left; AP1 follows the temporary markings just fine).



AP1, on the other hand, can do this entire route with zero interventions. Every. Single. Time.
 
Hands off the wheel is a long way away, unless we want to accept more collisions as a result of AP usage of any flavor. Last year, the one time I had to do a quick evasive maneuver while using AP was to avoid a large chunk of metal in the road. I had enough time to react as it came out from under the truck in front of me, but only because I had both hands on the wheel. Neither AP1 nor AP2 is reliable enough at this point to pull such a move. AP1 would smoothly run right over it, and I suspect AP2 would do the same.

I disagree with Tacoma having a low population of Teslas. More than once I've gone past one of the bridges that causes issues while within sight of two other Teslas on the highway. At this point they have years of disengagement and override data. One of the spots is particularly frustrating because it's a slowdown where the car believes you're on the adjacent road, even though you're still on the freeway.

@wk075 you are making me want to get an action cam so I can record and show you a NoA session that does all those tasks properly. I've seen it plenty of times. I don't understand the discrepancy either and would really like to.
 
It's not clear to me what is causing this disparity,

I have two extremely strong suspicions.

1. Way down in their earliest neural nets, deep down in the earliest work they performed when MobilEye told them they were getting the boot, they botched the design, and no matter how much training they do, it's still broken. At this point, replacing some of that early work would be equivalent to completely starting over and having to redesign major portions yet again. Andrej gave a conference talk that was mostly void of content except this tidbit, where he noted that the "rewrite" Elon kept talking about was actually stacking new modules on top of old modules, with the eventual long-term goal of replacing the oldest components. I sincerely doubt that work has made any forward progress, much like "smart" summon and AP, given it's all hands on deck to get city streets AP to a point where they can recognize revenue.

2. There is a subtle crash in the traditional code portions of ADAS that causes the system to brake. Some sequence of probabilities from the sensor networks causes a hard-to-track-down, hard-to-reproduce event which crashes the traditional driving code, and you get hard braking. This would also explain why both radar and camera trigger it, why it's not triggered by speed limit changes, and why historically we haven't received the AEB audible alert.
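A minimal sketch of what suspicion 2 would look like in code, assuming a conventional sense-plan-act loop (the `Planner` and `control_step` names are invented for illustration; this is not Tesla's actual architecture): an unhandled exception in the planning layer falls through to a fail-safe hard brake, which never touches the AEB alert path.

```python
# Hypothetical sketch of suspicion #2: an unhandled crash in the
# traditional planning code falls through to a fail-safe hard brake.
# All names here are invented for illustration; this is not Tesla's code.

class Planner:
    """Consumes perception output; chokes on a rare malformed input."""
    def plan(self, obstacle_prob):
        if obstacle_prob is None:       # rare malformed sensor frame
            raise ValueError("unhandled sensor state")
        return "cruise" if obstacle_prob < 0.5 else "slow"

def control_step(planner, obstacle_prob):
    # If the planner crashes, the conservative fallback is to brake hard,
    # which to the driver looks exactly like phantom braking -- and notably
    # never goes through the AEB alert path, so no audible warning.
    try:
        return planner.plan(obstacle_prob)
    except Exception:
        return "hard_brake"
```

In this toy model a normal frame yields `cruise` and the one malformed frame yields a silent `hard_brake`, which would match both the rarity of the event and the missing AEB chime.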

In any event, I have zero hope that Tesla will fix this. Their software quality has gone downhill severely since they released the facelifted Model S, and the Model 3 introduction has only made things much worse. The number of outstanding bugs I'm waiting to be fixed on my 3 is crazy, and most of them are multiple years old. They obviously only care about announcing their next product, and if you've already purchased from them they have no further use for you.
 
I have two extremely strong suspicions. […]
Hmmm... don't think it's clear to you either.
 
They obviously only care about announcing their next product, and if you've already purchased from them they have no further use for you.

While the rest of your post doesn't seem to make a lot of sense from a technical perspective (the short version is that the behaviors described don't have any path towards being able to be related to the causes you ascribe), the quoted part above is unfortunately completely true.

As for the actual reasons why these issues occur, the phantom braking thing, to me, seems like a standard garbage in/garbage out issue. I mean, you can tell how glitchy the actual data is just by sitting still and looking at all of the floating and glitching vehicles. If at some point the system gets a glitching input that shows something is dead ahead, even for a moment, the AEB part of the code (or the part that simply needs to slow quickly for TACC) will have to react, and it does. Pretty simple. The stability of the data coming out of AP2 is far worse than AP1 from a completely objective standpoint. As in, it is quantifiably worse. AP2 has more data but of low quality, AP1 has less data but higher quality. If AP1 thinks a car is ahead, there's a damn good chance there's a car ahead. AP2 on the other hand thinks my mailbox flowerbed area is a semi truck occasionally, so... not much faith there.
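To make the garbage-in/garbage-out point concrete, here's a toy sketch (purely illustrative, not anyone's actual stack; the function names and debounce length are assumptions): a controller that acts on raw per-frame detections brakes on a single glitched frame, while one that requires a few consecutive detections rides through the glitch.

```python
# Toy per-frame obstacle flags: one glitched frame claims something is
# dead ahead. Names and the debounce length are illustrative assumptions.

def brake_signal_naive(frames):
    # reacts to every raw frame: a single false positive triggers braking
    return [bool(f) for f in frames]

def brake_signal_debounced(frames, n=3):
    # require n consecutive detections before acting
    out, streak = [], 0
    for f in frames:
        streak = streak + 1 if f else 0
        out.append(streak >= n)
    return out

glitchy = [0, 0, 1, 0, 0]   # a one-frame false positive mid-stream
```

The trade-off any such filter makes is n-1 frames of added reaction latency for real obstacles, which is presumably why a safety system is tuned to err toward reacting.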

For the NoA issues, this is just poor software. As far as I can tell, NoA is not machine learning. It's human-written code using the input data available to make decisions. Those decisions are just poorly designed and implemented. I know this because I implemented my own "NoA"-type setup on an AP1 car several years back. It didn't "navigate", but it was made to auto-pass other vehicles and return to an appropriate lane with zero input. Since AP1 doesn't have much side or rear information, I implemented this by having the car make a soft "ting ting" sound a couple of seconds before it was going to initiate an auto lane change, at which point I as the human can look around and make sure it's safe, and if I do nothing to prevent it, the car changes lanes by itself.
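The chime-then-change flow described above might be sketched like this (the function names and the two-second veto window are assumptions, not the poster's actual implementation):

```python
import time

def attempt_lane_change(is_blocked_ahead, human_vetoed, chime,
                        veto_window_s=2.0):
    """Announce an intended lane change, give the driver a veto window,
    and proceed only if no override arrives. Returns True to initiate."""
    if not is_blocked_ahead():
        return False                 # nothing to pass, stay in lane
    chime()                          # soft "ting ting" warning to the driver
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if human_vetoed():           # driver intervened: abort the change
            return False
        time.sleep(0.05)
    return True                      # no veto: initiate the lane change
```

The key design point is that the human fills in for the side/rear coverage AP1 lacks: the car proposes, and the driver approves simply by doing nothing during the veto window.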

My implementation with AP1 took me a few weeks of off and on work to implement and tweak, and that's building on top of the existing system with my third party addon hardware, not even a direct implementation in the AP module itself. If I can do that with AP1, then I am baffled why Tesla can't implement something at least as usable on their own with full access to do so directly. That level of missing the mark is mind blowing to me.
 
While the rest of your post doesn't seem to make a lot of sense from a technical perspective […] The stability of the data coming out of AP2 is far worse than AP1 from a completely objective standpoint.

The major difference is that when in motion, the data quality improves both from the cameras and the radar. The reason I suspect that one of the deeper networks is responsible for the main behavior is that we have seen this behavior persist since the very early days of the HW2+ platforms.

So, to dive deeper into what I actually think is going on: if you watch any of the video clips that Tesla or anyone else has produced with the debugging data enabled, take a look at the drivable space noted in the data. Even in Karpathy's latest tweet where he talks about the auto-labeling nonsense, you clearly see the pink road coloring disappear from time to time. So. My hunch here is that deep down in their stack of networks, they are doing a very poor job of determining what is navigable space and what is not; the probability drops below a threshold and they brake.

The AEB stuff happening lately is almost certainly just their extremely naive and poor approach to depth vision, given it happens much worse at night than during the day. I'm still waiting for someone to produce one of those voxel maps at night, but so far nobody has obliged.

As for the crash hypothesis, we see symptoms of the AP system panicking somewhat frequently with very brief alerts. I've even had situations where the vehicle kept driving, didn't give me the take over immediately alert, but when I looked at the alerts triangle on screen it had the warning in there. The only symptom I experienced was that it performed poorly on a curve on the highway.

The stability of data comment you made is exactly what I'm talking about. The lowest level networks are outputting garbage, which the rest of the system interprets. If that data shows that the navigable path ahead ends abruptly, then obviously the vehicle is going to attempt to stop. Then the probability rises again, it meets the threshold to be considered navigable space, and it keeps going.
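That flapping around a threshold is easy to model in a few lines (toy numbers and thresholds, purely illustrative): a noisy navigable-space probability crossing one hard cutoff produces brake/resume oscillation, while a hysteresis band with separate enter/exit thresholds rides through brief dips.

```python
# Toy navigable-space probabilities; the thresholds are assumptions.

def hard_threshold(probs, cutoff=0.5):
    # brake whenever the instantaneous probability dips below the cutoff
    return ["brake" if p < cutoff else "go" for p in probs]

def with_hysteresis(probs, enter_brake=0.3, exit_brake=0.5):
    # only brake on a deep dip, and only resume once well clear of it
    state, out = "go", []
    for p in probs:
        if state == "go" and p < enter_brake:
            state = "brake"
        elif state == "brake" and p > exit_brake:
            state = "go"
        out.append(state)
    return out

noisy = [0.9, 0.45, 0.9, 0.48, 0.9]   # brief dips just below 0.5
```

The hard threshold brakes twice on the noisy trace while the hysteresis version never does, yet a genuinely deep dip still triggers braking in both.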

NoA is pure garbage, and none of my comments related to NoA at all. That's just a steaming pile, and it has been since its release. They foisted that junk on us, and immediately started working on "smart" summon. Once that trash was released, they promised an update "in two weeks", which predictably never came. Then, this year, Andrej showed the whole world what those of us with concerns were complaining about. It was never going to be possible for their approach to produce a safe or usable smart summon system. The fact that there are reports of it working is the fluke, not that it doesn't work.
 
Amazing how one's evaluation of the usability of the AP functions seems to depend so much on the expectations one brings to that evaluation.

Some people will see what something does, and what it doesn't do, and simply use what it does within the boundaries and limitations of what it can do.

Other people seem to bring high expectations to their use, and when it fails to meet those expectations they are outraged, and perhaps they even deny themselves the usability of what is there, even if it doesn't meet their expectations.

Some people refuse to use AP at all, even in the obvious use cases, such as bumper to bumper highway traffic or long road trips, because it doesn't drive like they do or otherwise meet their expectations. There is a level of that for other Tesla AP functions too. Summon does what it does and sometimes that is very useful.

Shrug. I suppose some people hate their Roomba because it doesn't vacuum like they would. I suspect such people score very low on Openness.
 
All of the AP1 vehicles far outperform any of the AP2.5 or AP3 vehicles on highway use. Hands down. I know it, the other people who use my vehicles know it, my passengers know it, etc. There's a clear distinction.

I keep hearing about this phantom braking thing, and I'm puzzled. AP1 behavior was great up until later releases where they turned up the paranoia on the radar and it started phantom braking all the time, and I sold that car before it ever stopped. Although I hear there haven't been any AP1 updates in many years, so that's that. AP3 has been a dream compared to that. After going to Tesla Vision starting with 10.3, I can only recall one time where there was phantom braking on the highway, and I don't think it was going to zero. I intervened immediately with throttle, didn't lose more than 5 mph, probably. Certainly better than it was in the 2 years prior to FSD beta. Longitudinal control closely resembles early AP1 builds now, IMHO.
 
The major difference is that when in motion, the data quality improves both from the cameras and the radar. […]
🤔
 
In the presentation Andrej gave recently about going to pure vision, they showed exactly what sort of data caused phantom braking events. It's bad range-finding data coming out of the vision or the radar. Eliminating the radar means they are left with one stack instead of two, so there is less to optimize. Absent adding lidar to get a third value, that is probably the best choice available.
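The "one stack instead of two" point can be shown with a trivial sketch (hypothetical numbers and names): a safety-first fuser must respect whichever sensor reports the nearer obstacle, so a glitch in either stack alone is enough to trigger braking.

```python
def conservative_range(vision_m, radar_m):
    # a safety-first fuser trusts the nearer (scarier) of the two estimates
    return min(vision_m, radar_m)

# Vision briefly hallucinates a close object while radar sees open road:
# the fused range collapses to 8 m and the car slows anyway.
glitched = conservative_range(vision_m=8.0, radar_m=120.0)
clear = conservative_range(vision_m=120.0, radar_m=120.0)
```

With radar removed there is only one estimate left to clean up, which is the "less to optimize" argument, at the cost of losing redundancy when vision genuinely fails.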
 
I have never experienced AP1, but over Thanksgiving 2019 I drove from southern NH to Orlando, FL (Disney) mostly on AP (with NoA off), and I never had a single phantom braking event. True for both legs of the trip.

Also, contrary to a lot of reports here, now that I'm on vision-only due to FSD beta, I have fewer phantom brakes on my highway commute. With radar, I would get 1 or 2 phantom brakes due to an overpass or an overhead sign. Those don't happen at all anymore. The phantom braking I get on the highway now is when a car in an adjacent lane drifts toward me, and my car slows in anticipation of it coming into my lane. It doesn't happen often though.

That said, phantom braking on local roads is quite frequent on FSD beta. Sometimes it's due to tree shadows, but sometimes it's completely inexplicable.
 
within the boundaries and limitations of what it can do.

Which, to be clear, is nothing useful. When you read the user manual, it basically tells you to always be ready for it to do the most dangerous thing at the most dangerous time, which makes it unusable.

Other people seem to bring some high expectations to their use

Well, when the CEO tweets that it's "superhuman", you're bound to get users that hold him to that claim. And since it's complete nonsense, we can easily disregard his other claims. Unless, of course, you've been operating your TeslaNet RoboTaxi fleet since 2018 after having your Tesla drive itself from coast to coast in the US in 2017 without anybody even in the car? If you've done those things, I'll gladly concede to you.
 
.. to you.

By definition. Remember the guy having his Y park for him, and then it jumped the curb and hit a tree without warning? If you can't rely on the system being able to be stopped before something dangerous happens, then you can't rely on using the system at all. This is the problem with the naive approach Tesla has taken, and why they are the only manufacturer doing such a thing.

Compare to Mercedes Drive Pilot, which is a much more cautious rollout with strict limits on where physically it can be used as well as what speeds it can be enabled at. And, get this, when an emergency vehicle is detected, it requires you to take over. Crazy, right?
 
If you can't rely on the system being able to be stopped before something dangerous happens, then you can't rely on using the system at all.
In which case you need to ban seat belts, which can jam, trapping you in the car, and airbags, which can trigger when not needed and cause a crash. And gas engines, which can cause fires after a crash. Oh, and airplanes. And the list goes on…

Saying something should be banned because it caused a single accident is ridiculous, EVERYTHING can do that, so why are you singling out the car?
 
In which case you need to ban seat belts […] Saying something should be banned because it caused a single accident is ridiculous, EVERYTHING can do that, so why are you singling out the car?

This is a nonsense response to an actually serious issue. People like you have been making these foolish excuses for a long, long time now. When Elon calls it "superhuman", and people lap it up, all I can do is shake my head because I know in a year he's going to have another revolutionary event on stage where he divulges that all of the previous work was fundamentally broken but this time they've got it for certain.

As for "caused a single accident", how about 10 confirmed AP deaths? I mean, it's useless trying to convince you, even though there are technical dissections, researcher discussions, and industry experts telling you that you're wrong. And my favorite story so far: "Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars".
 
This is a nonsense response to an actually serious issue. […] As for "caused a single accident", how about 10 confirmed AP deaths?
Of course it is nonsense, that was my point. And it's equally nonsense to say "it caused a serious accident, therefore it is unusable", which is literally what was said.

As for 10 deaths, how many deaths are caused each year by faulty seatbelts? How many lives were saved? A statistic like 10 deaths, terrible and sad though that is, is meaningless outside of context. Antibiotics occasionally cause deaths. Should we stop using them? Of course not, because the benefit far outweighs the risk.

If you want to argue that the risks of AP outweigh the benefits, that's fine, make that argument. Show that those deaths were not counter-balanced by a reduction in severe or fatal accidents. Or argue that Tesla or the NHTSA or someone needs to try to determine that information, or provide guidelines for HOW to determine it. All of that is fine and would advance the dialog immensely, but presenting just one side of the argument undermines your credibility.

As for "people like me making foolish excuses", you seem to think that anyone who doesn't agree with you is automatically a fan-boy incapable of independent thought. Sorry, but nope.
 
Here's what worries me the most about this stuff: my original post opening this thread was three years ago... and while a few things on that original bullet list have had some minor improvements, the feature as a whole (NoA) is still pretty much the same as it was then, with most of the same flaws.

It's baffling to me that after 7 years of promises, we still don't even have hands free on-ramp to off-ramp. (And yes, by hands free I mean hands free unless the car says it needs me, not nag me every 10-30 seconds to tug the wheel.)
 
Shrug. I suppose some people hate their Roomba because it doesn't vacuum like they would.

Some things either work as intended or they don't.

For example, I have both a Roomba and a Dyson Eye Bot vacuum cleaner.

I got the Roomba right after being extremely let down by the Eye Bot. I didn't return the Eye Bot because it had just been released to the market, and I figured it had potential with future SW updates.

After the two faced off and the Roomba proved its total superiority, I moved the Eye Bot upstairs. Partly it was because it was a smaller area, and partly because I was hoping it would fall down the stairs and die. It never did fall down the stairs, and it seems perfectly content vacuuming a small area.

At some point over a year later I got a refund for the Eye Bot that I never asked for. I emailed Dyson telling them that I never requested it, and they said they must have refunded me by mistake, but they never reversed it.

The Eye Bot never amounted to much, as its early failures doomed it, and the updates that were supposed to magically make it better never materialized.

Much the same can be said of NoA, where I bought it anticipating that it would work as advertised, but in 3+ years of having EAP+FSD that's never happened.

There isn't a single item to blame for this; instead it's a mix of reasons.

- The risk of phantom braking reduced a lot of the tranquility that was at the heart of what NoA would be.
- Navigation/maps issues meant that there were areas where I couldn't use it. Some were obvious, where I don't really have an expectation of it working, and others I found rather surprising given how long the road has stayed the same.
- For a long time AP would re-center at merge points, which drove me nuts. Thankfully they mostly fixed that in recent builds.
- To this day I don't see AP slowing down for people changing lanes in front of me. It doesn't consistently give way when someone is merging into my lane.
- NoA can be downright rude about when it wants to change lanes. Much of the logic behind lane changes baffles me.
 