Autopilot goof. Good thing I was paying attention :)

This isn't what I see on local two-lane roads (one lane each direction). The AP display shows only one lane (the one I am in). It ignores the oncoming lane(s) completely. Therefore "it just sees lanes" doesn't seem to be accurate. Now obviously it is using lane lines to (largely) determine which lane(s) are available to you and which to ignore (i.e. which are oncoming lanes).

The issue is that while AP will ignore the lane on the other side of the divider, it does not know whether you are in the correct lane. For example, if you were to cross over the divider and re-engage AP, it would center you in that lane even though it is the lane for oncoming traffic. It will not move you back over to the correct lane. So AP does not know that one lane is right and the other lane is wrong; it will center you in whatever lane you happen to be in. That's why, if the divider line temporarily disappears, AP may accidentally re-center you in the lane for oncoming traffic.
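To make that concrete, here's a toy sketch (purely illustrative, and nothing to do with Tesla's actual software) of what a pure lane-centering policy looks like: it only asks "which lane am I closest to?" and steers toward that lane's center, with no notion of which lanes carry oncoming traffic.

```python
# Toy illustration only: a naive lane-keeping policy with no concept of travel direction.
from dataclasses import dataclass

@dataclass
class Lane:
    left_edge: float   # lateral position of the lane's left line (meters)
    right_edge: float  # lateral position of the lane's right line (meters)

    @property
    def center(self) -> float:
        return (self.left_edge + self.right_edge) / 2.0

def steering_target(car_lateral_pos: float, visible_lanes: list[Lane]) -> float:
    """Pick the visible lane whose center is nearest the car and steer toward it.

    Note what's missing: nothing checks whether the chosen lane is a
    same-direction lane. If the car has drifted across a faded divider,
    this logic happily re-centers it in the oncoming lane.
    """
    nearest = min(visible_lanes, key=lambda lane: abs(lane.center - car_lateral_pos))
    return nearest.center

# Example: the car has wandered to lateral position -0.5 m, just across the divider.
lanes = [Lane(-3.7, 0.0), Lane(0.0, 3.7)]   # oncoming lane, then the correct lane
print(steering_target(-0.5, lanes))          # -1.85: it targets the oncoming lane's center
```

That's the behavior described above in miniature: whatever lane you end up in is, as far as the controller is concerned, "the" lane.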
 
I fear that as autopilot continues to improve, this attitude that it's driving the car instead of you will get worse.

Experts in the field also fear this. It may result in a paradoxical increase in accident rates as AP/FSD gets better (at least until hitting a crossover “FSD utopia” point). Guess we are going to find out!

Actually, we won't find out, because no one will bother to gather enough unassailable data comparing human drivers, current AP, and future AP/FSD, to be able to draw any conclusion whatsoever, and people will argue about it on the internet for years.
 
I use AP on long stretches of non-freeway roads all the time. But I'm paying attention every second. It's not like those of us that do this are carelessly ignoring what's happening. Being able to enable AP and see how far it can go is like Beta Testing and we mentally agree to the terms and conditions. haha
 
While on a trip from the Central Valley of California to Monterey, California, we were travelling west on State Route 152 west of Los Banos and going up the grade to Pacheco Pass. This is a four-lane section of divided highway. I had our Model 3's Enhanced Autopilot engaged. As we were going around a left curve in the road on the upside of the grade, our Model 3, which was in the #2 lane (the right lane), seemed to be having difficulty maintaining lane position, and as it went around the curve it got closer and closer to the lines dividing the #1 and #2 lanes. Unfortunately, there was a vehicle in the #1 lane directly opposite us. When our Model 3 got so close that it was probably within a few inches of the other car, Autopilot suddenly beeped a warning tone and I had to take control to move the car back into the proper lane position within the #2 lane. I'm sure the people in the car next to us in the #1 lane thought maybe I needed to have my driver's license status re-evaluated. They did not look happy. It was a bright, clear, sunny day, so weather wasn't a factor.

I've had a few similar incidents but none where the car came quite so close to making contact with another vehicle.

My point, I suppose, is that while my wife and I both love the Enhanced Auto-Pilot features and we use it often, we have definitely experienced enough "glitches" with it that we don't fully trust it and always pay close attention while it is engaged -- which, I know, is exactly what Tesla recommends. It is certainly nowhere close to what Full Self Driving capability needs to be.
 
Out of curiosity, what speed had you set it at?
 
I allowed it to carry on with its maneuver to see if it could figure out what to do. Alas, it did not.
So the end of the video was showing you manually pulling it back to the right side of the road?

I also use EAP "everywhere it'll engage". I've not seen an incident of that exact behavior, although I did have it get faked out by a curbed approach that was fully perpendicular to the road I was traveling: it tried to dive into it and then set off its alarm, saying it was confused and thought it was going to crash (it was right :p). I manually brought it back on course, because yeah, "always keep your hand on the wheel".

The weird part is the road I was traveling is very straight and reasonably marked, at least in my mind. I don't have a USB drive installed, so I don't have video of it. I haven't gotten around to going back to try to recreate the behavior (I R programmer :rolleyes:), and haven't studied the location that closely, so I'm not sure what it was. The area has also changed slightly now; it was something of a construction zone, but a long-term and well-marked one.

P.S. There is one place where I travel at least weekly that is marked enough for EAP to engage, and then as you come through a turn the markings disappear entirely (rural TX FTW!). EAP holds the 'lane' correctly for about a quarter mile, avoiding the wrong side of the road and the on-the-edge-of-the-pavement mailboxes (I mentioned rural TX FTW?), chugging along until I have to engage the brakes for a stop sign. EAP wasn't able to do this when I got the car last fall, but somewhere along the line the lane reading got good enough that it could use raw pavement, as long as it was already engaged.
 
Looking at the video again, it wouldn't surprise me if the concrete barrier on the right was a significant factor leading to the incorrect assessment. EAP is still a little squirrelly around those.
 
Experts in the field also fear this. It may result in a paradoxical increase in accident rates as AP/FSD gets better (at least until hitting a crossover “FSD utopia” point). Guess we are going to find out!

Actually, we won't find out, because no one will bother to gather enough unassailable data comparing human drivers, current AP, and future AP/FSD, to be able to draw any conclusion whatsoever, and people will argue about it on the internet for years.

So far, so good, though, on the "drivers staying alert and engaged" front. And it is being studied already; this has years of data gathering behind it:

https://hcai.mit.edu/tesla-autopilot-human-side.pdf
 
It is being studied, but it does not have years of data gathering (with a capable Autopilot) behind it, and the author would be very skeptical about drawing this conclusion from his study. (He has posted here recently, you could ask for his summary, or you could look at the excerpts below.) Indeed, the problem with alertness/engagement gets worse the better the system gets (potentially - it's a hypothesis).

I've responded to this elsewhere; reposting the gist of my prior post from the thread "What the chances Tesla cars will be self driving in 3 years? Why do you think that way?":

-----
The paper is very, very clear about its limited scope and how the results are unlikely to extrapolate to more capable systems. In a very specific situation, the 21 drivers in the study seemed to stay engaged and maintain good awareness when using AP. There are a number of possible reasons for this discussed in the paper. I recommend reading it through.


“...the Autopilot dataset includes 323,384 total miles and 112,427 miles under Autopilot control. Of the 21 vehicles in the dataset, 16 are HW1 vehicles and 5 are HW2 vehicles.
The Autopilot dataset contains a total of 26,638 epochs of Autopilot utilization...

…these findings (1) cannot be directly used to infer safety as a much larger dataset would be required for crash-based statistical analysis of risk, (2) may not be generalizable to a population of drivers nor Autopilot versions outside our dataset, (3) do not include challenging scenarios that did not lead to Autopilot disengagement, (4) are based on human-annotation of critical signals, and (5) do not imply that driver attention management systems are not potentially highly beneficial additions to the functional vigilance framework for the purpose of encouraging the driver to remain appropriately attentive to the road…

…Research in the scientific literature has shown that highly reliable automation systems can lead to a state of “automation complacency” in which the human operator becomes satisfied that the automation is competent and is controlling the vehicle satisfactorily. And under such a circumstance, the human operator’s belief about system competence may lead them to become complacent about their own supervisory responsibilities and may, in fact, lead them to believe that their supervision of the system or environment is not necessary….The corollary to increased complacency with highly reliable automation systems is that decreases in automation reliability should reduce automation complacency, that is, increase the detection rate of automation failures….

…Wickens & Dixon hypothesized that when the reliability level of an automated system falls below some limit (which they suggested lies at approximately 70% with a standard error of 14%) most human operators would no longer be inclined to rely on it. However, they reported that some humans do continue to rely on such automated systems. Further, May [23] also found that participants continued to show complacency effects even at low automation reliability. This type of research has led to the recognition that additional factors like first failure, the temporal sequence of failures, and the time between failures may all be important in addition to the basic rate of failure….

….We filtered out a set of epochs that were difficult to annotate accurately. This set consisted of disengagements … [when] the sun was below the horizon computed based on the location of the vehicles and the current date. [So all miles are daytime miles]

Normalizing to the number of Autopilot miles driven during the day in our dataset, it is possible to determine the rate of tricky disengagements. This rate is, on average, one tricky disengagement every 9.2 miles of Autopilot driving. Recall that, in the research literature (see §II-A), rates of automation anomalies that are studied in the lab or simulator are often artificially increased in order to obtain more data faster [19] such as “1 anomaly every 3.5 minutes” or “1 anomaly every 30 minutes.” This contrasts with rates of “real systems in the world” where anomalies and failures can occur at much lower rates (once every 2 weeks, or even much more rare than that). The rate of disengagement observed thus far in our study suggests that the current Autopilot system is still in an early state, where it still has imperfections and this level of reliability plays a role in determining trust and human operator levels of functional vigilance...

...We hypothesize two explanations for the results as detailed below: (1) exploration and (2) imperfection. The latter may very well be the critical contributor to the observed behavior. Drivers in our dataset were addressing tricky situations at the rate of 1 every 9.2 miles. This rate led to a level of functional vigilance in which drivers were anticipating when and where a tricky situation would arise or a disengagement was necessary 90.6% of the time…..

…In other words, perfect may be the enemy of good when the human factor is considered. A successful AI-assisted system may not be one that is 99.99...% perfect but one that is far from perfect and effectively communicates its imperfections….

...It is also recognized that we are talking about behavior observed in this substantive but still limited naturalistic sample. This does not ignore the likelihood that there are some individuals in the population as a whole who may over-trust a technology or otherwise become complacent about monitoring system behavior no matter the functional design characteristics of the system. The minority of drivers who use the system incorrectly may be large enough to significantly offset the functional vigilance characteristics of the majority of the drivers when considered statistically at the fleet level.”
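For anyone wondering where a figure like "one tricky disengagement every 9.2 miles" comes from mechanically, it's just the normalization the excerpt describes: daytime Autopilot miles divided by the number of annotated tricky disengagements. A quick sketch (the two counts below are placeholders I made up to land near the paper's ratio; the actual daytime-mile and disengagement counts aren't given in the excerpt):

```python
# Back-of-the-envelope version of the normalization described in the excerpt:
# rate = (daytime Autopilot miles) / (number of annotated "tricky" disengagements).
# Both inputs are made-up placeholders, NOT the study's actual counts.
daytime_autopilot_miles = 92_000   # hypothetical daytime subset of the 112,427 AP miles
tricky_disengagements = 10_000     # hypothetical count of annotated tricky disengagements

miles_per_tricky = daytime_autopilot_miles / tricky_disengagements
print(f"~1 tricky disengagement every {miles_per_tricky:.1f} miles")  # ~9.2 with these placeholders
```

A few of those per typical commute is, per the authors, enough "imperfection" to keep drivers functionally vigilant.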
 
The paper is very, very clear about its limited scope and how the results are unlikely to extrapolate to more capable systems.
Thus "So far". :) Yeah, although it was widely posited that it'd be (and even asserted it was already) a major issue right from the beginning. So even with this caveat, this was pretty good news.

Edit: So we aren't there yet... and also, if anything, Tesla is getting more strict on its "nags". I expect the "nags" are a pretty good stand-in for "imperfections". Basically those nags are actively, artificially undermining confidence.
 
Thus "So far". :) Yeah, although it was widely posited that it'd be (and even asserted it was already) a major issue right from the beginning. So even with this caveat, this was pretty good news.

Yeah, we can hope. My feeling is, though, just based on human nature, that it's a pretty reasonable hypothesis that once things get good enough, this is going to start to be a problem. Right now it's nowhere near good enough - you'd have to be certifiable to take your eyes off the road or your hands off the wheel even for an instant. I'd actually be more comfortable doing those things (which is not to say comfortable at all!) when I DON'T have EAP engaged...maybe. Tough call.
 
Yeah, we can hope. My feeling is, though, just based on human nature, that it's a pretty reasonable hypothesis that once things get good enough, this is going to start to be a problem. Right now it's nowhere near good enough - you'd have to be certifiable to take your eyes off the road or your hands off the wheel even for an instant. I'd actually be more comfortable doing those things (which is not to say comfortable at all!) when I DON'T have EAP engaged.
I ninja-edited about why I think the presence of nags, even their increasing stringency, gives reason to hope we bridge the gap until FSD is good enough to be better than human.
 
Basically those nags are actively, artificially undermining confidence.

Yes, they're part of a comprehensive driver attention management system. I think Fridman would tend to say that they are crucial for maintaining safety as the system gets more capable. (Item 5 in his list of things relating to the limitations of his findings.)
 
I don't know if you've used NoAP w/o the acknowledgement requirement yet, but I've tried it out some now. As someone who originally assessed NoAP as junk for my purposes, I've been impressed with the improvement in this iteration. But I've also found it isn't happy about me using the "adjust the volume thumbwheel" cheat, at least at the higher speeds I was driving last night (on a 75 mph limit, divided highway, so my set point was 80+ most of the way); it seems I had to provide steering wheel tension for it to engage in a lane change it wanted.
 
I've heard this, but no opportunity to try yet - fortunately I do not even have to get on a freeway for my commute. I am very curious about just how much torque has to be exerted - I've got it all set up and ready to go for my next freeway drive, but no chances over the last couple weeks. I guess I should have bought FSD, as I'm sure that will work brilliantly for my slog through multiple traffic lights. ;) There are so many Model 3s headed to Qualcomm in the morning on that drag, it will be interesting, especially if they all have FSD. In fact, I hope they all bought FSD; it's going to be pretty easy to push those poor computers around.
 
Wow, crazy! I think some of these are just going to need to be reported to Tesla (voice command: "Report a problem with ..."). I did that over and over again for a curve in my area close to an intersection where the markings disappear (nothing as bad, though, as the example in the video shown).

Somewhere in the last 6-12 months they fixed it (I don't think they suddenly improved the AI; I bet someone at Tesla can manually review and place some overrides on the map??).

How would AP1 handle that same situation, one wonders?
What is there to report? "Hey Tesla, your AP doesn't work in a situation the owner's manual says it isn't supposed to be used in"? They know it doesn't work as well off the highway; that's why they say to use it on the highway only. That's like having Summon go through the drive-through at Starbucks and then complaining when it crashes into the order menu. I know people are going to say "I use it all the time off highway, it should work." Well, not with a high enough reliability rate, according to the manufacturer, for them to feel comfortable telling you it's OK. So don't be surprised or freak out when it doesn't work. It's working as advertised. Period. Now, situations where it doesn't work on the highway are an altogether different problem.
 
Yeah, mine does stuff like that too.
Whenever it perceives a lane split it gets confused and often takes the left lane (the inside "fast lane" or, as in this case, the oncoming traffic lane). :eek:
It should default to the safer "slow lane" on the right - but then it might take some wrong exits, won't it? :rolleyes:
Plenty of learning to do... Need a "Beware of Student A.I." sticker. :D