
Just Drove My Model Y in Light Rain at Night for 1st Time with "Tesla Vision" (2022.36.6): Autopilot was **Unusable**

All:

I just drove my Model Y at night in light to moderate rain for the first time since accepting an update to one of the supposed "Tesla Vision" builds. Ooooh boy...it's not looking good for Tesla.

Note: I recently jumped from software version 2022.20 to 2022.36.6. The vehicle has radar, but this build supposedly uses the radar for almost nothing at this point. Greentheonly (who is also on this site) isn't convinced it's *never* used, but we can argue about that in one of the open threads.

Anyway, I was on a portion of a major highway with good lane markings -- I-66 eastbound in Northern Virginia -- driving in light to moderate rain. Definitely not a torrential downpour.

I'm not exaggerating when I say the vehicle was nearly **un-driveable** with Autopilot engaged under these circumstances. I'm completely stunned by how bad it was and how I had to nurse the vehicle constantly to get any Autopilot features to even work or agree to engage.

Summary:

- Far more frequent occurrences of the "XXX Camera is blocked" notification than ever before. This was rare for me previously; on this drive it stayed on for the entire 30-minute trip despite no cameras being physically blocked, just typical light to moderate rain.

- Near-constant "Autopilot speed limited by limited front camera visibility" messages. It wouldn't let me set the Autopilot cruise control speed above 60 or 65 MPH, and even that limit varied inconsistently.

- Wouldn't honor whatever speed it reluctantly agreed to. People were passing me left and right at well over 65 MPH, but my Model Y kept slowing down below *40 MPH* and still decelerating. I had to continually step in with the accelerator to get it to hold whatever speed it had agreed to. There wasn't anyone in front of me who would've forced the slowdown. I kept wondering WTF was going on.

- The "Regen braking limited" notification stayed on the entire drive due to lowish temperatures, but it wasn't *that* cold. Note that I drove our Volvo XC40 later last night on a grocery run and had full regenerative braking at even colder temperatures.

- Forcing us to accept "auto high beams" and "auto wipers" when Autopilot is engaged is **clearly** a disaster; neither worked reliably. The high beams came on and off inconsistently the entire drive, often flashing people as I drove under overpasses. The wipers were *continually* set poorly. I had to manually fight both the entire drive, a non-ideal distraction when driving in the rain at night.

I'm utterly *p*ssed* about the continually degrading interface and performance of the vehicle. I could sell it for at least $10K *profit* at this point, and am strongly considering it.

I mean...wow. Holy &^%$# it's bad. Tesla needs to try and salvage this instead of wasting time on nonsense like "light shows" and fart apps. It's awful.

- B


"E-Cars parked at Tesla Supercharging Stations in Germany, under cloudy sky" by verchmarco is licensed under CC BY 2.0.
(Image added by admin for purpose of TMC Blog)
 
To think that the car has better judgement than I do is the absolute peak of hubris and is categorically wrong.
The only way to really know for sure is to look at the numbers. People tend to believe they're amazingly skilled and safe drivers, but the numbers don't lie. By and large, driving cars is very, VERY unsafe - huge numbers of severe injuries and fatalities, and it's nearly always human error that causes them. I'm not sure if there are published stats yet on the various levels of autonomy and whether they are improving or degrading driving safety, but it shouldn't be hard to compile. I wouldn't be surprised to find out that thinking the car has better judgement than I do is not hubris at all, and is categorically (well, at least statistically) right, not wrong.
 
Technically, the numbers for FSD are misleading because of selection bias: FSD is available in a much smaller set of circumstances than where humans drive. Also, technically, every time FSD gives up and the human takes over should be counted as an accident; if the human fails, there is no one to take over and save the situation.

Even if the technical challenges are solved, there are a lot of ethical problems we haven't even touched. For example, would you buy a car that has a bias to "sacrifice" the passenger over other people? And how would you even know about such a bias?
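To make that selection-bias point concrete, here's a minimal sketch in Python. Every number in it is invented purely for illustration (not real Tesla or NHTSA data); it just shows how a system that only operates on easy roads can post a much better raw per-mile accident rate than humans, even if it is no safer on those same roads:

```python
# Illustrative only -- all numbers below are invented to show the arithmetic,
# not real Tesla or NHTSA figures.

# Hypothetical accident rates per million miles, by road type
human_rate = {"highway": 0.5, "city": 2.0}

# Humans drive everywhere; the assistant only engages on highways
human_miles = {"highway": 400, "city": 600}   # millions of miles
assist_miles = {"highway": 300, "city": 0}

def rate_per_million(rates, miles):
    """Overall accidents per million miles for the given mileage mix."""
    accidents = sum(rates[k] * miles[k] for k in miles)
    return accidents / sum(miles.values())

# Assume the assistant is *exactly* as safe as a human on the roads it handles
print(f"Human (all roads):        {rate_per_million(human_rate, human_miles):.2f} per million miles")
print(f"Assistant (highway only): {rate_per_million(human_rate, assist_miles):.2f} per million miles")
# Output: 1.40 vs 0.50 -- the assistant looks ~3x "safer" purely because it
# skips the riskier city miles, not because it drives any better.
```

Same underlying safety on the highway, very different headline number - which is exactly the apples-to-oranges comparison being described here.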
 
Thank you for pointing this out; I had recalled reading this somewhere but couldn't remember where. Turns out, it was from my Oregon driver's license test 20 years ago (I'm sure other states say the same):

Oregon Driver Manual - Section 7: Safe and Responsible Driving, subsection Hazardous Conditions:
Rain:
"Do not use cruise control in wet conditions."
Snow and Ice:
"Keep windows clear of snow, ice and fog, and do not use cruise control."

If Tesla disagrees with those organizations and believes TACC/AP/FSD can operate in those conditions, they still have a lot of work to do.
Cruise control and FSD will handle light or even heavy rain, that’s not the problem.
Aquaplaning is the problem, and it’s not a vision thing.
 
Yes, I misspoke.

What I was trying to say is that the thing wouldn't hold a cruise control speed properly and kept slowing down continuously, while limiting the set speed due to "Poor front camera visibility". When I used full Autopilot, it did better at maintaining speed, but still wouldn't let me set any speed consistently.

Moreover, this is a clear indication that removing radar -- and not using it in vehicles that still have it -- is impacting features. I *never* got the "Autopilot speed limited due to poor camera visibility" message in over two years of owning the vehicle...until accepting a firmware build that discards the radar data in most circumstances.
Mine doesn't have a radar and it gives me that message.
 

I have an issue with classifying any and all FSD disengagements as accidents. Just because you disengaged, or it disengaged, doesn't mean there would have been an accident. There is a difference between reacting to a circumstance and making a mistake. Tesla FSD, at this point, disengaging because it doesn't know how to handle a situation cannot be treated as an unacceptable error, or even counted as an accident; it did what it was supposed to do (at this point). Right now, the car is supposed to hand over control to the human driver when it can't deal with a situation. The only way you could count those as accidents would be if you were evaluating the FSD software from a Level 4/5 perspective, which is not what we are doing right now.

If, for example, the car disengaged FSD whenever it saw a red light because it didn't know what else to do in that situation, would you consider that an accident?

If you want to count every disengagement as an accident, then from a scientific testing perspective you would also need to account for all the various human driving behaviors that could have caused an accident: every late braking event for a red light, every roll through a stop sign at a two-way stop intersection, every text message while driving...etc. Unfortunately, it is easier to bias against or for Tesla FSD because everything is recorded, whereas it is VERY hard to get the human data.
 
One cannot claim that something is safer than a human if it relies on a human when "it gets tough." A human driver does not have a backup plan and deals with a wider range of scenarios, so they will have more accidents. That is why the data from Tesla is misleading. To be accurate, they must compare apples to apples - same roads and conditions for FSD and humans. Unfortunately, that data may not exist at a scale that is statistically meaningful.

If anything, the reports can attest to how safe the driver-assist feature is when engaged, i.e. that it does not make mistakes in its realm of scenarios. But they cannot compare its safety to a human driver.
 
But you can compare human alone vs. human plus ADAS. That is, do the ADAS features make travel safer overall?

That is hard too, because of the disengagement issue. I started to write a whole post but don't really want to get into a back and forth over all the little technicalities and ifs/ands/buts of what @Boza is trying to say.

As with lots of other debates about various numbers and accident or crime rates, there are lots of pieces of data that are not (or sometimes cannot be) gathered that would help to come to a reasonable conclusion. We don't have the data. The data we do have on this subject does, I think, say some things, but you have to be careful how much you really want to say about it so you don't read too much into it.
 
I agree - we're dealing with massive numbers, and sometimes all we can do is look at statistical trends to see if there is a difference. We're talking about millions of cars, billions of miles driven, hundreds of thousands of accidents, and tens of thousands of deaths per year. As long as vehicles are tracking telemetry to know when they are using ADAS features, we can look at historical trends to see if there is any improvement.

I know many people here do not trust any data coming from Tesla (would they trust data coming from any car company?), but Tesla has been publishing safety data since they started collecting telemetry from their cars. They can see when Teslas get into accidents (the car phones home when it's in a collision), and they can see when cars are using ADAS features. Their numbers have shown that cars driving on AP had fewer accidents than cars that were not on AP. However, those numbers aren't always representative of real-world driving. It may very well be that people use AP on basic drives where there's not much chance of something going wrong, vs. driving manually in difficult scenarios like mountain roads where accidents are more likely.

As more car companies produce ADAS features, we'll have to look at the statistical trend from NHTSA year-over-year to see if technology is saving lives. I'm optimistic it is.
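A toy stratified comparison makes that "AP on easy drives" confound visible. Again, every number here is invented (not from Tesla, NHTSA, or any real dataset); the sketch only shows how the aggregate AP-vs-manual rate can look great even when AP is slightly worse on every individual road type:

```python
# Hypothetical telemetry, split by road type and driving mode.
# Numbers are invented purely to illustrate the confound described above.
data = {
    # (road, mode): (millions of miles, accidents)
    ("highway", "AP"):     (500, 200),
    ("highway", "manual"): (100,  35),
    ("city",    "AP"):     ( 50,  80),
    ("city",    "manual"): (450, 675),
}

def rate(entries):
    """Accidents per million miles across the given (miles, accidents) entries."""
    miles = sum(m for m, _ in entries)
    accidents = sum(a for _, a in entries)
    return accidents / miles

ap_all = rate([v for (road, mode), v in data.items() if mode == "AP"])
manual_all = rate([v for (road, mode), v in data.items() if mode == "manual"])
print(f"Aggregate:  AP {ap_all:.2f}  vs  manual {manual_all:.2f}")   # 0.51 vs 1.29

for road in ("highway", "city"):
    print(f"{road:8}: AP {rate([data[(road, 'AP')]]):.2f}  vs  "
          f"manual {rate([data[(road, 'manual')]]):.2f}")
# highway : AP 0.40 vs manual 0.35
# city    : AP 1.60 vs manual 1.50
# The aggregate strongly favors AP even though AP is slightly worse within each
# road type, simply because AP miles are concentrated on low-risk highway miles.
```

It's the same "basic drives vs. mountain roads" issue described above: without stratified, matched-conditions numbers, the headline comparison doesn't tell you much on its own.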
 
Most likely - yes. Anything that reduces the load on the human should be beneficial - see the aerospace examples.

But that is different than “FSD is safer than human” claim.

The problem is that there is no DATA to really refute the claim. There is data to make the claim, but additional data would be needed to actually refute it.

Now, I think you can look at the existing data and say that driving a Tesla is less accident-prone/"safer" than the average (NHTSA value), but I bet you could cherry-pick a lot of different individual vehicle models, or sets of them, and show that their data beats the NHTSA average as well...but again, that doesn't really tell the whole story, because it isn't necessarily directly related only to the actual vehicle.
 
The problem is that there is no DATA to really refute the claim.
It doesn't really work that way :) They have to provide solid data to support the claim. Currently, the data is not there.

Don’t get me wrong - I love the FSD as an assistant feature. I regularly drive 200+ miles on a highway and definitely feel less tired because of it. From that perspective (as a load reducer) it does work and I am sure that they could come up with objective data to prove it. But as a safer replacement for human driver? Doubt it!
 
It may very well be that people use AP while on basic drives where there's not much chance of something going wrong, vs driving manually in difficult scenarios like mountain roads where accidents are more likely.
You make an interesting point. We can only use AP on a small part of our driving, and that's typically on highways for hundreds of miles that are basically straight - probably 5x a year. Accident rates on these types of roads are low, according to government reports.

Whereas the majority of accidents, according to the stats, happen where the bulk of our driving is: narrow country roads, wet or icy most of the year, no street lighting, limited road markings, and low visibility with fog, which happens regularly. Sunny days are intermittent during the month as we're on the north coast and fairly elevated. It's not unusual to see cars on the side of the road that have lost control. AP doesn't work in these conditions or on this style of road. I regret buying EAP, but that's a different topic.
 
It doesn't really work that way :) They have to provide solid data to support the claim. Currently, the data is not there.

Don’t get me wrong - I love the FSD as an assistant feature. I regularly drive 200+ miles on a highway and definitely feel less tired because of it. From that perspective (as a load reducer) it does work and I am sure that they could come up with objective data to prove it. But as a safer replacement for human driver? Doubt it!

Well, actually, they only have to provide just enough data to make their claim; then everyone reading the claim who doesn't believe it has to make an informed dispute of that claim to put the claimant back in the position of providing more data to re-support it. Just because someone says something doesn't make it true, and conversely, just because you don't believe it doesn't make it not true.

I could say that 9 out of 10 people in this thread believe the existing numbers say what Tesla says they say. Is that true or not? I say it's true, but from a scientific analysis standpoint you can't just go "nuh uh." The data in the thread may or may not support my claim - I say it does - but it is now on you and others to prove otherwise. The thread's data may only support my claim if I read certain people's comments in the light most favorable to my viewpoint, and then we have an impasse because you read them the other way...and then we are in the scenario where the data is not there to technically support my claim or not, because there is not enough good-quality data.

Maybe that is where we are with Tesla's data. Without thinking on it too much right now, I could probably jump on board with the idea that, yes, Tesla's numbers may say something positive about Tesla and/or FSD, but that they may not be saying what Tesla says they say...because of the lack of better clarifying data. BUT we don't have that data, so it is harder to technically refute the claim outright.

People used to believe the earth was flat too...until it was proven otherwise.