Model X Crash on US-101 (Mountain View, CA)

I don't know either. All systems that rely on a single person being reliable are ultimately unreliable.

I like the idea of commands projected on the windshield or shown on the display: press the left switch twice, press the right switch once, tap the accelerator. That gives positive feedback of attention. But how often would you have to do it?

Positive feedback could work, but it would likely run people off because the average driver wants it to be all or nothing. If the wreck gives any indication, it takes only a few seconds of not paying attention to be killed.
 
Life or Death decision point with attentiveness and quick reaction required.

[Attached image: death-life.png, a life-or-death decision diagram]


(Note: your chances of living on the left path are much greater if the crash cushion had been reset for you...)
 
It was not confused. It is a machine. AP did not 'put' the car in a dangerous situation; it followed its programming.

Of course AP put the car into a dangerous situation. AP was what moved the car into the gore (which is definitely not where a car should be). I can see arguing that the vehicle's operator should have noticed this behavior and overridden it by steering out of the gore or slowing down. But there's no way to argue that AP didn't move the car from a proper location on the road (the fast lane) to an improper location (the gore).

My feeling is that a product that behaves in this way isn't ready for use on the public roads. Others on the board seem to think that this sort of behavior is acceptable (though by no means ideal) because the driver has the ability to override AP's bad decisions. But I think almost anyone would agree that in this case AP (meaning TACC + AS) failed in its sole task-- which is to keep the car in its lane. And this failure clearly created a dangerous situation.
 
The main follow-up question is why didn't the driver see the upcoming barrier when the lead car moved right and his car kept going straight?
Maybe not looking ahead? Maybe morning sun in the eyes? Maybe the barrier isn't marked well enough?

The driver only had something like five seconds to react.

His failure to react might have been because he was using the system in an abusive manner (i.e., texting or otherwise playing with his phone or with food while driving).

But the fact that he did have his hands on the wheel for most (and possibly as much as all) of the minute leading up to the crash suggests to me that it is more likely that his attention lapsed for the reasons why all drivers' attention lapses from time to time: day dreaming, playing with the audio or climate controls, getting distracted by a billboard, getting distracted by something some other driver is doing, etc.

Both with and without AP, drivers become momentarily distracted in this manner pretty frequently, especially when they are on routes that they drive every day. I believe this sort of distraction is even more likely when AP is activated, since as long as the driver wants to stay in their current lane, they are reduced to a passive standby role, and need not actively steer or control their speed. It's hard to maintain concentration when in a standby mode, and really hard to leap into action from being in standby. Plus, many AP users seem to think that AP makes it safer to be distracted or to engage in distractions that they might otherwise avoid while driving.

Here, the problem is especially bad because AP put the car into the improper location (the gore). It is highly unlikely that someone who drove that stretch of road daily would have made that mistake if he were driving the car manually. Once a human driver has driven on a stretch of road a few times, they tend to subconsciously "know" the layout of the road and won't inadvertently confuse a lane with something that is not a lane. AP, however, seems to lack this ability to learn or understand road geometry (and doesn't seem to try to simulate this understanding by use of map data). Instead it largely relies on following whatever part of the lane line is immediately ahead of and left of the driver side front bumper or immediately ahead of and right of the passenger side bumper. That's a very different way of interpreting the road from what is used by a driver on a road he or she knows well. And I suspect it is a much less effective way of interpreting the road.
 
Consumer Reports has put out their reaction to the NTSB Preliminary Report: What We've Learned From Tesla and other Self-Driving System Crashes

A key point made (which I strongly agree with):

"Tesla has never provided detailed data to the public demonstrating the conditions under which Autopilot can safely operate, says David Friedman, director of cars and product policy and analysis for Consumers Union, the advocacy division of Consumer Reports. 'Tesla’s driver-assist system is allowed to operate under situations where it cannot reliably sense, verify and react to its surroundings in a safe manner,' Friedman said. 'Tesla hasn’t made sufficient changes to its system to address the NTSB’s concerns [from the Florida Crash Report].'"
 
I can't believe there is no first-person eyewitness account. Is everyone so into their phone now that they can't see a big car crash?

"It all happened so fast..." Neighboring drivers may not have had much chance to see the brief sequence of events that caused the mayhem.
Reports are like "I heard a boom and then see all this debris all over".

The damaged Audi & Mazda cars stopped at the scene. So the drivers should have made statements.
Maybe those cars were behind the Tesla and may have been able to describe what they saw happen in front of them before they hit it. Maybe their comments will get noted in the final NTSB report.

My guess would be there isn't much to say other than "The Tesla was driving down the gore area and didn't correct back into the lane in time."
 
Real cause of accident:

The driver of the leading car made a normal driver error (accidentally crossing into the gore area).
The Tesla, using its lemming-like logic, mimicked the lead car's error.
The lead car's driver corrected his/her mistake, and pulled back into the fast lane.
The Tesla interpreted that move by the lead car as a lane change, and therefore didn't mimic it; instead it decided to interpret the inside of the gore as if it were a lane.
Since AP basically ignores objects (like concrete barriers) that AP has never seen move, AP determines that there is no traffic ahead of it in its "lane" (the gore). Therefore the Tesla speeds up to try to reach its programmed maximum speed.
The Tesla crashes into the fixed barrier at the end of the gore.

This is pretty clearly a case where AP put the car into a dangerous situation because it got confused.

The problem here is that AP is basically working with two strategies: (i) follow the car in front of me and (ii) align with a lane line. It seems to make very little (if any) use of map data for guessing the location of lane lines and road geometry, and therefore basically uses the camera to decode the lane line location. It likely frequently loses its understanding of the lane line in cases where the line is damaged or the road is confusing, especially when there is a lead car that obstructs its view of the lane lines more than a few yards ahead of the Tesla. Therefore, it seems to use the follow-the-leader strategy a lot. This works, unless the leader has made a mistake. And, of course, drivers make mistakes (and then correct them) frequently.
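
If that two-strategy picture is right, the steering logic might look something like this rough sketch. This is purely illustrative, not Tesla's actual code; the function name, confidence values, and threshold are all invented.

```python
# Purely illustrative sketch of the two-strategy fallback described above.
# None of this is Tesla's actual code; names, confidence values, and the
# threshold are invented for the example.

def pick_steering_target(lane_fit, lane_confidence, lead_vehicle_path,
                         confidence_threshold=0.7):
    """Choose what to steer along: lane lines if the camera is confident,
    otherwise the path of the car ahead, otherwise nothing."""
    if lane_fit is not None and lane_confidence >= confidence_threshold:
        return lane_fit            # strategy (ii): align with a lane line
    if lead_vehicle_path is not None:
        return lead_vehicle_path   # strategy (i): follow the car in front
    return None                    # no usable target -> driver must take over
```

Note that a confidence threshold like this does nothing for the failure mode in this crash: a confidently wrong lane fit (the gore's left-hand line read as a lane line) sails right past the check.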

It amazes me how a lot of people on this board spend a huge amount of time criticizing the driving skills of everyone else on the road, yet are happy using a driver's aid that frequently "drives" by mimicking the leading driver (and therefore copying that driver's skills).

Good and accurate points. Human drivers often use the follow-the-leader strategy too, especially on unfamiliar roads. What cues would a typical human driver have picked up on to alert them to danger in this situation? The assumption is that the driver would have visually seen the crash barrier ahead, or if not, would have at least seen the pattern of lanes splitting off and deduced that the present "lane" was not really a lane. Tesla's current system seems to have far too much tunnel vision when it comes to lane identification.

In a classically programmed system, the running code would assign a continuously-updated probability of evasive action being required, taking into account the likelihood and consequences of false-positives and false-negatives. It would have taken evasive action when the cost/benefit of doing so became greater than the cost/benefit of not doing so. I'm not sure whether Tesla's code architecture has enough transparency for their engineers to even see what the corresponding probabilities were. Was it just slightly below the threshold for triggering emergency braking / evasive action? Or did the car see absolutely nothing amiss until the collision? In either case, it's not obvious whether the current AP hardware is sufficient for even the best AI built on top of it to reliably handle this situation, or a similar one in the future.
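
Here's a toy version of that cost/benefit framing, just to show how lopsided the trade becomes when the downside is a fixed barrier at freeway speed. Every number is invented.

```python
# Toy version of the cost/benefit framing above. All numbers are invented
# purely to illustrate the asymmetry of the trade-off.

COST_FALSE_BRAKE = 1.0          # annoyance/risk of hard braking for nothing
COST_MISSED_BARRIER = 1000.0    # relative cost of striking a fixed barrier

def should_take_evasive_action(p_obstacle):
    """Act when the expected cost of doing nothing exceeds the expected
    cost of a (possibly false-positive) intervention."""
    expected_cost_act = (1.0 - p_obstacle) * COST_FALSE_BRAKE
    expected_cost_wait = p_obstacle * COST_MISSED_BARRIER
    return expected_cost_wait > expected_cost_act

# With a 1000:1 cost ratio, the trigger point is p_obstacle > 1/1001, i.e.
# roughly a 0.1% estimated chance of a blocked path already justifies acting.
```

Whether Tesla's stack exposes anything like that internal probability to its own engineers is exactly the transparency question above.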

A good definition of human intelligence is that it is the ability to solve new problems. This can involve using all sorts of contextual clues, logical deduction, pattern recognition and intuition. The visible structure of the freeway interchange up ahead should provide a clue as to the expected current lane structure, and trigger a "WTF?" mental warning that the current path may be non-viable. At least this would cause a human driver to start slowing down, pay very close attention, and give much more plausibility to the idea that the current path might be dangerous.

Similarly, the mere fact that the leading car merges out of the "lane" suggests that its driver may have done so for a good reason. If proper evasive/corrective action would be to do X, and the driver ahead of me has just done X, then that information should be taken into account in my decision-making. Humans tend to do this intuitively, but it's proven very difficult (so far) to encode this kind of common sense into software.
 
Consumer Reports has put out their reaction to the NTSB Preliminary Report: What We've Learned From Tesla and other Self-Driving System Crashes

A key point made (which I strongly agree with):

"Tesla has never provided detailed data to the public demonstrating the conditions under which Autopilot can safely operate, says David Friedman, director of cars and product policy and analysis for Consumers Union, the advocacy division of Consumer Reports. 'Tesla’s driver-assist system is allowed to operate under situations where it cannot reliably sense, verify and react to its surroundings in a safe manner,' Friedman said. 'Tesla hasn’t made sufficient changes to its system to address the NTSB’s concerns [from the Florida Crash Report].'"

I understand where you are coming from with your previous post, but it and this CR quote both apply (in my opinion) the wrong philosophy/approach to viewing AP.

AP is not a 'drive for you' system. AP is a better version of cruise control.


Until the past few years, no version of cruise control paid any attention to lane markings or other cars. AP/lane assist/TACC does. Because AP can interpret some kinds of road markings and some kinds of vehicles/obstacles, it is safer than classic cruise control.
It is not meant to replace the driver (even though in some cases it will, for a time). The Ford lane assist/ACC system can also make you think it is a replacement, but it is not. And it lacks any warnings when activating.

That's why I disagree with the "AP put the car into a dangerous situation" statement. For most other vehicles, not paying attention or steering for 6 seconds will put you into another lane or into a stopped object 500 feet ahead.
The driver is in command; AP does not replace the driver, it only reduces the amount of steering and accelerator/brake input needed. The driver put themselves in the dangerous situation.

To the CR article:
AP is not intrinsically safe, so requiring a report on the situations in which it fails misses the point. It is an assist feature, and could be described by the percentage chance of handling things that classic cruise control does not. However, those percentages are not at the point where a driver can ignore their responsibility. Nor could a driver know what situations lie ahead in order to even apply the percentages.

Classic cruise control only handles one situation: surrounding traffic going faster and the driver steering. It has zero 'driver paying attention' features. Yet there is no outcry regarding it.
 
I can't believe there is no first-person eyewitness account. Is everyone so into their phone now that they can't see a big car crash?

Redwood City CHP from the day of the accident:

CHP Redwood City (@CHP_RedwoodCity), Mar 23:
"The Tesla was then hit by a Mazda as it landed on the #2 lane of US-101 and then hit by an Audi on the #1 lane. Total 3 vehicles involved."

CHP Redwood City (@CHP_RedwoodCity), Mar 23:
"Update on collision on US-101 southbound at SR-85: Blue Tesla driving southbound on US-101, driving at freeway speeds on the gore point dividing the SR-85 carpool flyover and the carpool lane on US-101 southbound, collided with the attenuator barrier and caught fire."
 
The driver only had something like five seconds to react.

His failure to react might have been because he was using the system in an abusive manner (ie texting or otherwise playing with his phone or with food while driving).

But the fact that he did have his hands on the wheel for most (and possibly as much as all) of the minute leading up to the crash suggests to me that it is more likely that his attention lapsed for the reasons why all drivers' attention lapses from time to time: day dreaming, playing with the audio or climate controls, getting distracted by a billboard, getting distracted by something some other driver is doing, etc.
So true. Everyone automatically assumes abuse. The 4-second distraction could have been a responsible one, like checking his side and rearview mirrors... like every driver on the road does constantly. Someone looks down for 3 seconds to pick up their coffee, change the station, adjust their Model 3 wipers... Crunch!

How many times do we see AEB commercials where the driver looks away for a split second, at that moment the car ahead stops or a deer runs out and screeeech AEB saves the day.
 
Real cause of accident:

The driver of the leading car made a normal driver error (accidentally crossing into the gore area).
The Tesla, using its lemming-like logic, mimicked the lead car's error.

As many people have already recreated this without car following, it is highly doubtful that car following was a factor here, as I've only seen car following happen (lead car turns blue) in stop-and-go traffic.

>It amazes me how a lot of people on this board spend a huge amount of time criticizing the driving skills of everyone else on the road, yet are happy using a driver's aid that frequently "drives" by mimicking the leading driver (and therefore copying that driver's skills).

This just isn't how AP works at highway speeds.

And yes, in this case, the skill of the driver of the actual car that crashed is the thing that caused the crash. He wasn't paying attention.
 
All your points above are everyday reality and the technology has to factor them in. If it doesn't, then it should not be available on public roads.

Ok, so cruise control should not be available. It will happily drive you into anything.
GPS does not account for road closures or construction; we should take that away too.
Motorcycles don't have airbags; get them off the road.

All of these things are just technology, no different from a steering wheel. Just part of the machine.

All my points are everyday reality and the driver has to factor them in, regardless of what the machine is doing.
And if the driver doesn't, they shouldn't be driving on public roads.
 
This update may help, although Elon seems to overpromise on FSD ... :cool:

Tesla’s version 9 software update is coming in August with first ‘full self-driving features’, says Elon Musk

Tesla’s next major software update ‘version 9.0’ is now set for a release in August and it will include the first ‘full self-driving features’ for Autopilot 2.0 vehicles, says CEO Elon Musk. Version 8.0 came back in 2016. It was Tesla’s most significant over-the-air software update at the time. It featured a user interface refresh, a few new features, and several improvements related to Autopilot.


Almost two years later, it looks like version 9.0 is also going to bring a similarly important update to Tesla’s software. Musk announced on Twitter last night that version 9.0 is coming in August and that it would even include the first “full self-driving features”:

[Attached image: screenshot of Elon Musk's tweet]
 
As many people have already recreated this without car following, it is highly doubtful that car following was a factor here, as I've only seen car following happen (lead car turns blue) in stop-and-go traffic...

Both Tesla and the NTSB talked about the close following.
An Update on Last Week’s Accident
...Autopilot was engaged with the adaptive cruise control follow-distance set to minimum....

Also, we are reminded that very many other Tesla vehicles travel that same stretch of road with autosteer and don't go into the gore area.
So, something was different that time.

From the NTSB prelim report:
At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.
At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.
At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.

Even if the "autosteer" wasn't trying to follow the lead car into the gore area, perhaps that lead car was so close in front that it was blocking the view of the gore area markings so the autopilot didn't see enough to know it was drifting into the gore area?
 
Both Tesla and the NTSB talked about the close following.
An Update on Last Week’s Accident


Also, we are reminded that very many other Tesla vehicles travel that same stretch of road with autosteer and don't go into the gore area.
So, something was different that time.

From the NTSB prelim report:


Even if the "autosteer" wasn't trying to follow the lead car into the gore area, perhaps that lead car was so close in front that it was blocking the view of the gore area markings so the autopilot didn't see enough to know it was drifting into the gore area?
The closer you are to the vehicle ahead of you, the more that vehicle obstructs the view of the forward cameras, especially since those cameras are located in the center of the windshield. Since AP is completely reliant upon seeing the lines, it needs to determine, with a limited view, which line it ultimately wants to follow. At a spot where two lines diverge, such as the beginning of the gore point, AP has to decide which line it's supposed to follow. The fainter line could be an old line, obscured intentionally because of a lane shift, and I think AP relies somewhat upon the lead vehicle to make that determination.

Regardless of the reason AP ended up entering the gore point, it's pretty obvious from the pictures and the acceleration prior to impact that it thought it was a lane once it was in there.

In stop and go traffic, AP regularly loses sight of one of the lane markings as it gets obscured by the lead vehicle.

I don't think that a follow distance of 1 is too close for the car to stop under normal braking conditions, but there's no question that it results in shorter forward visibility for the AP vision system and rapid braking when the lead car does slow.
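
To put rough numbers on that visibility point (pure back-of-the-envelope, with assumed dimensions rather than Tesla specs): the closer the lead car, the shorter the stretch of lane line a center-mounted camera can see around it.

```python
# Back-of-the-envelope occlusion estimate. All dimensions are assumptions for
# illustration, not Tesla specifications. A lead car of width lead_width_m at
# distance gap_m hides a lane line at lateral offset line_offset_m everywhere
# beyond the point where the line falls inside the lead car's angular shadow.

def line_visible_out_to(gap_m, lead_width_m=1.8, line_offset_m=1.8):
    """Distance ahead (m) at which the lane line disappears behind the lead car."""
    return gap_m * line_offset_m / (lead_width_m / 2.0)

print(line_visible_out_to(gap_m=25))   # ~50 m of visible line at a ~25 m gap
print(line_visible_out_to(gap_m=50))   # ~100 m of visible line at a ~50 m gap
```

At a follow setting of 1 (on the order of a second of gap at 65 mph), the camera is working with a much shorter run of paint than it would at a longer setting, which is the "shorter forward visibility" trade-off described above.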

Personally, I don't think there's any question that AP, through limitations in its capabilities, put the vehicle into that margin and accelerated into the barrier. Where I draw the line is blaming the accident on AP. AP, like a human driver, is going to make mistakes, and it's the responsibility of the human to correct them. Unfortunately, there are always going to be cases where both the human and AP make a mistake and the result is going to be an accident. What matters is that human + AP results in fewer accidents than humans alone. Not until FSD can we expect the vehicle to exceed the capabilities of a human on its own.
 
So this accident was referred to as a "perfect storm" of things going wrong... This included:

#1: Low morning sun ahead making it hard to see for driver and cameras.
#2: Following distance=1 is perhaps too close for this kind of traffic pattern / road conditions.
#3: "Lead car" apparently improperly enters gore area possibly guiding auto-steer into the same area.
#4: Poor lane markings including lack of warnings in the gore area.
#5: Wide / long gore area looks very much like a lane.
#6: It appears that the driver wasn't paying close enough attention to the roadway to realize the error and failed to manually correct it.
#7: Automatic Emergency Braking doesn't stop for objects like that.
#8: Gore point smart cushion was damaged and not repaired, and did not do normal energy absorption on impact.
#9: This particular type of crash cushion seems particularly bad to impact when it was not repaired. It is more like a knife edge in that case.
#10: It appears that the Tesla hit it at "just the wrong place" so it sliced between the Tesla's energy absorbing structures.
#11: It sliced through just enough to crunch the edge of the battery pack leading to a fire in the pack.
 
Life or Death decision point with attentiveness and quick reaction required.

[Attached image: life-or-death decision diagram]

(Note: your chances of living on the left path are much greater if the crash cushion had been reset for you...)

Strong disagree on life/death being lane position with AP engaged.

Life = Your eyes were on the road
Death = Your eyes were off the road

Did you SEE how much time J.R. Smith had to dribble the ball around before passing for a jacked up shot in Game 1 of Warriors-Cavs?

Walter had EVEN MORE TIME THAN THAT to react to the barrier coming up on him.
 
The idiot cushion can save your life. Or not. It depends. It could even kill an innocent party as you spin into other cars. 6000lb @ 71mph can do all sorts of things. In this case, it flipped the car 360°, IIRC.

What an idiot cushion cannot do is cause a crash. It's a static object, like a tree. While the death can be related to the cushion's condition, the cushion is unrelated to causing the impact. No impact, no crash. No crash, no death.

Why on earth would anyone blame the crash on a static object that is easy to see in daylight hours and that you've seen before?

And APx has seen it before, so ignore any 'learning' rhetoric or high-res GPS or enhanced camera technology. Its radar + camera + GPS is capable of getting location information off it, plus a relative velocity and distance. It is programmed to ignore that information, or it would have seen it as a threat, which it did not.
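
For what it's worth, the usual explanation for that "programmed to ignore it" behavior is stationary-target filtering: a fixed barrier closes on you at exactly your own speed, the same radar signature as overhead signs, bridges, and roadside clutter, so returns like that are commonly dropped as braking candidates. A minimal sketch of the idea (field names and threshold invented, not any vendor's actual code):

```python
# Minimal sketch of stationary-target filtering in a radar-based cruise/AEB
# pipeline. Field names and the threshold are invented for illustration.

def braking_candidates(radar_returns, ego_speed_mps, moving_tol_mps=1.0):
    """Keep returns that look like moving (or previously tracked moving) objects."""
    candidates = []
    for r in radar_returns:
        closing_speed = r["closing_speed_mps"]          # positive = approaching us
        object_speed = ego_speed_mps - closing_speed    # object's own ground speed
        if abs(object_speed) > moving_tol_mps or r.get("tracked_as_moving", False):
            candidates.append(r)
        # else: closes at ~ego speed -> looks like a sign/bridge/barrier, dropped
    return candidates
```

On that view the barrier showed up in the raw returns but was never promoted to something worth braking for, which matches the point above.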

People keep mentioning AP in this thread. It was specifically the AEB that was at fault for the impact, along with the lack of aggressive driver warning systems like seat vibration, intense LED displays, and intrusive klaxons.

The steering will always point the car in the direction of threats. The AEB/TACC is what stops the collision.

It has been known for years that AEB was not Tesla's strong suit.

I love you. Next time I see one of those on the road - I'm going to point it out to my son and go - "Son SEE THAT, there's an idiot cushion".

It's going to open up a huge dialog on the purpose of that, how you want to avoid hitting it, what happens if it's missing, and the importance of attentiveness, maintaining your vehicle, etc.

I'm honest in the things I know, and I'm honest in the things I don't know - which is AEB systems.

My whole thing is: if Tesla does a great job of braking/maintaining distance with TACC, isn't that the same logic that powers AEB? Too close to a vehicle? BRAKE HARD?
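
As far as I understand it, not quite the same logic (and this is a generic illustration, not Tesla's implementation): TACC continuously regulates a time gap to a selected target, while AEB is a last-resort trigger that only fires when time-to-collision to a confirmed target gets critically short. Something like:

```python
# Generic illustration of the TACC vs. AEB split; gains and thresholds invented.

def tacc_accel_request(gap_m, ego_speed_mps, closing_speed_mps,
                       target_gap_s=1.0, k_gap=0.5, k_close=0.8):
    """Gentle, continuous control toward the selected following time gap (m/s^2)."""
    gap_error_m = gap_m - target_gap_s * ego_speed_mps
    return k_gap * gap_error_m - k_close * closing_speed_mps

def aeb_should_fire(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Hard braking only when a collision with the selected target is imminent."""
    if closing_speed_mps <= 0:                 # not actually closing on anything
        return False
    time_to_collision_s = gap_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s
```

Both loops only act on whatever has been selected as a target, so if the barrier was filtered out as a stationary return (as discussed above), neither had anything to brake for.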

I did have something interesting happen which I never saw before, as I leave lots of margin when utilizing EAP. I set an auto lane change. Halfway through the lane change, the Model X changed its mind and decided to go back to the original lane. I'm going, WTF?

A motorcycle entered the lane I was moving into at 100 mph and I didn't see it. I would have been fine anyway if I was driving manually, but autosteer is more conservative than I am. Thought that was interesting. The autosteer / side collision / motorcyclist-rear-ending-a-Model-X-at-100-mph protection feature is working on the new firmware.