Real cause of the accident:
The driver of the leading car made a normal driver error (accidentally crossing into the gore area).
The Tesla, using its lemming-like logic, mimicked the lead car's error.
The lead car's driver corrected his/her mistake and pulled back into the fast lane.
The Tesla interpreted that move by the lead car as a lane change and therefore didn't mimic it; instead, it decided to treat the inside of the gore as if it were a lane.
Since AP basically ignores objects (like concrete barriers) that it has never seen move, it concluded there was no traffic ahead of it in its "lane" (the gore), and the Tesla sped up to try to reach its programmed maximum speed (roughly the logic sketched in the code below).
The Tesla crashed into the fixed barrier at the end of the gore.
This is pretty clearly a case where AP put the car into a dangerous situation because it got confused.
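For anyone who wants that logic spelled out, here is a minimal sketch (in Python) of the barrier-ignoring behavior I'm describing. To be clear, this is my reconstruction of the apparent behavior, not actual Tesla code; every name and number in it (Track, ever_moved, the 120 m gap) is made up for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    distance_m: float   # range to the detected object
    ever_moved: bool    # has this return ever shown relative motion?

def pick_lead_target(tracks: List[Track]) -> Optional[Track]:
    """Nearest object worth following; anything that has never moved
    (barriers, signs, overpasses) is filtered out entirely."""
    moving = [t for t in tracks if t.ever_moved]
    return min(moving, key=lambda t: t.distance_m, default=None)

def target_speed_mps(lead: Optional[Track], set_speed_mps: float) -> float:
    if lead is None:
        # "No traffic ahead" -> chase the driver's set speed.
        return set_speed_mps
    # Crude placeholder: slow down as the gap to the lead car shrinks.
    return min(set_speed_mps, 0.5 * lead.distance_m)

# The gore scenario: the only object ahead is the fixed crash attenuator.
scene = [Track(distance_m=120.0, ever_moved=False)]
print(target_speed_mps(pick_lead_target(scene), set_speed_mps=31.0))
# -> 31.0: the barrier was filtered out, so the car accelerates toward it.
```

The dangerous part isn't any single line; it's that filtering out never-moved objects is a sensible way to suppress false alarms from signs and overpasses, right up until the never-moved object is a wall at the end of your "lane."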
The problem here is that AP is basically working with two strategies: (i) follow the car in front of me and (ii) align with a lane line. It seems to make very little (if any) use of map data to guess the location of lane lines and road geometry, and therefore relies on the camera to decode the lane line location. It likely loses its understanding of the lane line whenever the line is damaged or the road is confusing, especially when a lead car obstructs its view of the lane lines more than a few yards ahead of the Tesla. So it seems to fall back on the follow-the-leader strategy a lot. That works unless the leader has made a mistake, and of course drivers make mistakes (and then correct them) frequently.
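Here is an equally hedged sketch of that two-strategy arbitration and its failure mode. Again, the function and parameter names are hypothetical and the offsets are invented; the only thing I'm asserting is the fallback order:

```python
from typing import Optional

def steering_target(lane_center_offset_m: Optional[float],
                    lead_car_offset_m: Optional[float]) -> Optional[float]:
    """Pick a lateral offset to steer toward.

    lane_center_offset_m -- where the camera thinks the lane center is,
                            or None when the paint is faded or occluded
    lead_car_offset_m    -- lateral position of the car being followed,
                            or None when there is no lead car
    """
    if lane_center_offset_m is not None:
        return lane_center_offset_m   # strategy (ii): align with the lane line
    if lead_car_offset_m is not None:
        return lead_car_offset_m      # strategy (i): follow the leader
    return None                       # no reference at all

# Lead car occludes the lines, then swerves into (and out of) the gore:
print(steering_target(None, 1.5))    # -> 1.5, inheriting the leader's error
# Leader gone; camera latches onto the gore's inner line as a "lane":
print(steering_target(-0.3, None))   # -> -0.3, centered inside the gore
```

Notice that neither input carries any notion of "is this actually a lane": whichever reference survives, the car treats it as ground truth.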
It amazes me how many people on this board spend a huge amount of time criticizing the driving skills of everyone else on the road, yet are happy using a driver's aid that frequently "drives" by mimicking the leading driver (and therefore copying that driver's skills).