While the rest of your post doesn't make much sense from a technical perspective (the short version: there's no plausible mechanism connecting the behaviors you describe to the causes you ascribe them to), the quoted part above is unfortunately completely true.
As for the actual reasons these issues occur, phantom braking looks to me like a standard garbage-in/garbage-out problem. You can tell how glitchy the underlying data is just by sitting still and watching all the floating, glitching vehicles on the display. If at some point the system gets a glitched input showing something dead ahead, even for a moment, the AEB part of the code (or the part that simply needs to slow quickly for TACC) has to react, and it does. Pretty simple. The stability of the data coming out of AP2 is objectively, quantifiably worse than AP1's. AP2 has more data of lower quality; AP1 has less data of higher quality. If AP1 thinks a car is ahead, there's a damn good chance there's a car ahead. AP2, on the other hand, occasionally thinks my mailbox flowerbed is a semi truck, so... not much faith there.
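To make the garbage-in/garbage-out point concrete, here's a minimal, entirely hypothetical sketch of the failure mode: a braking decision that trusts every single frame of range data, so one glitched frame is enough to trigger a hard slow-down. None of these names or numbers reflect Tesla's actual code.

```python
def naive_brake_decision(distances_m, closing_speed_mps, min_ttc_s=2.0):
    """Brake if ANY single frame implies time-to-collision below threshold.

    Hypothetical illustration only: no filtering, no debouncing, so one
    glitched range reading is treated as a real obstacle.
    """
    for d in distances_m:
        ttc = d / closing_speed_mps if closing_speed_mps > 0 else float("inf")
        if ttc < min_ttc_s:
            return True  # a single bad frame is enough to slam the brakes
    return False

# Stream of per-frame range readings: clear road, one glitched frame, clear road.
frames = [80.0, 80.0, 3.0, 80.0, 80.0]
print(naive_brake_decision(frames, closing_speed_mps=10.0))  # -> True: phantom brake
```

A more robust system would require the detection to persist across several frames before acting, which is exactly what unstable AP2 data would keep defeating.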
As for the NoA issues, this is just poor software. As far as I can tell, NoA is not machine learning; it's human-written code using the available input data to make decisions, and those decisions are poorly designed and implemented. I know this because I implemented my own "NoA"-style setup on an AP1 car several years back. It didn't "navigate," but it would auto-pass other vehicles and return to an appropriate lane with zero input. Since AP1 doesn't have much side or rear information, I had the car play a soft "ting ting" sound a couple of seconds before it was going to initiate an auto lane change. At that point I, as the human, could look around and make sure it was safe, and if I did nothing to prevent it, the car would change lanes by itself.
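The chime-then-veto flow described above is simple enough to sketch. This is my own toy reconstruction of the idea, not the actual add-on code; the function names, callback interface, and step count are all invented for illustration.

```python
def auto_lane_change(play_chime, driver_vetoed, do_lane_change, warn_steps=20):
    """Announce an intended lane change, give the driver a veto window, then act.

    Hypothetical sketch: play_chime() is the "ting ting" warning,
    driver_vetoed() is polled during the warning window (e.g. brake tap or
    stalk input), and do_lane_change() executes only if no veto arrived.
    """
    play_chime()                      # warn a couple of seconds ahead
    for _ in range(warn_steps):       # poll for driver objection
        if driver_vetoed():
            return False              # driver cancelled during the window
    do_lane_change()                  # no objection: execute the change
    return True

# Example: driver does nothing, so the change proceeds.
events = []
auto_lane_change(lambda: events.append("chime"),
                 lambda: False,
                 lambda: events.append("change"))
print(events)  # -> ['chime', 'change']
```

The key design choice is that silence means consent: the human only has to act to *stop* the maneuver, which is why it works with zero input in the common case.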
My AP1 implementation took a few weeks of off-and-on work to build and tweak, and that's building on top of the existing system with my third-party add-on hardware, not even a direct implementation in the AP module itself. If I can do that with AP1, I'm baffled why Tesla can't implement something at least as usable with full, direct access to their own system. That level of missing the mark is mind-blowing to me.
The major difference is that when in motion, the data quality improves, both from the cameras and from the radar. The reason I suspect one of the deeper networks is responsible for the main behavior is that we've seen this behavior persist since the very early days of the HW2+ platforms.
So, to dive deeper into what I actually think is going on: if you watch any of the video clips that Tesla or anyone else has produced with the debugging data enabled, take a look at the drivable space shown in the data. Even in Karpathy's latest tweet, where he talks about the auto-labeling nonsense, you can clearly see the pink road coloring disappear from time to time. My hunch is that deep down in their stack of networks, they're doing a very poor job of determining what is navigable space and what is not; the probability drops below a threshold and they brake.
The AEB stuff happening lately is almost certainly just their extremely naive and poor approach to depth vision, given that it's much worse at night than during the day. I'm still waiting for someone to produce one of those voxel maps at night, but so far nobody has obliged.
As for the crash hypothesis, we see symptoms of the AP system panicking fairly frequently in the form of very brief alerts. I've even had situations where the vehicle kept driving and never gave me the "take over immediately" alert, but when I checked the alerts triangle on screen, the warning was in there. The only symptom I experienced was poor performance on a highway curve.
The data-stability comment you made is exactly what I'm talking about. The lowest-level networks are outputting garbage, which the rest of the system then interprets. If that data shows the navigable path ahead ending abruptly, then obviously the vehicle is going to attempt to stop. Then the probability rises again, it meets the threshold to be considered navigable space, and the car keeps going.
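That brake-then-resume cycle falls straight out of running a noisy confidence signal through a single hard threshold. Here's a toy illustration of my hunch; the probabilities, threshold, and labels are all made up and have nothing to do with Tesla's real stack.

```python
def plan(drivable_prob, threshold=0.5):
    """Hypothetical planner step: GO if the drivable-space confidence
    clears a hard threshold, BRAKE the instant it dips below."""
    return "GO" if drivable_prob >= threshold else "BRAKE"

# Noisy per-frame confidence that the road ahead is navigable.
probs = [0.9, 0.85, 0.4, 0.7, 0.45, 0.9]
print([plan(p) for p in probs])
# -> ['GO', 'GO', 'BRAKE', 'GO', 'BRAKE', 'GO']
```

Every dip below the threshold produces a brake event followed immediately by a resume, which is exactly the phantom-braking pattern: the fix has to come from stabilizing the upstream signal (or at least adding hysteresis), not from tuning the threshold.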
NoA is pure garbage, and none of my comments related to NoA at all. That's just a steaming pile, and it has been since its release. They foisted that junk on us and immediately started working on "smart" summon. Once that trash was released, they promised an update "in two weeks," which predictably never came. Then, this year, Andrej showed the whole world what those of us with concerns were complaining about. Their approach was never going to produce a safe or usable smart summon system. The fact that there are reports of it working is the fluke, not that it doesn't work.