@xav- I've tried to make so many of these same points and been told a million ways I'm wrong by thousands of people who very likely aren't working in software, hardware, or vision system development. The rain sensor is such a classic example of overcomplicating something that had already been widely developed. It was such a mess that it even came up during a talk about Tesla's NN development.
I'm a proponent of lidar as well as vision and radar, but that war seems to have been lost within Tesla. The value of higher-resolution, active-emission sensors cannot be overstated. Adding that value to the radar and camera systems makes the entire suite so much more robust in the common case, and leaves headroom in the worst case. But again, this war seems to have been lost long ago.
The purpose-built hardware, though, does solve several cases Tesla is no doubt running into with their vision systems. Using general-purpose GPUs to run neural nets is suboptimal to say the least, so building devices optimized for that purpose will let Tesla process more scene data more quickly. Detecting objects sooner with higher reliability and tracking them with finer granularity can help in corner cases where right now the algorithm isn't complex enough, or isn't processing enough data, to get the necessary detail. Basically, if you can use all cameras at full resolution, you get a big benefit. But current hardware is probably barely capable of that. Now add on text recognition and you're likely beyond HW2.5's capability. I could totally be overestimating the requirements of the NN here, but I doubt it.
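To put a rough number on "all cameras at full resolution": here's a back-of-envelope sketch of the raw pixel throughput the NN's input stage would have to keep up with. The specific figures (8 cameras, ~1.2 MP each at 1280x960, 30 fps, 3 bytes/pixel) are my assumptions for illustration, not anything from Tesla.

```python
# Back-of-envelope pixel throughput for a multi-camera vision stack.
# All constants below are illustrative assumptions, not Tesla specs.
CAMERAS = 8           # assumed camera count
WIDTH, HEIGHT = 1280, 960  # assumed ~1.2 MP per camera
FPS = 30              # assumed frame rate
BYTES_PER_PIXEL = 3   # assumed 24-bit color

pixels_per_second = CAMERAS * WIDTH * HEIGHT * FPS
bytes_per_second = pixels_per_second * BYTES_PER_PIXEL

print(f"{pixels_per_second / 1e6:.0f} Mpx/s")    # 295 Mpx/s
print(f"{bytes_per_second / 1e9:.2f} GB/s raw")  # 0.88 GB/s
```

Even before any convolution work, that's hundreds of megapixels per second to ingest, every second, which is why dedicated inference silicon (vs. a general-purpose GPU sharing the job with everything else) makes a real difference.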
And that's actually an argument against the "humans can drive with just their eyes" point people try to make. Humans have very complex chains of thought that include imagining future scenarios: you know to drive slowly around a corner in a neighborhood because kids could be playing there. A machine learning algorithm needs to be trained for nearly every scenario it could find itself in, which is basically impossible to do before the system is released to the public. And even then, well after Tesla stops using shadow mode to gather and reinforce learning data, there are going to be scenarios the system had no way to prepare for. This is why humans are still so vastly superior to machines (for now), and even to other animals. Robots are good at not being tired, or reckless, or getting road rage, though. So again, in the common case they're likely already better drivers than us. But common-case operation isn't what kills tens of thousands of people a year.