Who needs an explanation of what specific parts of the neural network are contributing to driving decisions? Typically that's needed when debugging code in order to make targeted changes, but if you can fix individual issues with more end-to-end training on examples of the desired behavior, is the human-explainability piece necessary, as opposed to just evaluating whether the behavior changed without regressing other behaviors?

It's harder than traditional engineering in many ways, such as the lack of explainability, and it's super hard to "patch" individual issues and to validate the impact of a change to the training set.
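As a rough illustration of "evaluate whether the behavior changed without regressing others": you could run the before and after checkpoints over a suite of scenario clips and diff the pass/fail results. This is a minimal sketch with made-up names (Scenario, evaluate_model, the example scenarios) standing in for whatever tooling Tesla actually uses, which isn't public.

```python
# Hypothetical sketch: check a behavior fix for regressions across a scenario
# suite, instead of relying on per-neuron explainability. All names and
# scenarios here are illustrative assumptions, not real FSD tooling.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Scenario:
    name: str
    inputs: object                      # e.g. a logged clip or simulated situation
    passes: Callable[[object], bool]    # did the planned behavior meet spec?

def evaluate_model(model: Callable[[object], object],
                   scenarios: List[Scenario]) -> Dict[str, bool]:
    """Run the model on every scenario and record pass/fail."""
    return {s.name: s.passes(model(s.inputs)) for s in scenarios}

def compare(before: Dict[str, bool], after: Dict[str, bool]) -> None:
    """Report fixes and regressions between two model versions."""
    for name in before:
        if not before[name] and after[name]:
            print(f"FIXED:      {name}")
        elif before[name] and not after[name]:
            print(f"REGRESSION: {name}")

if __name__ == "__main__":
    # Stub models standing in for the "before" and "after" checkpoints.
    scenarios = [
        Scenario("unprotected_left_turn", {"gap_s": 4.0},
                 passes=lambda out: out["action"] == "yield"),
        Scenario("stop_sign_creep", {"occluded": True},
                 passes=lambda out: out["action"] == "creep"),
    ]
    model_before = lambda x: {"action": "yield"}
    model_after = lambda x: {"action": "creep" if x.get("occluded") else "yield"}
    compare(evaluate_model(model_before, scenarios),
            evaluate_model(model_after, scenarios))
```

If the suite is broad enough, "no new failures, target scenario now passes" answers the practical question without anyone needing to explain what the internal activations mean.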
Perhaps the neural network architecture of the current end-to-end approach will be insufficient to learn enough corner cases, and fixing that might require increasing the size of the network and/or changing the architecture; either could exceed HW3's compute capabilities for reaching a given safety level. So Tesla might need some hybrid solution to get it working on existing vehicles, but end-to-end could still be the correct approach with newer hardware.
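The "could exceed the hardware" point is really just back-of-envelope arithmetic: per-frame compute times frame rate versus what the chip can sustain. The sketch below shows the shape of that calculation; every number in it (TOPS, frame rate, utilization, model sizes) is a placeholder assumption, not an actual HW3 or FSD figure.

```python
# Back-of-envelope sketch: would a larger network still fit a fixed compute
# budget at the required frame rate? All numbers are illustrative placeholders.
def fits_budget(macs_per_frame: float, fps: float,
                hw_tops: float, utilization: float = 0.3) -> bool:
    """True if the model's per-second compute fits the usable hardware budget."""
    ops_per_second = 2 * macs_per_frame * fps    # ~2 ops per multiply-accumulate
    usable_ops = hw_tops * 1e12 * utilization    # sustained throughput, not peak
    return ops_per_second <= usable_ops

# Assumed current network: 40 GMAC/frame at 36 fps; a 10x larger candidate.
current = 40e9
larger = 10 * current
for macs in (current, larger):
    ok = fits_budget(macs, fps=36, hw_tops=72)   # 72 TOPS: placeholder figure
    print(f"{macs/1e9:.0f} GMAC/frame -> {'fits' if ok else 'exceeds budget'}")
```

Under these made-up numbers the current-size network fits with headroom while the 10x version blows the budget, which is the scenario where a hybrid approach on existing vehicles (and full end-to-end only on newer hardware) would make sense.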