I fully accept that the goal is worth striving for. I think I said that, but we can still debate the way we get there. I think it's patronising to suggest that the objections to what they're trying to do come from people who believe humans are better and FSD isn't worth obtaining. I don't think that's the complaint at all.
There's a couple of key points:
- The data Tesla shares is meant to indicate positive progress, but it actually doesn't when you look at the figures.
- Incremental steps forward, and a willingness to reverse decisions where they transpire to be wrong, have to matter more to society than blindly pushing forward to protect the current share price until the company can withstand the shock of failure, if that's what is happening.
- Society as a norm works on the basis that it tolerates people killing themselves through their own actions, with laws and taxes meant to encourage better behaviour. We have speed limits enforced with a slap on the wrist, but we don't ban the car; we tax cigarettes, we don't outright ban them, and so on.
- Nowhere in peacetime society do we sanction corporate activity that can result in death on a large scale, with the exception of medicine (and that's balancing two unpleasant outcomes) and sport (things like boxing and motorsport, and not without repercussions). So sanctioning a self-driving capability that may be two, three, or four times better than the average driver still results in a massive global death toll. Flying is incredibly safe per mile travelled, yet a plane accident gets extensively investigated, and Boeing has had a fleet of its planes grounded for some time because of a couple of accidents; per mile they're still far safer than cars with human drivers. The point is simply that it's going to take more than a statistical comparison of FSD versus human drivers to decide whether the risk is worth taking; it's a change in the underlying culture of risk-taking that governments are prepared to accept. That may be overdue, but it goes way beyond self-driving: it goes to the values of society and the expendability of some lives for the greater good. If we're happy for 300 people a year to die at the hands of a computer program, not because of any action or inaction they took, just wrong place at the wrong time, then where do you stop?
But let's say they can reach the nine-nines safety threshold; then sure, I'd be happier with self-driving cars around me than with some drivers we come across. But that doesn't excuse the journey to get to that level being run as if we were in some state of war or a pandemic. Even hospitals are being rapped at the moment, and held to account, for DNRs issued to dying patients on overflowing covid wards during a pandemic.
I've seen nothing scientific about the Tesla approach; the argument for a vision-based system seems to be "we drive with two eyes", which is not what I'd call a well-argued point. So it is very reasonable to ask whether the Tesla approach is appropriate and commensurate with the aspirations and time pressures, or whether they are simply throwing people into risky situations in an attempt to be first to fulfil the goal.