Unless 8.3 is much, much better than 8.2, this is not ready for general public release. I really hope Elon isn't seriously going to give just anyone The Button with the software in still such an early state. It needs a lot of iteration before it can actually try and tackle major cities.
I don't understand the logic you're using here. It's a driver assistance feature, which means it requires 100% driver attention. This is no different from driving a car without any driver assistance features. If anything, it gets more dangerous in irresponsible hands as it becomes more reliable (until it exceeds the safety of a human).
From Tesla's perspective, the safety of the system has to be looked at in the aggregate. I'm sure they have already developed conceptual models to help estimate the expected overall safety as the system improves. There is a point of maximum danger where the system is good enough that
some users will start to trust it more than it deserves. For argument's sake, let's look at a single point in the development when FSD is 5 times as likely to have an accident as an average human driver (if it were unsupervised). Let's also assume that, when it's this good, 10% of all users will trust it implicitly and never monitor it in an effective manner, 50% will monitor it "pretty well", and 40% won't trust it at all, treating it as if they were driving and staying ready to take over at all times. Yes, I know these numbers aren't based on any data and we can't assume they will match reality; they're just to illustrate a point.
This would mean the 10% who don't monitor it at all will have an accident rate 5 times that of human drivers. The 40% of users who monitor it at all times, as if they were manually driving the vehicle, should have an accident rate lower than the average human driver (because FSD will prevent some accidents that would have happened without it). And the middle 50% who monitor it "pretty well" might have an accident rate about the same as the average human driver. The net result of all this
would be an accident rate higher than the average human driver and the
entire increase in the accident rate could be attributed to those who monitored it so poorly they hardly ever took over.
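The blended rate above is easy to check with a quick back-of-envelope calculation. The numbers below are the post's illustrative assumptions (plus a guessed 0.8x rate for the attentive group, since the post only says "lower than average"), not real data:

```python
# Back-of-envelope model of the fleet-wide accident rate sketched above.
# Every number is an illustrative assumption from the post, not measured data.
HUMAN = 1.0  # accident rate of the average human driver (normalized to 1)

# (label, share of users, accident rate relative to a human driver)
groups = [
    ("never monitor",   0.10, 5.0 * HUMAN),  # full unsupervised-FSD risk
    ("monitor so-so",   0.50, 1.0 * HUMAN),  # roughly human parity
    ("fully attentive", 0.40, 0.8 * HUMAN),  # 0.8x is a guessed improvement
]

fleet = sum(share * rate for _, share, rate in groups)
print(f"fleet-wide rate: {fleet:.2f}x human")  # -> 1.32x human

# Strip out the excess risk contributed by the inattentive 10%
# (i.e. cap that group at human parity) to see who drives the increase:
fleet_capped = fleet - 0.10 * (5.0 * HUMAN - HUMAN)
print(f"without the abuse: {fleet_capped:.2f}x human")  # -> 0.92x human
```

With these assumptions the fleet lands at 1.32x the human rate, and capping only the inattentive 10% at human parity drops it to 0.92x, which is the sense in which the entire increase is attributable to that group.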
So, it's really about preventing the stupidest people from using it in an unsupervised manner. As much as I would like Darwin's "survival of the fittest" to take care of things naturally, the resulting innocent carnage would be unacceptable. The takeaway here is that once the system starts getting to be so good that there is a growing body of users that would abuse the system, Tesla must limit access to the best versions to prevent the overall safety rate from dropping below that of the average human driver. They should really shoot for keeping the average mile travelled under FSD twice as safe as the average human driver (without driver aids) while in beta.
It only takes one inattentive beta tester driving straight into a wall to completely ruin everything they have accomplished so far, and, when it crashes the stock price, to end my chances of ever retiring instead of working until I die.
I don't see it that way. As long as it's a driver assistance aid, the driver is responsible for not driving into a wall. Humans drive into walls
with alarming frequency. They don't require FSD to help them do it. My in-law drove through the window of a mattress store because she thought she was pressing on the brake instead of the accelerator. FSD should greatly reduce this kind of accident. Sure, the media will make a big noise about it but, in the end, it will come down to whether the system is making life safer or more dangerous overall. Insurance companies will see to it that this is the metric used.