Even cars without FSD can be used to collect edge-case data, so I don't think the decision to do a wide release would be driven by the need for more data.
The "march of nines" simply refers to perfecting FSD so it only screws up once in a great while. And, yes, that is generally thought of as solving edge cases. My concern is that these edge cases are so varied, and so numerous, that the current system doesn't have the resources to accommodate them all. If I'm right, and I hope I'm not, FSD (with current hardware and techniques) will continue to improve toward a plateau that is just below what is necessary for it to reliably drive without supervision.
Previously I thought that as long as the system was statistically safer than the average human driver, say twice as safe, it would be criminal not to approve and implement it. But I hadn't really considered that there was a third option: allowing it to be used only with human supervision (as is currently the case for those with FSD). If FSD alone is twice as safe as a human driver alone, then FSD with human oversight might be 3X-4X safer than a human alone (and 1.5X-2X safer than FSD alone). That is a powerful incentive not to approve it for fully autonomous operation (because we now have an even safer option). I had failed to consider this conundrum previously.
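The 3X-4X figure falls out of a simple back-of-envelope model: if FSD alone halves the human accident rate, and a supervising human catches some fraction of FSD's remaining failures, the combined rate drops further still. Here is a minimal sketch in Python, assuming (perhaps unrealistically) that the supervisor's catches are independent of FSD's failure modes; all numbers are illustrative, not real accident statistics:

```python
# Back-of-envelope safety multiples for supervised vs. unsupervised FSD.
# Assumption: human oversight prevents some fraction of FSD's residual
# failures, independently of what caused them. Numbers are illustrative.

human_rate = 1.0            # accidents per unit of driving, normalized
fsd_rate = human_rate / 2   # "twice as safe" => half the accident rate

for caught in (1 / 3, 1 / 2):  # fraction of FSD failures a supervisor prevents
    combined = fsd_rate * (1 - caught)
    print(f"supervisor catches {caught:.0%} of FSD failures: "
          f"{human_rate / combined:.1f}X safer than human alone, "
          f"{fsd_rate / combined:.1f}X safer than FSD alone")
```

With the supervisor catching a third to a half of the residual failures, this reproduces the 3X-4X (vs. a human alone) and 1.5X-2X (vs. FSD alone) ranges above.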
To explain this more fully, I've long held that FSD will still have accidents, but they will be fewer, and, on point here, they will (most of the time) be different types of accidents than humans have. That is to say, FSD will prevent most accidents humans have, because those accidents are concentrated around not paying attention at key moments. But FSD will fail in odd situations, maybe due to a "glitch" in the system (for lack of a better word), causing an accident in a situation that would be unlikely with a human. Thus, human oversight can make the system safer yet. And I think auto accidents are common enough, and their consequences serious enough, that it makes sense to keep the system as safe as possible before the requirement of human oversight is lifted. In other words, I don't think the requirement of human oversight can be lifted until human oversight can no longer prevent a significant number of accidents. This is a big change from how I previously viewed the ethics involved, which was that as long as it was definitely safer than a human alone, it would be criminal not to approve it.
Tesla's own FSD system with human oversight is competing directly, on safety, with Tesla's FSD system without human oversight. And I think that might explain why Tesla raised the price of FSD so high before it delivered that much value. This positions FSD as something that won't be used by a large percentage of automobiles unless it can be used fully autonomously, which bolsters the argument for approving it sooner with no human oversight, because it would still be safer than the current status quo of mostly human-only driving.
On the other hand, if unsupervised FSD can be 10X as safe as a human driver alone, then these arguments largely dissolve (because the accident rate becomes so low that it's no longer logical to require supervision).
The significance of this conundrum is lessened if the "march of nines" happens quickly via exponential improvements in FSD, and this significance increases if improvements flatten out. However, my impression is that experts in the field like Musk and Karpathy have always expected improvements to flatten as the system approaches autonomy, hence the "march of nines". Regardless of how it plays out, it's exciting to watch, and I do not lack lottery tickets in case things happen sooner than I expect!