@Knightshade has argued that using the extra data for the target city will cause too many regressions that would keep FSD from working well everywhere else.
No, I have not.
I see moving this to the proper section hasn't changed your habit of ignoring the actual posts you're replying to and instead building strawmen to knock down.
Below are the relevant things the people in the discussion actually said.
once a behavior is "learned" by the AI/NNs for a given geo-fenced area, we don't know the impact of unlearning it or re-learning it when more areas w/ diff rules are added.
In reply, you posted this:
That's not the way it works. Learning is cumulative. Just because FSD is better in San Francisco does not make it worse somewhere else.
FSD V12 needs more training in rainy areas because it needs more training in rainy areas. Once that happens, V12 will get better at driving in the rain. But that won't make it worse in San Francisco.
If you oversample in a geofenced area, the FSD will get really good at driving in that area. But that won't make it worse anywhere else.
This misunderstands a fair bit about how learning actually works, and I corrected you by pointing out:
It might if people drive differently there.
This is one of the reasons driving is a really hard problem.
AI could get great at Go or Chess because every time you sit down the rules are the same. The knight always moves the same way, on every chess board, in every country.
Driving is much more locally different than that. The simplest example is it's legal to turn right on red some places and not others... heck it's legal to turn LEFT on red some places and not others. And those rules can vary not just by state, but even by city, or PART of a city. And there's often not signs making this super clear.
Without hardcoding that's a tough thing to "solve" for a general driving solution. And there's lots of other examples more subtle (differences in road markings, types of intersections, restricted lanes, signage, etc).
To be clear I don't think it's an unsolvable problem, but I DO think it means overfitting to one area CAN make it worse in others.
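To make that concrete, here's a toy sketch, entirely my own illustration and nothing like Tesla's actual stack: a tiny logistic "policy" trained on a single scenario (red light, driver wants to turn right) where the correct action differs by region. With no location input, heavily oversampling one region's clips drags the model toward that region's rule:

```python
# Toy illustration (hypothetical, not Tesla's system): one driving scenario
# with conflicting correct answers in two regions, and no geotag input.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, proceed_label):
    # Same "camera" features in both regions (red light on, right turn
    # wanted) plus noise channels; only the correct label differs.
    X = np.hstack([np.ones((n, 2)), rng.normal(size=(n, 3))])
    y = np.full(n, float(proceed_label))
    return X, y

def train_logreg(X, y, steps=2000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == (y > 0.5))

# Region A: right on red legal (proceed = 1). Region B: illegal (0).
Xa, ya = make_data(1000, 1)
Xb, yb = make_data(1000, 0)

# Balanced data: the model hedges; both regions sit near coin-flip.
w_bal = train_logreg(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Oversample region A 20:1 (the geofenced blitz): B regresses hard.
w_over = train_logreg(np.vstack([np.repeat(Xa, 20, axis=0), Xb]),
                      np.concatenate([np.repeat(ya, 20), yb]))

for name, w in [("balanced", w_bal), ("oversampled-A", w_over)]:
    print(f"{name:13s} A={accuracy(w, Xa, ya):.2f} B={accuracy(w, Xb, yb):.2f}")
```

Run it and the oversampled model aces region A while failing region B almost every time, on the exact same inputs the balanced model split the difference on. That's all "overfitting one area can make it worse elsewhere" means.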
You then kept going down the rabbit hole, ignoring everyone who pointed out that you seemed to have a poor understanding of the system and of how NN training works, while increasingly making up things nobody said and insisting those imaginary arguments were wrong.
To the point that you accused me of claiming end-to-end could never work, despite the fact that I never said anything remotely like that, and as you can see from my first post I explicitly said the issue I raised was solvable. I was only correcting your mistaken claim that the issue did not exist. You also told me I should sell all my stock since I supposedly didn't think FSD could be solved with NNs (which, again, is the opposite of what I said).
I believe that Tesla engineers would be able to curate the data properly so regression would be minimal
I guess this is progress from your original claim, quoted above, that there would not be any regression because "that's not how learning works" or something.
But it's still wrong.
Regressions elsewhere are more likely to occur as you overtrain on a specific area, and especially when your ENTIRE system is NNs, which makes regressions much harder to predict (or, without local testing everywhere, even to FIND).
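And "find" is doing a lot of work there. With an end-to-end NN there's no rule diff to inspect; the only way to catch a regression is a held-out evaluation suite per region, because an aggregate score hides a per-city failure. A minimal sketch of that point (hypothetical names, nobody's real tooling):

```python
# Sketch: why per-region eval suites matter for an end-to-end NN.
from typing import Callable, Mapping, Sequence, Tuple

Scene = int                           # stand-in for a real driving scenario
Suite = Sequence[Tuple[Scene, int]]   # (scene, correct action)

def regression_report(old_model: Callable[[Scene], int],
                      new_model: Callable[[Scene], int],
                      region_suites: Mapping[str, Suite]) -> None:
    """Print per-region accuracy for both models, flagging regressions."""
    for region, suite in region_suites.items():
        old_acc = sum(old_model(s) == a for s, a in suite) / len(suite)
        new_acc = sum(new_model(s) == a for s, a in suite) / len(suite)
        flag = "  <-- REGRESSION" if new_acc < old_acc else ""
        print(f"{region:15s} old={old_acc:.2f} new={new_acc:.2f}{flag}")

# Dummy data: the "new" model learned city A's rule at city B's expense.
suites = {
    "city_a": [(s, s % 2) for s in range(100)],
    "city_b": [(s, (s + 1) % 2) for s in range(100)],
}
regression_report(old_model=lambda s: 0,      # 50% in both cities
                  new_model=lambda s: s % 2,  # 100% in A, 0% in B
                  region_suites=suites)
```

Note that the overall average never moved: 50% before, 50% after. A fleet-wide metric would call that release a wash while city_b became undrivable. Without a suite for every region, you don't even know where to look.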
Without explicit rules and code it gets harder, not easier, for the system to "learn" the right behavior across a wide array of locations that all have different rules, especially if a vastly disproportionate share of your data comes from one specific location.
Again, none of this is insoluble. Geotagged training data, for example, will help, if the NNs are designed to take the geotag as an input, but you'll need data from a lot of DIFFERENT places, not a lot of data from one place. You'll likewise need the system to consider time/date, holidays, and other factors that all change the rules of driving for humans but aren't obvious from a 30-second clip of video.
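Here's what the geotag input buys you, continuing the same toy sketch from above (again, my own illustration with made-up features): append a single region flag to the inputs and the two conflicting rules stop fighting each other, because the model can condition on location:

```python
# Same conflicting-rules toy, but with a geotag flag as an extra input.
import numpy as np

rng = np.random.default_rng(1)

def make_region(n, geo_flag, proceed_label):
    base = np.hstack([np.ones((n, 2)), rng.normal(size=(n, 3))])
    geo = np.full((n, 1), float(geo_flag))   # the geotag feature
    return np.hstack([base, geo]), np.full(n, float(proceed_label))

Xa, ya = make_region(1000, 1, 1)   # region A: right on red legal
Xb, yb = make_region(1000, 0, 0)   # region B: illegal

X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])
w = np.zeros(X.shape[1])
for _ in range(3000):                        # plain logistic regression
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def acc(Xr, yr):
    return np.mean(((Xr @ w) > 0) == (yr > 0.5))

print(f"A={acc(Xa, ya):.2f} B={acc(Xb, yb):.2f}")   # both near 1.00
```

Both regions now score near perfectly from one model, but only because the training set contains labeled examples from both. The geotag does nothing for a region the model has never seen, which is exactly why you need data from a lot of different places rather than a mountain of it from one.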
But your entire theory here seems to be "Tesla will just quickly make per-city perfectly safe NNs and everything will be great" based on.... I'm honestly not even sure what.
and they would quickly fix any actual regressions that come along. I don't see it as being much different from the way they create and test any other new version.
The training methods for V12 are entirely different from those of every wide-release version ever. So if you don't see it as much different from "any other" new version, it again underlines how poorly you understand everything behind the scenes and how it all works.