As Green describes, Tesla collects data about whatever event types happen to interest them at the moment.
Right: external events based on what the car sees outside of it, like "every time you think you see a speed limit sign, send me a picture of it." They take those pictures, label them (which is still done by hand, by humans, today), and then use those labeled pictures to train the NN to better recognize speed limit signs.
It's not doing anything like "comparing what the driver did with what the NN thinks it should do"; nothing like that is happening at all.
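To make the distinction concrete, here's a minimal sketch of what an event "trigger campaign" like the speed-limit-sign one might look like on the car. All names, thresholds, and structure here are my assumptions for illustration, not Tesla's actual code; the point is just that the car selects and uploads frames, and all labeling and training happens back at HQ.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """What the car's NN thinks it saw in one frame (hypothetical)."""
    label: str        # e.g. "speed_limit_sign"
    confidence: float # 0.0 .. 1.0

def should_upload(det: Detection, campaign_label: str,
                  min_conf: float = 0.3) -> bool:
    # Flag a frame when the car *thinks* it saw the campaign's target,
    # even at low confidence; human labelers at HQ decide what the
    # frame really contains.
    return det.label == campaign_label and det.confidence >= min_conf

def collect(frames, campaign_label="speed_limit_sign"):
    # The car only selects frames to send home; no on-car learning
    # or driver-vs-NN comparison happens here.
    return [d for d in frames if should_upload(d, campaign_label)]
```

Usage: feeding in a mix of detections, only the sufficiently confident speed-limit-sign frames would be queued for upload.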
Once FSD is feature complete and good enough, the interesting events will likely include discrepancies between the FSD and the driver or between two versions of FSD.
There won't be two versions of FSD running. The entire point of the second system is having immediate, redundant fail-over if one has a problem.
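Roughly, the redundancy works like this. A minimal sketch, assuming two compute nodes that each produce a driving plan and can fault independently; the class and function names are hypothetical:

```python
from typing import Optional

class Node:
    """One of two redundant compute nodes (hypothetical model)."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def plan(self) -> Optional[str]:
        # Returns a driving plan, or None if this node has faulted.
        return f"plan-from-{self.name}" if self.healthy else None

def select_plan(primary: Node, backup: Node) -> str:
    # Use the primary's output; fail over to the backup immediately
    # if the primary faults. The two outputs are not compared against
    # each other or logged as training "discrepancies" in this model.
    p = primary.plan()
    if p is not None:
        return p
    b = backup.plan()
    if b is not None:
        return b
    raise RuntimeError("both compute nodes faulted")
```

The design choice is the whole point of the quote above: the second node exists so the car keeps driving through a hardware fault, not so two FSD variants can be A/B-compared.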
The cars only collect data to feed back to HQ; they do not individually or independently "learn" anything (it would be a nightmare to diagnose differences between cars if they did).
Beyond that, we can speculate about it someday doing things vastly different from what it does now, but none of that is remotely similar to what it does today.
Add the fact that the current code is largely headed for the trash can by end of year, in favor of an entirely rewritten system, and the speculation gets even further from anything grounded in current information.
Hopefully folks like Green will still have enough visibility into what's really happening to clarify these things.