Yes. In many situations, FSD corrects itself after making an initially wrong lane choice. It seems obvious that it should notice and remember the self-correction and report it back to the cloud. If the cloud receives multiple consistent reports, it should automatically integrate the change into enhanced maps and, after some period of time, push an update to cars in that region. I do wish Tesla would incorporate anonymized, location-based collective driving experience into its AI model.
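To make the idea concrete, the cloud-side aggregation could work roughly like the sketch below. Everything here is hypothetical (the class name, the report format, the threshold); it illustrates the "multiple consistent reports trigger a map update" logic, not anything Tesla actually runs:

```python
from collections import defaultdict

class LaneCorrectionAggregator:
    """Hypothetical server-side aggregator. Each car reports a lane
    self-correction as (road_segment_id, corrected_lane). Once enough
    independent cars agree, the enhanced map is updated for that segment."""

    def __init__(self, min_reports=25):  # made-up threshold
        self.min_reports = min_reports
        # (segment_id, lane) -> set of anonymized vehicle ids
        # (a set deduplicates repeat reports from the same car)
        self.reports = defaultdict(set)
        self.enhanced_map = {}  # segment_id -> preferred lane

    def report(self, vehicle_id, segment_id, corrected_lane):
        """Record one anonymized self-correction report.
        Returns True when the threshold is crossed, i.e. when an
        update would be pushed to cars in that region."""
        self.reports[(segment_id, corrected_lane)].add(vehicle_id)
        if len(self.reports[(segment_id, corrected_lane)]) >= self.min_reports:
            self.enhanced_map[segment_id] = corrected_lane
            return True
        return False
```

Requiring agreement from many independent cars is what keeps one vehicle's bad day (or a temporary construction detour) from rewriting the map for everyone.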
.2 is smoother, but even today it tried to get into the left lane about 150 feet before a right turn, in traffic and for no reason. There was no car to pass; traffic was just breezing along.
Clearly, vision needs to be the primary decision maker, since road conditions can change. But knowing that 100% of drivers are in the right lane before making a right turn needs to influence its decision-making model somewhere. This wasn't a complex situation with a special turning lane, just a four-lane road with an upcoming right turn.
I did use the camera report.
Current visual perception should remain the dominant factor in real time, but the updated enhanced mapping should be used to bias the initial perception, especially when perception is unsure and not highly confident. This is, after all, how people actually drive, as others have pointed out. In this limited way, FSD could learn from its own driving experience even though the NN itself does not learn in the vehicle.
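The "bias only when unsure" idea above can be sketched in a few lines. All of the names, weights, and the confidence threshold are invented for illustration; this is one plausible way to fuse a map prior with vision, not a description of any real FSD code:

```python
def choose_lane(perception_scores, map_prior_lane, perception_confidence,
                confidence_threshold=0.8, prior_weight=0.3):
    """Blend real-time perception with an enhanced-map prior.

    perception_scores: dict mapping lane -> score from the vision stack.
    When perception is confident, trust it outright. Otherwise, nudge
    the scores toward the lane the enhanced map suggests.
    Thresholds and weights here are illustrative assumptions."""
    if perception_confidence >= confidence_threshold:
        # Vision is sure of itself: ignore the map prior entirely,
        # since conditions on the ground may have changed.
        return max(perception_scores, key=perception_scores.get)
    # Vision is unsure: add a fixed bonus to the map-preferred lane.
    biased = dict(perception_scores)
    biased[map_prior_lane] = biased.get(map_prior_lane, 0.0) + prior_weight
    return max(biased, key=biased.get)
```

The key property is that the map prior can only break near-ties; a confident vision result always wins, which keeps vision as the primary decision maker while still letting accumulated fleet experience correct habitual wrong lane choices.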