Is Auto Pilot 1.0 fleet learning really a reality?

Some posts indicate that the newly introduced Autopilot "1.0" is using crowd data to improve itself. I'll admit that a lot of articles discussing this topic reference the same posts here at teslamotorsclub, and repeating the story doesn't make it more probable. I'm sure the owners reporting the seemingly self-learning behaviour feel that their car is "definitely" getting better at driving itself - but can that really be the case? I've been working in the AI field, and I'd like to briefly go through some of the facts in the case of Autopilot and similar systems:

Fundamentally, learning requires a feedback mechanism to know whether a given behaviour should be changed. In other words, just driving around doesn't teach the car much. In the case of the Tesla Autopilot, the surest trigger for corrective action is when Autopilot demands that the driver take over. In that case, the moments before and after contain crucial data for learning via the sensor suite. Among the questions to be asked: What happened before the incident? For instance, did the car attempt a manoeuvre under Autopilot that was not feasible? What were the conditions on the road, the quality of the signs, etc.? What did the driver do to correct the error - braking, accelerating, changing lanes, turning? What was the outcome moments after the correction - an accident, continued driving? In addition to this, camera and sensor data, speed and GPS data, and not least the local driving regulations are essential for understanding what really happened and what the expected and wanted behaviour was.
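To make this concrete, here's a rough Python sketch of the kind of record such a feedback mechanism would need to capture around a forced takeover. This is purely illustrative - every field name here is my invention, not anything from Tesla:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorSnapshot:
    """One moment of logged data around an event (hypothetical fields)."""
    timestamp: float
    speed_mps: float
    gps: Tuple[float, float]              # (latitude, longitude)
    camera_frame_id: str
    radar_targets: List[dict] = field(default_factory=list)

@dataclass
class DisengagementEvent:
    """Everything needed to reconstruct a takeover, per the questions above."""
    before: List[SensorSnapshot]    # what happened leading up to the incident?
    driver_action: str              # braking, accelerating, lane change, turning, ...
    after: List[SensorSnapshot]     # the outcome: accident, continued driving, ...
    road_conditions: str            # weather, sign quality, lane markings
    local_regulations: str          # e.g. the speed limit in force at the spot
```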

The above feedback data must then be sent to Tesla and organised so that similar situations from other drivers can be identified in the data pool. This is no trivial task, as the individual situations must be precisely recognised and categorised. Next, the data must be analysed. Essentially this requires manpower, though in some cases it could be done automatically. The analysis can have several outcomes; let's go through the most obvious: 1) the situation cannot be clearly understood, perhaps for lack of sufficient sensor data; 2) the situation is too difficult to handle with the existing sensor/programming ambition; 3) the situation can be handled (better) with updated programming or data input.
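As a toy illustration of that triage (my own sketch - Tesla's actual pipeline is unknown to me, and the rules below are made up), the analysis step would route each recognised situation into one of those three buckets:

```python
from enum import Enum

class AnalysisOutcome(Enum):
    INSUFFICIENT_DATA = 1   # 1) situation cannot be clearly understood
    OUT_OF_SCOPE = 2        # 2) beyond the current sensor/programming ambition
    FIXABLE = 3             # 3) handled (better) with updated code or data

def triage(event) -> AnalysisOutcome:
    """Hypothetical automatic triage; in practice much of this needs a human."""
    if not event.before or not event.after:
        return AnalysisOutcome.INSUFFICIENT_DATA  # e.g. sensor log is incomplete
    if event.road_conditions == "unknown":
        return AnalysisOutcome.OUT_OF_SCOPE       # sensors can't resolve the scene
    return AnalysisOutcome.FIXABLE                # candidate for a software/data fix
```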

As for the latter, new or updated software is necessary to correct the behaviour of the Autopilot - not just "sharing" a lot of data among the Autopilot-enabled Teslas around the world. To be useful, the changes must of course be uploaded to the individual cars, which can happen through the regular software updates with the user's acceptance. And this is important, because the user will then be informed of significant changes to the Autopilot. If Tesla just updated the cars (even with minor parameter adjustments) without informing the owners properly, nobody would be sure how Autopilot would react to a given situation from day to day - and that could be a dangerous road to drive, so to speak.
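A minimal sketch of that consent gate, with invented names - this is how I'd expect a responsible update flow to look, not a description of Tesla's actual one:

```python
def apply_autopilot_update(update, owner_accepted: bool) -> bool:
    """Install a behaviour-changing update only after the owner has accepted it."""
    if not owner_accepted:
        # Silently changing behaviour (even minor parameter adjustments) would
        # leave drivers unsure how the car reacts from day to day, so refuse.
        return False
    update.install()  # hypothetical installer on the update object
    return True
```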

In summary, people using Autopilot and sharing the data with Tesla give the company an invaluable source for making Autopilot ever better within the limitations of sensors and computing power. However, this can only be done by Tesla after extensive processing of the received driving data - it is not just happening by sharing data among the cars themselves. In addition, any changes to the car's (software) behaviour should, to the best of my knowledge, only be made in connection with the regularly announced software updates, with the driver's consent.
 

Elon has stated on a conference call that the fleet is learning (iirc), and he retweeted at least one article about the fleet learning (iirc). It is learning by changing behavior in spots where people override autopilot.

I'm sure you know a lot more about AI than me, but it seems you're assuming Tesla can't do something it has indicated it is doing.
 
I also don't buy that the "fleet is learning". If it were, you'd see the car taking updates constantly, which it doesn't.

I think at most there is some very basic individual learning that cars do by themselves, and then there is telemetry which gets sent to Tesla to be analyzed - probably by a human.
 
I know a little something about AI, and am not yet an owner (but heavily invested). But it seems to me that it doesn't have to be an either/or.

Sure, I would expect Tesla to get feedback on the conditions under which a human had to take over (which would lead to better future updates). However, it's possible that the current version is geofencing a cache of override decisions to make your experience better. This is precisely the kind of data that Tesla needs.
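For what it's worth, a geofenced cache of override decisions could be as simple as this sketch (my own illustration; the grid-snapping scheme and every name here are assumptions):

```python
from typing import Dict, Tuple

# Hypothetical cache: a quantised GPS position -> steering bias learned from
# a past driver override at that spot.
override_cache: Dict[Tuple[int, int], float] = {}

def location_key(lat: float, lon: float, cell_deg: float = 0.001) -> Tuple[int, int]:
    """Snap a GPS fix to a grid cell roughly 100 m across."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def record_override(lat: float, lon: float, correction: float) -> None:
    """Remember how the driver corrected the car at this location."""
    override_cache[location_key(lat, lon)] = correction

def steering_bias(lat: float, lon: float) -> float:
    """Bias the lane-keeping decision if a driver corrected the car here before."""
    return override_cache.get(location_key(lat, lon), 0.0)
```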
 
Of course the fleet is learning! If you don't own a Model S with Autopilot, then you don't have the means to judge how the system works and how it is improving. I have AP in my P85D, and it is working even better now than it did at the end of October!
The car is not learning alone; it very much looks like fleet learning to me. My car managed to navigate a complicated part of a highway without any problems just 3 weeks after I received the software update. When I drove on this part of the highway the first day I received the AP software, it asked me to take control of the wheel; 3 weeks later, at the same spot and in the same weather conditions, absolutely no problem! The car handled this part of the highway perfectly; it even followed the temporary roadworks' orange lines and ignored the normal white lines completely, as a human driver would do!
Musk said the following, and I quote: "The whole Tesla fleet operates as a network. When one car learns something, the whole fleet learns something. The car should improve each week… you'll probably notice a difference after a week or a few weeks."
Tesla can push updates to the car without a full software update; they do this all the time with updates to the Google navigation maps, adding new Supercharger locations etc.
 

Exactly. There are two kinds of updates:
- Data
- Software
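Sketched out (with placeholder URLs and function names I've made up), the two channels are independent - the data layer can refresh in the background while the driving software only changes through an accepted release:

```python
import urllib.request

def fetch(url: str) -> bytes:
    """Placeholder for the car's actual update transport."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def update_data(car) -> None:
    """Background refresh: maps, Supercharger locations, lane-geometry data.
    No firmware change and no owner interaction needed."""
    car.nav_data = fetch("https://example.com/nav-data/latest")  # hypothetical URL

def update_software(car, owner_accepted: bool) -> None:
    """Firmware release: changes driving behaviour, so it is versioned,
    announced in release notes, and installed only if the owner accepts."""
    if owner_accepted:
        car.firmware = fetch("https://example.com/firmware/latest")  # hypothetical URL
```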
 
This is where the original post's assumptions break down - updates to the Autopilot data set don't require a regular software update that users accept. Autopilot absolutely does update with new data regularly in the background.
 
Said another way, machine-learning updates can easily be pushed via server-side changes after models have been retrained on user data. This can happen without client-side updates (i.e., new software releases). This is a standard deployment practice for technology companies.
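A bare-bones version of that pattern (standard industry practice, sketched here with a hypothetical endpoint - nothing Tesla-specific):

```python
import json
import urllib.request

# Hypothetical endpoint serving the latest retrained model parameters.
MODEL_ENDPOINT = "https://example.com/models/lane-keeping/current"

def load_current_model_params() -> dict:
    """Pull the latest parameters from the server. Only the model data changes;
    the client-side code running on the car stays exactly the same."""
    with urllib.request.urlopen(MODEL_ENDPOINT) as resp:
        return json.loads(resp.read())

# e.g. refreshed on a timer in the background:
# params = load_current_model_params()
```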
 
I had earlier thought AP was learning, but in the last month it has been making many of the mistakes it made in the beginning. I think some of it is lighting: I live in the country, and since the autumn leaves fell, the low-angle sun sweeps the roads I travel with stripy shadows from the leafless trees. That may be confusing the system.