I am prepared to be told by y'all that I am crazy. Maybe I am - but hear me out. And let me make something really clear - I wish Joshua Brown were still alive and that his accident had never happened. But now that it has happened, the past cannot be undone - and we ask ourselves: what next?
Tesla implemented Autopilot in an open regulatory environment - allowing it to gather a data set of hundreds of millions of miles to train its neural networks free of interference from the media, politicians and regulation. Was it risky? Yes - as we have all found out, there was a truly horrible corner case lurking.
But how do we know how many other corner cases were discovered and fixed prior to the fatality? What if the only way to discover them is through large-scale testing that may now become politically impossible for latecomers?
Yes, pioneers often get shot in the back - but sometimes first movers do have big advantages.
Neural networks can be quickly trained to be 99% effective on a relatively small data set. For example, Nvidia published a paper in April showing that its Drive PX 2 supercomputer system was trained end-to-end to drive, using less than 100 hours of driving footage, simply by comparing a video feed to the driver's steering input. This differs markedly from Mobileye's task-compartmentalized, annotated-image training approach.
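To make that contrast concrete, here is a minimal sketch of what end-to-end "behavioral cloning" looks like - a toy model and one training step written in Python/PyTorch, with made-up tensor shapes and a simple MSE loss. It is illustrative only, not Nvidia's actual PilotNet architecture or Tesla's pipeline; the point is simply that the only "label" is the human driver's steering input.

```python
# Illustrative sketch only: a toy end-to-end "behavioral cloning" setup,
# NOT Nvidia's actual architecture or training pipeline.
import torch
import torch.nn as nn

class ToySteeringNet(nn.Module):
    """Maps a front-camera frame directly to a steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted steering angle
        )

    def forward(self, frames):
        return self.head(self.features(frames))

model = ToySteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (video frame, human steering angle) pairs.
frames = torch.randn(8, 3, 120, 160)   # 8 camera frames
human_steering = torch.randn(8, 1)     # what the driver actually did

# One training step: the network learns only by imitating the human.
pred = model(frames)
loss = loss_fn(pred, human_steering)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```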
But of course, 99% isn't good enough - as the media is now screaming.
If you really want statistical safety, then the network must be exposed to a data set large enough that corner cases which no human can predict are found through simple trial and error.
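To put rough numbers on that intuition (the figures below are hypothetical, not Tesla's): if a corner case occurs on average once every 50 million miles, a fleet that logs only a few million miles will probably never see it, while a fleet that logs hundreds of millions of miles almost certainly will.

```python
# Hypothetical illustration: probability of observing a rare corner case at
# least once, assuming it occurs (on average) once per 50 million miles and
# occurrences follow a Poisson process. All numbers are made up.
import math

rate_per_mile = 1 / 50_000_000  # assumed frequency of the corner case

for fleet_miles in (1_000_000, 10_000_000, 100_000_000, 500_000_000):
    expected_hits = rate_per_mile * fleet_miles
    p_seen_at_least_once = 1 - math.exp(-expected_hits)
    print(f"{fleet_miles:>12,} miles -> "
          f"P(corner case observed) = {p_seen_at_least_once:.1%}")

# Roughly: 1M miles -> ~2%, 10M -> ~18%, 100M -> ~86%, 500M -> ~100%.
```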
Also - remember that even the scientists don't truly understand how their own neural networks work - they openly acknowledge this. To some degree, tuning these algorithms - like the human brain - is an experimental art form. They are closed boxes in the sense that we cannot predict with 100% accuracy what a neural network will output for any given set of inputs.
Only Tesla can now say to regulators that its neural network has been "hardened" through hundreds of millions of miles of testing - nobody else can make that claim.
If this is true, then Tesla has gathered the data it needs to build a statistically safe neural network - but the fatality has possibly, at least temporarily, slammed the door shut on its competition, because anybody who tries to copy Tesla's methodology - a fleet-learning neural network - is going to get shut down by politicians and the press saying "DIDN'T YOU IDIOTS SEE WHAT HAPPENED TO TESLA? HOW DARE YOU?"
The competition will have to run their autopilots in simulation mode. Even now - almost two years after Autopilot launched - the competition's lane-assist systems are not learning anything once in their customers' fleets. Anyone who wants to replicate what Tesla has done will have to start from scratch.
We do not know exactly what level of proprietary neural network training Tesla is using - but for those of you who have not closely followed the hints dropped by Mobileye and Tesla, here are a few key ones:
1 - Mobileye said in a public presentation (I believe at its January 2016 CES talk) that EyeQ3 is the first SoC to have a neural network which not only processes information in real time after being trained back at "headquarters" - but which can also be set up to learn on the fly in customer cars. It refers to this as "DNN" - and the media widely reported it as a "coming soon" feature of EyeQ4 - but Mobileye's own CEO said in his talk that Tesla had already implemented it in Autopilot 1.0 using EyeQ3.
2 - Mobileye has also said that Tesla's is the only automaker implementation so far to use this neural network learning.
3 - Mobileye said this in January 2016 - roughly 15 months after Tesla began gathering data in October 2014.
What does this mean moving forward? Tesla can rightly tell the public that it has built a statistically safe system, and that the only way to do it is with hundreds of millions of miles of data.
Thus Tesla can argue that it should be allowed to continue operating Autopilot - but that the rest of the industry should go through the same fleet learning and high-definition map building before releasing their own systems.
However, it may now be politically impossible for any other automaker to launch such a project - except in pure simulation mode, where an autopilot runs in the background comparing its own actions to those of the human driver.
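As a sketch of what such a background "shadow mode" could look like (hypothetical - the function names and threshold are made up, and this is not any automaker's actual implementation): the system predicts a steering command for every frame, never actuates anything, and only logs the frames where it disagrees sharply with the human driver.

```python
# Hypothetical sketch of "shadow mode": the model drives only on paper,
# and frames where it disagrees with the human are logged for later training.
# get_camera_frame()/get_human_steering() are made-up placeholder callables.
DISAGREEMENT_THRESHOLD = 5.0  # degrees of steering difference worth logging

def shadow_mode_step(model, get_camera_frame, get_human_steering, log):
    frame = get_camera_frame()              # what the car's camera sees
    human_angle = get_human_steering()      # what the human actually did
    predicted_angle = model.predict(frame)  # what the system WOULD have done

    # The car is never actuated by the model; we only compare and record.
    if abs(predicted_angle - human_angle) > DISAGREEMENT_THRESHOLD:
        log.append({
            "frame": frame,
            "human_angle": human_angle,
            "predicted_angle": predicted_angle,
        })
    return predicted_angle
```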
If this is true, then nobody else will be able to catch up to Tesla's real-world data set for at least a couple more years - and, even more importantly, they won't be able to offer the public a functioning autopilot equivalent to Tesla's in that time.
The short-term hit to Tesla's reputation has been costly - true. But Tesla can rightly tell lawmakers that it is the only carmaker with this robust data, and that it is enhancing its system even further to ensure that Mr. Brown's incident never happens again.
But who else can say that? Nobody. Who else, right now, can say they have a functional, high-definition, lane-by-lane map of at least a portion of American roads? Nobody - Mobileye and others are launching map-building projects, but they won't bear fruit for some time.