Thanks for the detailed write-up. There is one assumption we are all working under, and it's that deep learning networks, even given all necessary training data, are going to be able to learn to drive an FSD car to the number of 9's needed to let me sleep.
I am not 100% convinced this is true yet; it may be, or it may not. I'm probably at around a 90% certainty level, which is good, but I'm not fully convinced they can handle all cases even with all the proper training data.
I am talking about something more fundamental. Many here are talking about how great their chip/board is (it is great, for sure), how they lead in training data (they do, by far!), and how amazing deep learning neural nets are (they are doing some cool things). BUT... no one seems to question the default assumption that: great chip/board + gazillion miles of training data + deep learning NN == FSD car.
I'd like to add that I consider this a valid viewpoint: we cannot know whether it's possible until the moment it's done.
There were a lot of doubters of SpaceX's "can rockets be landed safely and reused economically" thesis, and many of those doubters were rocket scientists.
Also note that historically Elon expressed more doubt about the ability to reuse rockets than about FSD. (!) So if you trust Elon's off-the-cuff remarks about future technologies, I think that should count for something.
Finally, if we are doubting whether a computer can drive a car more safely than humans, we should also consider what really, incredibly, stupidly bad drivers we humans are. We humans falsely believe that we can drive safely, and kill more than 1 million people a year trying ...
Even if the safety and regulatory bar is unfairly set much higher for an FSD car than for human drivers, that bar still isn't all that high in absolute terms.
So yes, I think this is another "computers won't be able to beat the chess world champion" kind of moment. They won't be able to, until they do, and then they leapfrog human chess players in ways that were hard to imagine originally.
Think about this: in two decades an FSD car will have enough processing power to file an insurance claim with estimated damages before the (at that point inevitable) crash actually happens. It might perform an online data exchange with all the other vehicles involved in the (inevitable) crash to change its behavior in the final 500-1000 milliseconds before impact: deciding which other car to crash into, and which passengers/occupants are more vulnerable and should be protected more.
If you are safe enough to get from point A to point B at non-annoying speeds with no passenger you're safe enough to do it with a passenger.
Sorry about my previous snarky reply, but I still think this is patently false if we consider safety an exercise in risk management.
If you're that good, and you give customers the option to drive themselves and get there a little faster, how many would even take it? 90%+ would rather do e-mails or nap and arrive a couple of minutes later.
Some people use taxis for leisure and convenience, but many use them out of urgency and for the ability to arrive quickly, point to point.
I also can't envision a licensing body allowing a vehicle that's too unsafe for passengers to drive unoccupied around other cars and pedestrians.
The point is that Tesla is free to make the business decision to require actual passengers to hold a driving license and act as a safety driver during a trip with passengers and possibly valuable goods on board, at least in the initial phase. Just like AirBNB insists that the renting guest hold a valid passport and credit card, and requires them to sign a contract making them liable for damages at the host's home.
If the 'FSD car driving empty' experience is super safe to everyone but simply inconvenient to its occupants - for example, a nonzero percentage of trips end with a 'remote driver' having to intervene, or with the car having to park itself because it cannot continue - then it would be an entirely valid decision for Tesla not to carry passengers in that phase, even if it was 100% safe according to the safety thresholds they (or regulators) are using.
I.e., as others here have remarked, my point is that the Tesla Network could be introduced as an automated car rental service initially. This already unlocks a healthy revenue stream: this market is ~28 billion dollars per year in the U.S. alone.
If we assume that the U.S. is about 20% of all car rentals, then the global market should be beyond 100 billion dollars.
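To make that estimate concrete, here is a back-of-the-envelope sketch. The ~$28B/year U.S. figure and the 20% U.S. share are the assumptions stated above, not verified market data:

```python
# Back-of-the-envelope market sizing, using the figures assumed above.
us_market_usd = 28e9        # assumed U.S. car rental revenue per year (~$28B)
us_share_of_global = 0.20   # assumed U.S. share of the global market (~20%)

# If the U.S. is 20% of the global market, global = U.S. / 0.20
global_market_usd = us_market_usd / us_share_of_global

print(f"Estimated global car rental market: ${global_market_usd / 1e9:.0f}B/year")
# With these assumptions the estimate comes out to ~$140B/year,
# consistent with the "beyond 100 billion dollars" claim.
```

Even if the 20% share assumption is off by a factor of two in either direction, the global market stays comfortably in the $70-280B/year range, so the "beyond 100 billion" ballpark is not very sensitive to it.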
Tesla would likely be able to match traditional car rental pricing, expand the market, and generate higher margins, thanks to automated fleet management and the much more convenient "the rented car can drive to you" aspect.
Eventually they would offer full taxi service as well - but my point is that the "FSD rental car" technological milestone is easier to reach and is a stepping stone to an automated taxi service and "you can sleep in your car while it's driving" FSD autonomy levels.
Also note that 'empty car FSD' unlocks a number of service improvements for Tesla themselves:
- A Tesla that has developed a fault requiring service inspection could drive itself to the service center while you are at work, be repaired there, and drive back by the time you need the car.
- "Test drives" could be automated to a large degree: anyone who signs up to the Tesla Network as a user would be able to experience the car for a few hours with no pressure and a modest taxi fee. (Tesla could even incentivize the first few test drives of new Tesla Network users.)
- New car delivery could be automated: the empty car could drive to the home of the owner, who would take delivery or send it back.
- Various convenience features could be implemented: 'FSD car wash' and detailing, where the car would drive itself to the car wash facility on a Friday during work hours to get prepped for a weekend trip, etc.
All of these would come with an incremental revenue stream or an improvement in the owner experience.