Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Really? Did you bother looking at what you linked to? 2023 data:

[Attachment: 2023 disengagement data]

Every disengagement for a driverless Waymo was for an "in-field retrieval", i.e. exactly what @2daMoon said. (Remote assistance, no matter how much of it is required, is not reported.)
The term "disengagement" is really only useful when there is a safety driver. A driverless car doesn't disengage itself; it might pull over and call for assistance, but it stays in autonomous mode. Roadside assistance can manually disengage it when they arrive, and that's what your spreadsheet reports. In theory a remote operator could notice an unsafe action and order the car to pull over, but that's really not how Waymo set up their system, and I'm not aware of it ever happening.

Arguably both Waymo and Tesla make money from their autonomous driving systems,
Waymo loses massive amounts of money overall, of course. They don't disclose unit economics, but looking at their utilization rate, vehicle cost and operating infrastructure I think it's pretty clear they lose money on an operating basis in Phoenix and recently-started LA. I'd guess they also lose in San Francisco, but it's a closer call.

Are the Remote Operators considered "safety drivers" in that data?

If not, how are their interventions counted?
Remote monitors don't "intervene" (though I think in theory they can). They respond to requests from the car. In most cases they simply approve the car's planned action, e.g. going around a parked UPS truck. Sometimes they overrule the car. Sometimes they overrule the car and are wrong. Conegate is the most famous example, but this January a remote monitor mistakenly told a car to go through a red light. A moped also entered the intersection. The Waymo braked and avoided a collision, but the moped fell and slid.

Remote monitors mostly exist to keep the cars from blocking traffic too often. But bringing humans into the loop introduces human error. So it's a challenge. Tesla avoids this issue completely by always having a licensed driver in the driver's seat. If/when they start to pull drivers out they'll have to deal with the same issues.

That report isn't for driverless Waymos... Notice the column that says "vehicle is capable of operating without a driver"? It says no for all of those. Apparently, those vehicles only have an ADAS system installed, not a fully-autonomous one.
No, they have the full system. Waymo tests new areas, new code drops, etc. with safety drivers before they roll out to the driverless fleet. They put "no" in that column because that particular vehicle running those particular tests in that particular area is not approved for driverless.

Which begs the question: if a driver were in all of the driverless rides, how often would they disengage? Probably a lot more often than the "in-field retrievals". At a minimum, likely every time remote assistance is required. This is where it seems like Waymo is "cheating".
No, they don't disengage when remote assistance is called. Safety drivers only disengage when needed for safety (thus the name) or when the vehicle is becoming a nuisance by blocking traffic or whatever. Or if they really need a taco.
 
I wouldn't call it cheating, per se. There has been no standard definition for "disengagement" assigned for assessing this.

It could very well be that Waymo has a very narrow classification for this term, and it does not include remote drivers "helping" by correcting the vehicle's path while in operation. They may call that an "intervention" rather than a disengagement, or some other classification of their choosing.

It makes a kind of sense for them to do it this way.

Tesla, on the other hand, applies the term "disengagement" to any time an FSD supervisor intervenes, as well as when FSD has to disengage on its own. When the supervisor does intervene, FSD is no longer engaged, by design; the supervisor must re-engage FSD.

If Waymo is not counting remote intervention as a "disengagement", the numbers associated with these terms cannot be compared between Waymo and Tesla.
Devil's advocate here, and hear me out: how far could Waymo (or any company) advance the solution by manually driving their vehicles around the country in order to collect a rough data sample from cameras? It would be tiny in comparison (and very, very late). This eventually becomes a quality-vs-quantity learning question to me.

In the not-so-distant future, does training quantity always correlate with robot performance, especially where time to market really matters? (One would assume so, given what happened with LLMs.) However, does it become gradually less important over time with the emergence of AGI, especially in the tactile domain for humanoid robots?

Let's take your smartest friend, the one who always seems to "catch on" or "play it on guitar by ear". Now 10x that ability: it's not magic. Show it once, they've got it, and can typically repeat it, if not do it better. This will also ring true in the future of autonomy with AI in the do-something tactile, kinetic space. Clearly the approach would need a comprehensive library of AGI tactile skills, but some are actually working on this strategy today. (Key phrase is "working on it".)

Therefore, as AGI improves, gap training for next-gen FSD gets smaller, as I see it. I see a possible shortcut here. It's a Hail Mary and super late, but if I were in any sort of role to compete, I'd be finding data and securing H100s ASAP, and would have started long ago. Maybe Figure 01 or others like it are training on these AGI skills right now.

And why retrofit cars with FSD (or design it in, for a minimum three-year delay) when you can just get a robot to drive any existing vehicle on the road today? I'm actually starting to convince myself that Tesla will offer this as their FSD solution for an even bigger impact on road safety. In fact, Tesla vehicles become safer when other vehicles are safer. Crazy, huh?

The future has only started. We're catching up on, I'd say, about 50 years of a stagnant, unchallenged, monopolistic, corporate-greed economy out there. Eyes on the Factory, folks. I think we get the double boom sometime this year, maybe even tomorrow. 👀🚀🚀👨‍🚀👩‍🚀
 
Remote monitors don't "intervene" (though I think in theory they can). They respond to requests from the car. In most cases they simply approve the car's planned action, e.g. going around a parked UPS truck. Sometimes they overrule the car. Sometimes they overrule the car and are wrong. Conegate is the most famous example, but this January a remote monitor mistakenly told a car to go through a red light. A moped also entered the intersection. The Waymo braked and avoided a collision, but the moped fell and slid.

This is exactly what makes any comparison of Waymo to FSD disengagements an apples to oranges situation.

FSD won't ever ask the supervising driver to approve a maneuver.

Waymo will pause and make a request for feedback from a remote operator, yet this isn't considered disengaging.

Thanks for the deep dive!
 
Is everyone (as well as the lurkers) who doesn't have FSD watching every vid from Whole Mars Catalog? Well, ya should! I only watch his commentary ones, though, as they give me a better understanding of v11 to now, since I've never had FSD. These vids have made me bullish and allowed me to accept Elon's "balls to the wall" direction. And in about two weeks I'm going to upgrade my 2018 M3 to HW3 so I can help the company gain real-world data. My wife doesn't want me to spend the money, but I feel as an investor I must.

Agreed that, as an investor of any significant amount, you should try FSD v12 firsthand (any end-to-end NN version will do). I made myself let it take over a few times during this free trial month and was blown away. NOT that it is "robotaxi ready" right now, cause our version ain't yet, but it is so far along that the light at the end of the tunnel is visible, and growing rapidly enough that, for the shortzes, it might well be the headlight of an oncoming freight train.
My $0.02. Not financial advice.
 
This FSD introduction to China sure is a good bit of business. It seems to rely on the fact that there is no intersection in the Venn Diagram between sensitive sites like military installations and tricky intersections. Tesla has no need or desire to have information on sensitive sites. It could probably get by just fine by even excluding large swathes of the country like Beijing from its training set.
 
How much did that roof cost you? I was quoted $185K (lol) last year for a roof in the panhandle.

Over 2022 and 2023 we replaced a fifty-year-old metal roof with (thick) composite, and later added 60 solar panels with four Powerwalls. The metal roof leaked at thirty years. I dug under every rock and could not come up with a Tesla Solar tile roof. The new composite roof cost $80K and the solar side cost $110K, one hell of a lot of shares, mind you. So your $185K panhandle quote, with too many cooks, may not look so bad.

Cheers

PS ~ spent four plus years in the not so Okay part of panhandle
 
I've always leaned heavily bullish on Tesla and the two biggest data points we recently received were:
1. Last week Ron Baron was (in my opinion) almost giddy in his excitement for Tesla and when pressed on a timeline he exclaimed, "now! The time is now!" He further stated Tesla was at a bottom, something I don't recall him ever saying.

2. Elon said they spent $1B on compute in Q1 and they expect to spend $10B this year. This is signal! They're expecting to increase compute spending by 150% over the Q1 run rate. Tesla and Elon will always assess risk vs. reward, and if they're willing to hit the gas on capital spending by this amount when they've been very conservative with their cash balance, they're showing us that the risk/reward on this passes the sniff test.
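The 150% figure follows from annualizing the Q1 number. A quick sketch, under my own assumption (not anything Tesla disclosed) that Q1's $1B would otherwise have continued as the quarterly run rate:

```python
# Back-of-envelope check of the implied jump in compute spend.
# Assumption (mine, not Tesla's disclosure): Q1's $1B would have
# continued as the quarterly pace, i.e. $4B for the full year.
q1_spend_b = 1.0
annualized_run_rate_b = 4 * q1_spend_b   # $4B/year at the Q1 pace
planned_spend_b = 10.0                   # stated plan for the year
increase_pct = (planned_spend_b - annualized_run_rate_b) / annualized_run_rate_b * 100
print(f"Implied increase over run rate: {increase_pct:.0f}%")  # 150%
```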
 
It was physically painful to rip up the couch and get into leap calls @144. Wish I had more furniture....
That's conviction! I debated this, but it burned me a couple of times. Now I'm finally watching one remaining call grow... one, lol. My only leap too. So leaps might just be the trick. I'm not convinced this isn't some dream, though, without the car driving itself somewhere soon. But then it's clearly game over.
 
Matt Smith on X:

I've gotta say, the $TSLA panic sellers have a fair amount of egg on their face right now. It's not as if Elon just decided to pivot to AI last week. It's not as if V12 was an empty promise that we couldn't actually try firsthand. All this information was out there, and we could actually experience it for ourselves.

The stock was plummeting right at the moment when we could verify for ourselves that FSD had reached escape velocity.

The near-term concerns were playing much too significant of a role in investors' headspace. We want to be adding when sentiment is low, not capitulating.


P.S. 200 million shares traded by 2:30 p.m.
 
That's conviction! I debated this, but it burned me a couple of times. Now I'm finally watching one remaining call grow... one, lol. My only leap too. So leaps might just be the trick. I'm not convinced this isn't some dream, though, without the car driving itself somewhere soon. But then it's clearly game over.
I am also skillful at catching falling knives, as I started @180. It felt meh. Doubling down at 144 felt viscerally painful, as if someone punched me in the stomach. That's my clue to double down... The palms have healed today and look very green : )
 
I was unaware that loss severity statistics for automated systems were higher. That would be interesting to learn more about.

Would there be a balance point where the excess of costs due to severity could be outweighed by the reduced frequency of less severe incidents?
There are numerous types of risk for which loss severity and loss frequency are directly correlated. Examples include most classes of storm risk, where high frequency and high severity go hand in hand. Hence, just try to buy storm risk in South Florida.

In automotive it varies by geography. Other things remaining equal major urban areas have high frequency of accidents but slightly lower severity.

Highly automated systems are different. Well-designed automation invariably reduces loss frequency. That applies to robotic surgery, driver automation, aircraft automation and many more too numerous to name. Just think about any arena in which well-designed automation is deployed. Every time human error decreases, loss frequency decreases, often by major proportions.

Take robotic surgery as an example. Consider a major surgery, say, removing a bladder full of tumors and replacing it with a neo-bladder built from the patient's own large intestine. Even a decade ago that was an eight-hour marathon with surgeons working in shifts. Now the surgery lasts half as long, is far more precise and far less stressful for everyone concerned. Were loss severity not a factor, we'd assume surgeon liability insurance would plummet in cost. Not so fast: if a robot makes a mistake, there is a huge predisposition to blame the machine and the imprudent people who deployed it. Now there are entirely new liability targets and vastly more consequential loss severity. In that case, I am told, all that new gear has reduced error rates by huge proportions but increased loss severity by a major factor. Thus, pretty much a rate wash, I am told.

Similarly, commercial aircraft insurance and shipping insurance have seen major rate rises, not because of frequency but precisely because they are historically such attractive low risks. A couple of B737 Max crashes and a ship destroying the Key Bridge, and suddenly there are massive claims upon massive claims. Those were only three events. Three. They are coupled with ships being hijacked in the Red Sea, tornados, floods, hurricanes and earthquakes. Suddenly nearly every insurance rate is rising, despite quite low frequencies.

The problem is that loss severity, when it comes, is incalculably high. Traditionally, reinsurers take what are called 'excess' policies at cheap rates. But all those unrelated things have suddenly happened, so reinsurance markets are severely impaired (that is a very long discussion; I'll skip it except to say Lloyd's).

Now we have a device that demonstrably reduces the risk of accidents and injury. It's new but seems magical (rather like robotic surgery, in a way), so we expect rates to drop. Nope, not anytime soon. When the inevitable accident happens and somebody is killed, what happens? Just like in robotic surgery, suddenly there are new wealthy pockets to pick and compelling stories about being killed by an infernal machine.
It takes little imagination to understand, then, that loss severity rises.

That is the way it works. Nothing arcane. It's really simple. The more dramatic the effect of the loss before a jury, the higher the loss severity. The proven benefits of automation are indisputable, but when a problem happens, drama ensues. Actuaries HATE drama!
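To put the earlier question about a balance point in concrete terms: an insurer's expected loss is roughly claim frequency times claim severity, so a large drop in frequency can be fully offset by a rise in the cost per claim. A toy sketch, with all numbers invented purely for illustration (not actuarial data):

```python
# Toy expected-loss model: expected annual loss = frequency * severity.
# All figures below are invented purely for illustration.
def expected_loss(claims_per_year: float, cost_per_claim: float) -> float:
    return claims_per_year * cost_per_claim

# Human driver: more frequent, cheaper claims.
human = expected_loss(claims_per_year=0.05, cost_per_claim=20_000)       # $1,000/yr
# Automated system: 5x fewer claims, but each one litigated much harder.
automated = expected_loss(claims_per_year=0.01, cost_per_claim=100_000)  # $1,000/yr
print(human == automated)  # True: a "rate wash" despite far fewer accidents
```

The breakeven is simply where the severity multiplier equals the inverse of the frequency reduction; push severity any higher and the safer system actually costs more to insure.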
 
Why would FSD cause loss severity go up for car crashes? I don't understand that logic at all.
Explained in another post. It is not that accident severity will go up. NOT! It is that when an accident happens, everyone involved wants to sue the deepest pockets. That means loss severity increases even while accident severity decreases.

In other words, the rarer the accident, the higher the claim when one does happen. Think Boeing 737 Max, Exxon Valdez, etc. This is not arcane, but raw economic reality when a rare accident actually happens.
 
What took them so long I wonder? Trying to maintain earnings? Or just a lack of H100 Supply? Or Dojo delays, who knows.
And who beat them to it, Meta? 😡
Capex doesn't directly affect earnings, of course. Lack of compute supply is likely a factor, but I think they have seen something in the progress of the FSD model that gives them the confidence to floor it. I know how much better the current version is and can only imagine how far along the alpha version is.
 
Explained in another post. It is not that accident severity will go up. NOT! It is that when an accident happens, everyone involved wants to sue the deepest pockets. That means loss severity increases even while accident severity decreases.

In other words, the rarer the accident, the higher the claim when one does happen. Think Boeing 737 Max, Exxon Valdez, etc. This is not arcane, but raw economic reality when a rare accident actually happens.
OK, I see your point about suing someone with deep pockets.

So for FSD, this wouldn't apply until Tesla is ready to take liability with unsupervised FSD.

As long as we are talking about supervised FSD, nothing has changed from what we have today. So if supervised FSD is safer than a driver alone, your insurance rates should go down. If Tesla is taking liability, then your insurance rate goes way, way, way down.