Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I think the red car is an S, solely based on the shape of the small window in the C pillar.

Edit: spelling.
Yup, that red beast is an S.
 
Dumb question (or please spell out acronyms, @CorneliusXX ): what is "RV" in automotive that isn't "Recreational Vehicle"?

Edit: I guessed it: "Resale Value"
Apologies, it appears I have made an ASS of myself. Finance is riddled with acronyms.

Here's the definition:
  • Residual Value ("RV"): the notional future resale value of the vehicle, set by the lessor at the time the lease is entered into - their estimate of what the vehicle will be worth when the lease ends.
    • It matters to the lessee because the monthly payments have to cover the difference between the purchase price and the RV, plus interest on the total value.
    • It matters even more to the lessor, because once they get the vehicle back at the end of the lease they have to sell it. Set the RV too high and they lose money on the sale; set it too low and their monthly payments will be uncompetitive against other lessors and they won't get the business.
    • On a loan rather than a lease it's called a balloon payment - but on a loan the borrower is usually on the hook to pay it regardless of its accuracy.
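For the numerically inclined, the lease math described above can be sketched in a few lines. The formula is the standard US lease calculation; the numbers (a $50,000 car, 55% RV, 36 months, 6% APR) are illustrative assumptions, not anyone's actual lease terms:

```python
def lease_monthly_payment(cap_cost, residual_value, term_months, apr):
    """Standard US lease math: depreciation spread over the term,
    plus a finance charge on the average of cost and residual."""
    money_factor = apr / 2400          # common APR-to-money-factor conversion
    depreciation = (cap_cost - residual_value) / term_months
    finance_charge = (cap_cost + residual_value) * money_factor
    return depreciation + finance_charge

# Illustrative numbers only: $50,000 car, 36-month lease, 55% RV, 6% APR.
payment = lease_monthly_payment(50_000, 27_500, 36, 6.0)
print(round(payment, 2))  # 818.75
```

Note how the RV pulls in both directions: a higher RV shrinks the depreciation charge but raises the finance charge, which is why lessors who set it too aggressively end up upside down at resale.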
 
Musk, who liked to manage by decreeing what metrics should be paramount, gave them their lodestar: The number of miles that cars with Full Self-Driving were able to travel without a human intervening. “I want the latest data on miles per intervention to be the starting slide at each of our meetings,” he decreed. He told them to make it like a video game where they could see their score every day. “Video games without a score are boring, so it will be motivating to watch each day as the miles per intervention increases.”
I seem to remember some of you telling me that human interventions don't matter. You said the system learns just fine using shadow mode.

Guys, shadow mode is only part of the story.

Rising miles per intervention is what Tesla wants to see. I am now more convinced than ever that a large part of the motivation for allowing FSD transfer and for lowering the price of FSD was to get more beta testers on hardware 4 ASAP. Tesla needs to measure real human interventions on both platforms.
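For concreteness, the "score" Musk describes is just fleet miles divided by human takeovers. A trivial sketch, with made-up numbers:

```python
def miles_per_intervention(total_miles, interventions):
    """The scorecard metric described above: fleet miles divided by
    human takeovers. Higher is better; zero takeovers is unbounded."""
    if interventions == 0:
        return float("inf")
    return total_miles / interventions

# Illustrative numbers only: 120,000 fleet miles, 40 takeovers.
print(miles_per_intervention(120_000, 40))  # 3000.0
```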
 
So from the latest excerpt some new information was gained:

By mid-April 2023, it was time for Musk to try the new neural network planner. He sat in the driver’s seat next to Ashok Elluswamy, Tesla’s director of Autopilot software. Three members of the Autopilot team got in the back. As they prepared to leave the parking lot at Tesla’s Palo Alto office complex, Musk selected a location on the map for the car to go and took his hands off the wheel.

When the car turned onto the main road, the first scary challenge arose: a bicyclist was heading their way. On its own, the car yielded, just as a human would have done.
For 25 minutes, the car drove on fast roads and neighborhood streets, handling complex turns and avoiding cyclists, pedestrians and pets. Musk never touched the wheel. Only a couple of times did he intervene by tapping the accelerator when he thought the car was being overly cautious, such as when it was too deferential at a four-way stop sign. At one point the car conducted a maneuver that he thought was better than he would have done. “Oh, wow,” he said, “even my human neural network failed here, but the car did the right thing.” He was so pleased that he started whistling Mozart’s “A Little Night Music” serenade in G major.


So FSD v12 was good enough to be driven by Musk by mid-April. Since then they have been improving the system until we got to see the August demo, with one intervention in a 40-minute drive. This is a bit disappointing, since we thought the big leap happened more recently, but it also shows that Tesla has been putting a lot of work into this and the demo was not as risky as it seemed.

During the discussion, Musk latched on to a key fact the team had discovered: The neural network did not work well until it had been trained on at least a million video clips. This gave Tesla a big advantage over other car and AI companies. It had a fleet of almost 2 million Teslas around the world collecting video clips every day. “We are uniquely positioned to do this,” Elluswamy said at the meeting.

Here it sounds as if Elon only liked it because Tesla had an advantage this way. More likely it's because he saw the potential of the system by extrapolating in his head what it would be able to do with even more data and compute. And it's not a million video clips that's needed - it's a million video clips of challenging situations where expert drivers have been driving expertly. They have all the failure cases from V1 to V11, and all the data they used to fix them, to start with; that's their main advantage here.

My guess is that around a year ago a side project on end-to-end started showing promise, it got more resources around the end of last year, and by April they were confident enough to let Musk try it. Musk was sold and said that from then on it would be their main priority. Somewhere between April and the demo it became clear that V12 would overtake V11 on miles per intervention, and around now it actually is better than V11 at driving but needs more validation before it goes wide (i.e., it may not require intervention as often, but it may have more catastrophic failures).

Overall it makes me more bullish on FSD, but a little bit bearish on the timeline until customers can try it.
 
I seem to remember some of you telling me that human interventions don't matter. You said the system learns just fine using shadow mode.

Guys, shadow mode is only part of the story.

Rising miles per intervention is what Tesla wants to see. I am now more convinced than ever that a large part of the motivation for allowing FSD transfer and for lowering the price of FSD was to get more beta testers on hardware 4 ASAP. Tesla needs to measure real human interventions on both platforms.
Indeed. There's another reason too. The intervention we saw on Elon's FSD 12 build came when the car was stopped at a red light: the left-turn signal turned green, and the car made a mistake and started moving forward on the still-red light. NNs that do not have a rules overlay will make mistakes like this, and it isn't clear the car would ever have stopped in time, because it had NO training data of good human drivers proceeding from a stop on a red light.

There’s only three ways to fix this FSD 12 mistake.

1. Train with more good-driver videos so that it doesn't make that mistake again. But there's no certainty you've got it nailed. There's always a chance it will screw up, and then the system doesn't know what to do once it is moving through an intersection on a red, since it's never seen that situation before. Absent cross traffic, it is likely to just keep running the red (and risk getting T-boned by a fast inbound car).

2. Train (probably in a simulation) situations where the system initially made a mistake like proceeding on a red, and show it what to do (like stop suddenly). But you need lots of intervention data to find out where the system screwed up.

3. Personally, I don't think either 1 or 2 will work well enough to be perfect. You need to do what humans do. Experienced drivers are usually driving on a human equivalent of autopilot, where you can be deep in thought, or on a complex phone call, and your unconscious NN just drives the car. Indeed, I've been in that state, and guess what: sometimes my unconscious autopilot NN will make rules-of-the-road mistakes, like starting on a red light, or the one that 95% of all drivers make - not following the rules at a four-way stop when all directions are busy. In the really bad situations, like starting on a red light, the conscious part of my brain, the part that knows the road rules, intervenes and takes over from my unconscious autopilot.

Similarly, I think Tesla will have to have C code looking over the shoulder of the NN and take over when the NN makes bad mistakes. The C code can then give control back over to NN when it has handled the situation. The nice thing about this emergency supervisor is that Tesla pretty much already has that code in the form of v11. You just have to add take over and give back control code based on bad NN decisions.

4. OR program a more complex NN like an LLM level that understands the rules of the road and that part looks over the shoulder of the driving NN.
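Option 3 (and, in spirit, option 4) amounts to a deterministic supervisor that can veto the learned planner. A toy sketch of the idea - the perception fields, the planner stub, and the single rule are all hypothetical stand-ins, not anything Tesla actually ships:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    traffic_light: str   # "red", "yellow", "green", or "none"
    speed_mps: float

def nn_planner(obs: Perception) -> float:
    # Stand-in for the learned planner: returns a throttle command in [-1, 1].
    # Imagine it occasionally makes the red-light mistake described above.
    return 0.3  # hypothetical mistaken "go" decision

def rules_supervisor(obs: Perception, nn_throttle: float) -> float:
    # Deterministic overlay: veto the NN when it violates a hard rule.
    if obs.traffic_light == "red" and nn_throttle > 0:
        return -1.0  # full brake: never proceed on a red light
    return nn_throttle  # otherwise defer to the NN

obs = Perception(traffic_light="red", speed_mps=0.0)
print(rules_supervisor(obs, nn_planner(obs)))  # -1.0: the rule layer overrides the NN
```

The appeal of this architecture is exactly what the post says: the rule layer only has to recognize a small set of hard violations and hand control back afterwards, which is much simpler than driving.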
 
I do wonder how much faith Elon has had in AI. Yes, he built a team to use narrow AI for cognition, but the whole FSD effort until this year was massively complex with 300,000 lines of C code to actually do the planning and driving. Meanwhile George Hotz and others were doing end to end NN driving years ago.
 
The FACT is that buyers in NH, VT and ME must do out-of-state transactions to purchase a Tesla, and there are other such states. So what?
Let's see: what's the difference?

Tesla's HQ is now in Texas, and they've spent around $2 billion there for the Giga Austin factory IIRC. Then there's the lithium refining facility that's being built as I type this, for another several hundred million . . . .

One would think that the Texas legislature would consider making it legal for Tesla to sell their products to Texas residents, but that's a bridge too far for them as the Texas Auto Dealers Association makes those fat campaign contributions.

It's our little Third-World hamlet in the US.
 
I do wonder how much faith Elon has had in AI. Yes, he built a team to use narrow AI for cognition, but the whole FSD effort until this year was massively complex with 300,000 lines of C code to actually do the planning and driving. Meanwhile George Hotz and others were doing end to end NN driving years ago.
That 300,000 lines of C code is V11, which is very useful for benchmarking and testing V12.

IMO they could set up a lab, feed a simulated video stream into V11 and V12, then compare the outputs and see where the different software versions make different decisions.

Same thing for old / new versions of V12.

Anything that has previously caused an intervention can be part of the video simulation for testing.
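That replay-and-compare idea can be sketched as a tiny harness: run the same recorded frames through two planner versions and log where they disagree. Everything below is a toy stand-in (frames are just numbers, planners map them to a steering value), not Tesla's actual tooling:

```python
def compare_planners(frames, planner_a, planner_b, tol=0.1):
    """Replay identical frames through two planner versions and collect
    the frames where their steering outputs diverge by more than tol."""
    disagreements = []
    for i, frame in enumerate(frames):
        a, b = planner_a(frame), planner_b(frame)
        if abs(a - b) > tol:
            disagreements.append((i, a, b))
    return disagreements

# Toy stand-ins for "V11" and "V12": they agree on easy frames, differ later.
v11 = lambda f: 0.0 if f < 5 else 0.5
v12 = lambda f: 0.0 if f < 5 else 0.2
diffs = compare_planners(range(10), v11, v12)
print(diffs)  # frames 5..9 disagree by 0.3
```

Each disagreement is a candidate clip for a human to label, which is exactly how intervention-style data can be harvested without anyone touching a wheel.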

Overall I think Tesla has a plan for FSD and we don't need to worry about the low-level details. The way most systems become more reliable is via more testing; in general, FSD improves by learning from its mistakes.

For robotaxis, insurance initially pays for any mistakes, which typically don't have drastic outcomes in low-speed urban environments. FSD is well tested in higher-speed highway environments, which are typically less complex.

Significantly better than a human is the aim, rather than zero accidents; the regulators choose what level in the march of 9's is good enough.
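The "march of 9's" is easy to make concrete: each extra nine of reliability means ten times as many miles between failures. A sketch, assuming a human baseline of roughly 500,000 miles per police-reported crash (an illustrative figure, not an official statistic):

```python
def miles_per_failure(human_baseline, extra_nines):
    """Each extra 'nine' of reliability cuts failures 10x,
    i.e. 10x more miles between failures than the baseline."""
    return human_baseline * 10 ** extra_nines

baseline = 500_000  # assumed human miles per crash, for illustration only
for n in range(1, 4):
    print(f"{n} extra nine(s): {miles_per_failure(baseline, n):,} miles per failure")
```

The exponential growth is why validating each successive nine takes roughly ten times the fleet miles of the one before it.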
 
I do wonder how much faith Elon has had in AI. Yes, he built a team to use narrow AI for cognition, but the whole FSD effort until this year was massively complex with 300,000 lines of C code to actually do the planning and driving. Meanwhile George Hotz and others were doing end to end NN driving years ago.

As a side note for those without a software background, I want to point out that 300,000 lines of C code really isn't all that much. The Linux kernel has over 30 million lines. And that's just the kernel.

Also, lines of code are not created equal. It's what those lines do that is important. And sometimes you can do something really complex and elegant in very few lines of code.

When Elon said they were replacing 300,000 lines of code, he wasn't telling us anything. When he said they were replacing those lines with end-to-end neural nets, that spoke volumes.
 
Let's see: what's the difference?

Tesla's HQ is now in Texas, and they've spent around $2 billion there for the Giga Austin factory IIRC. Then there's the lithium refining facility that's being built as I type this, for another several hundred million . . . .

One would think that the Texas legislature would consider making it legal for Tesla to sell their products to Texas residents, but that's a bridge too far for them as the Texas Auto Dealers Association makes those fat campaign contributions.

It's our little Third-World hamlet in the US.

Oh, but you are mistaken. Many of the legislators own car dealerships themselves. They don't need to be bought by campaign contributions.

You really should do more research before going off on a rant.

Besides, you can find this same sort of thing anywhere; it isn't unique to Texas. I'd be willing to wager that you found plenty of faults with the last state you lived in before migrating to Texas, and probably went on about them too, while those subjected to the rant rolled their eyes at you the way some may do on this forum.

:rolleyes:

Have you considered the possibility of doing something about it yourself, rather than going on and on about these several issues which trouble you in the hope that somebody else will step up and take care of it for you?

In case you are young, and simply haven't realized that change, particularly in government, happens very, very slowly, come back in a decade and there might be some discernible degree of progress.

If you want to be influential, perhaps adding some Dale Carnegie to your reading list might be a good start.
 