
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Panasonic recently abandoned the 4680 format in their new plant in Oklahoma, as I recall. Given all of the benefits of the 4680, they must not have been convinced that the production problems would be resolved in a timely manner.

Adding to @Knightshade's articles: Panasonic's 4680 is a different design from Tesla's. Theirs uses five tabs, and Panasonic does not have the dry battery electrode (DBE) tech or Tesla's other improvements.
 
Anyone thinking EVs will overtake ICE anytime soon is mistaken; judging by my neighbourhood, this transition will take many more years. It felt like 2018 again, with strangers asking me about my EV and taking photos of it.

Small disagreement: price and availability of EVs are a big factor in the pace of the transition.

In Australia we have recently had, for the first time, three small EVs competing strongly on price in the roughly AUD $40,000 category: the Ora Cat, MG4, and BYD Dolphin. The price gap between a new ICE and a new EV is closing all the time. People will do their sums on ROI, factoring in fuel and maintenance savings.
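For anyone doing those sums, here's the back-of-the-envelope version. Every figure below is an illustrative assumption, not real market data; plug in your own prices:

```python
# Toy EV-vs-ICE payback calculation; all numbers are illustrative guesses.
ev_price = 40_000          # AUD, small EV in the ~$40k bracket
ice_price = 32_000         # AUD, comparable ICE hatchback (assumed)

km_per_year = 15_000
petrol_cost_per_km = 0.13  # ~7 L/100 km at ~$1.90/L (assumed)
ev_cost_per_km = 0.045     # ~15 kWh/100 km at ~$0.30/kWh (assumed)
maintenance_saving = 300   # AUD/year, fewer consumables (assumed)

annual_saving = km_per_year * (petrol_cost_per_km - ev_cost_per_km) + maintenance_saving
payback_years = (ev_price - ice_price) / annual_saving
print(f"saves ${annual_saving:,.0f}/yr; price gap repaid in {payback_years:.1f} years")
```

With these made-up numbers the gap repays itself in about five years; cheaper EVs or pricier petrol shorten that quickly.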

Tesla Gen3 is the next step. I expect it to compete strongly with the EVs in the price category above, but also to ramp production volumes fast enough to move the needle.

I also think Cybertruck will surprise many people, and change some people's perceptions of EVs.

Patchy adoption of EVs, with some regions moving faster than others, is common worldwide; both the fastest and slowest adopters feed into what is ultimately the average rate of adoption.

My overall point is: price and availability affect the average rate of adoption.

Vehicle model choice and fast-charging availability also affect the rate of adoption, and the long-term trend is more choice, lower prices, and more fast chargers.
 
Adding to @Knightshade's articles: Panasonic's 4680 is a different design from Tesla's. Theirs uses five tabs, and Panasonic does not have the dry battery electrode (DBE) tech or Tesla's other improvements.
I will just add that the 4680 is a form factor, much like an AA battery. The process and internal design will vary quite a bit depending on the manufacturer and how it is being used.
 
So many good questions unasked and unanswered at Q2 CC....
Here's what I want to know....

  • Is "solving" FSD in 2023 contingent on having Dojo operational?
  • Elon was quoted recently as having said solving FSD was easier than previously thought...what is that in reference to?
  • Will we get a repeat of the steel ball vs. window at the CT delivery event? Musk tweeted in 2019: “Should have done steel ball on window, *then* sledgehammer the door. Next time...”
I've got plenty more...
 
Another drop on Monday because Mr. Market doesn't like changes? Shout out to all the early adopters!


 
Elon was quoted recently as having said solving FSD was easier than previously thought...what is that in reference to?
While we can't be certain, one likely explanation is "software 2.0": moving more of the decision-making into the neural nets, where it is improved by training.

My simplistic understanding is: if FSD simply copies the decisions good drivers make in particular situations, it is likely to get more things right.

When FSD gets something wrong, Tesla simply sources more training data for that particular area and retrains the NNs until it gets it right.

It is no real surprise to me that programmers trying to code logic for FSD can't anticipate all edge cases, but sufficient training data can cover them.
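For illustration only, here's a toy version of that retrain-on-failures loop: a tiny model imitates a "good driver" rule, and wherever it disagrees we source more data near the failures and retrain. Nothing here reflects Tesla's actual pipeline; every name and number is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def good_driver(X):
    # Ground-truth policy to imitate: brake (1) if the obstacle is close.
    return (X[:, 0] < 0.3).astype(float)

def train(X, y):
    # Fit a one-feature logistic model by plain gradient descent.
    w, b = 0.0, 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * X[:, 0] + b)))
        g = p - y
        w -= 0.5 * np.mean(g * X[:, 0])
        b -= 0.5 * np.mean(g)
    return w, b

def predict(params, X):
    w, b = params
    return ((w * X[:, 0] + b) > 0).astype(float)

X = rng.uniform(0, 1, (200, 1))   # observed "situations"
y = good_driver(X)
params = train(X, y)

for r in range(5):
    wrong = predict(params, X) != y
    print(f"round {r}: {int(wrong.sum())} disagreements with the good driver")
    if not wrong.any():
        break
    # "Source more training data" near the failure cases, then retrain.
    extra = np.clip(X[wrong] + rng.normal(0, 0.05, (int(wrong.sum()), 1)), 0, 1)
    X = np.vstack([X, extra])
    y = good_driver(X)
    params = train(X, y)
```

The errors cluster near the decision boundary, and piling more examples onto exactly those cases is what pushes them down, which is the whole data-engine idea in miniature.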
 
Elon was quoted recently as having said solving FSD was easier than previously thought...what is that in reference to?
My take is that they were breaking the problem into smaller pieces and then trying to build an NN for each of those. Instead, they now use one big end-to-end NN without intermediate, human-comprehensible meanings.

Basically, they were trying to pre-solve the problem and making things harder on themselves.
 
My take is that they were breaking the problem into smaller pieces and then trying to build an NN for each of those. Instead, they now use one big end-to-end NN without intermediate, human-comprehensible meanings.

Basically, they were trying to pre-solve the problem and making things harder on themselves.
Don't know if you're right or not, but, if you are, your explanation makes all sorts of sense.

If there's one thing that I'm very aware of when it comes to NNs, it's that they're exceedingly non-linear: small, linear variations in the weights can cause massively variable results in the outputs.
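A toy illustration of that sensitivity (arbitrary sizes and numbers, nothing to do with Tesla's nets): push the same input through a deep tanh network twice, once with every weight nudged by 0.1%, and compare the outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
depth, width = 50, 64
# Weight scale chosen to put the net in a "chaotic" regime, where tiny
# parameter changes compound layer after layer.
Ws = [rng.normal(0, 2.0 / np.sqrt(width), (width, width)) for _ in range(depth)]
x = rng.normal(0, 1, width)

def forward(weights, x):
    h = x
    for W in weights:
        h = np.tanh(W @ h)
    return h

base = forward(Ws, x)
nudged = forward([W * 1.001 for W in Ws], x)  # every weight changed by 0.1%
drift = np.linalg.norm(base - nudged) / np.linalg.norm(base)
print(f"0.1% weight change -> {drift:.1%} output change")
```

In this regime the output drifts far more than 0.1%, which is exactly the non-linearity being described.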

It's got to be ridiculously complex to train this thing, though. It's like the Spherical Cow syndrome: give a physicist some problem like, I dunno, "Drop a cow from 10,000 feet. What's its terminal velocity?" and their first step is, "Assume a spherical cow. Then..." And, for that problem, they'll be roughly right. Minor variations in the shape of the cow probably don't affect the Splat! all that much. And an NN given this kind of problem would do the same thing: delete all the legs, horns, fur, etc., and get a vaguely correct result.

But, now, we have a car traversing downtown, with pedestrians, dogs, baby carriages, other cars going into and out of parking spaces, coming at right angles, and, well, everything, all the time, all at once. How much sphericity can a problem like this take? Dunno. It'll be interesting to see what they come up with.
 
Place your bets: in what quarter and year will the Model Y pass the F-150 in the US?

Haha, never? Because the Model Y competes with the Mach-E, and the Cybertruck competes with the F-150. For the most part, people don't cross-shop mid-size SUVs with full-size pickups.

Now, if we rephrase the question to when the Cybertruck passes the F-150 in unit sales per year, I'd say 4 years, depending on whether you're lumping all the F-Series trucks (F-250, F-350, ...) in with the F-150.

Either way, the goal is to kill the gas/diesel stranglehold on the North American truck market. When it's no longer cool for a high-school boy to want an F-150, you'll know we have won.

Cheers!
 
It’s false that Tesla has run out of compute on HW3.

It's really not.

They ran out of compute on one node in mid-2020 and have had to spill over to the second node ever since. See the full thread here, where Green mentions this has been the case since then and that it's not getting better.

And that's a system VASTLY less capable than the L4 system promised to pre-3/19 buyers.

So the idea that they'll somehow magically add a ton of functionality that currently does not exist at all and ALSO reduce required compute by at least 50% to fit back into a single node (since you need redundancy, per the AI Day presentation, to run without a human in the car) is magical thinking unsupported by any evidence.

I don’t remember which video in particular, but James Douma debunked this very thoroughly

<citation required>

And ideally a citation that has more substance than "they'll somehow just make everything way more efficient *waves hands*"

Again, nobody knows how much compute is actually needed for L4 driving on a Tesla. Not Elon, and certainly not James Douma, who AFAIK has even less visibility into Tesla code than Green does... and that will be true of everyone until they actually achieve it. Which they have not.

he has WAY more credibility than posters in the FSD area.

Douma is the guy who told us they didn't need LIDAR but that radar was super useful to FSD and it totally made sense that they used it... right up until Tesla announced they were removing radar... at which point he did a Dave Lee video telling us how, no, just kidding, radar wasn't really super useful at all and it totally makes sense to remove it.

But we do know they're WAY past using all the compute available in a single HW3 node, even at L2, and have been for roughly 3 years now.

In fact, hilariously, Douma is one of the original sources for HW3 having run out of compute years ago:

 

I'm not sure if this video has been posted, but the comments from Larry from around 3:30 to around 6:40 are very interesting.

Will the licensing of Tesla tech be limited to fast charging and FSD, or will it extend to software, and some hardware components?

Legacy US, EU, and Japanese automakers need to be able to compete with Chinese EVs; licensing tech from Tesla is a way of jumping forward in terms of software, efficiency, and features, with a low R&D spend, a fast timeframe, and minimal risk.

When Tesla is selling software, that is a high-margin business. Tesla can ensure that its software can only be used in EVs.

It would be a win for Tesla, a win for the mission, and possibly a short-term win for the carmaker smart enough to license Tesla tech.
 
So many good questions unasked and unanswered at Q2 CC....
Here's what I want to know....

  • Is "solving" FSD in 2023 contingent on having Dojo operational?
  • Elon was quoted recently as having said solving FSD was easier than previously thought...what is that in reference to?
  • Will we get a repeat of the steel ball vs. window at the CT delivery event? Musk tweeted in 2019: “Should have done steel ball on window, *then* sledgehammer the door. Next time...”
I've got plenty more...
He may have meant conceptually easier, but not necessarily computationally easier.

The call made it clear that Tesla is spending heavily on AI compute resources now and going forward. Specifically, they could've (finally) wrapped some sort of evolutionary computation around their neural-network training.
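Pure speculation on my part as to what that would look like, but in the spirit of it, here's a minimal evolution-strategies loop (toy objective, made-up hyperparameters) that improves parameters by perturb-and-select rather than backprop:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Toy stand-in for whatever you'd actually score (e.g. driving quality).
    return -np.sum((theta - 3.0) ** 2)

theta = np.zeros(10)
sigma, lr, pop = 0.1, 0.05, 100   # noise scale, step size, population

for step in range(500):
    noise = rng.normal(0, 1, (pop, theta.size))
    scores = np.array([fitness(theta + sigma * n) for n in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    # Move parameters toward the perturbations that scored best.
    theta += lr / (pop * sigma) * (noise.T @ scores)

print(theta.round(2))  # drifts toward the optimum at 3.0
```

The appeal is that nothing needs to be differentiable; you only need a way to score candidates, which is where all that training compute gets spent.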

FWIW, I think they will have (or should have) also added some sort of location specificity by now since locales have different driving norms.

As I have stated for years, 2024 is and remains my target for actual FSD (meaning at least as good as the average human on a large percentage, though not necessarily all, of roads in the US).
 
I also think that many of those whining now about Elon "blathering" on about autonomy on the quarterly calls will in the future be complaining, because they are out of position, when the discontinuity due to autonomy hits the stock price.

"But..but...but..he didn’t tell us."

"Why yes. Yes he did. On pretty much every opportunity and every call."
 
I wouldn't get too excited by that tweet. Green seems to live for the days when he can accuse Tesla of screwing up. He also rushes out information based on features he sees in code that are not even active.
Yup. Green is knowledgeable, but he is a drama queen. He believes taking potshots at Tesla increases his legitimacy.
 
It's really not.

They ran out of compute on one node in mid-2020 and have had to spill over to the second node ever since. See the full thread here, where Green mentions this has been the case since then and that it's not getting better.

And that's a system VASTLY less capable than the L4 system promised to pre-3/19 buyers.

So the idea that they'll somehow magically add a ton of functionality that currently does not exist at all and ALSO reduce required compute by at least 50% to fit back into a single node (since you need redundancy, per the AI Day presentation, to run without a human in the car) is magical thinking unsupported by any evidence.

<citation required>

And ideally a citation that has more substance than "they'll somehow just make everything way more efficient *waves hands*"

Again, nobody knows how much compute is actually needed for L4 driving on a Tesla. Not Elon, and certainly not James Douma, who AFAIK has even less visibility into Tesla code than Green does... and that will be true of everyone until they actually achieve it. Which they have not.

Douma is the guy who told us they didn't need LIDAR but that radar was super useful to FSD and it totally made sense that they used it... right up until Tesla announced they were removing radar... at which point he did a Dave Lee video telling us how, no, just kidding, radar wasn't really super useful at all and it totally makes sense to remove it.

But we do know they're WAY past using all the compute available in a single HW3 node, even at L2, and have been for roughly 3 years now.

In fact, hilariously, Douma is one of the original sources for HW3 having run out of compute years ago:

Yes, they ran out of compute on one node. But we (apart from @Discoducky, maybe) don't know how much effort they have already put into optimizing the FSD code.

What I know is:
- Unoptimized code has massive potential for optimization.
- Once you have a working product, a much easier approach might become apparent.
- NNs can be massively optimized by pruning, without losing much accuracy (though this might take a great effort that isn't worthwhile on a fast-moving target); see the sketch below.
- Tesla bought up a leading company in the NN-pruning space a few years ago; what did they do with them?

Yes, there's a lot of hopium that they'll manage to squeeze it into a single HW3 node in the end. But your absolute statements to the contrary aren't based on pure facts either.
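To illustrate the pruning point from the list above: magnitude pruning zeroes the smallest weights, which carry a disproportionately small share of the output. A toy example with a random matrix (so the exact numbers mean nothing for a real, trained net, which tolerates far more sparsity after fine-tuning):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (256, 256))   # stand-in "layer"; real layers are trained
x = rng.normal(0, 1, 256)

def magnitude_prune(W, sparsity):
    # Zero out the smallest-magnitude fraction of the weights.
    cutoff = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) < cutoff, 0.0, W)

y = W @ x
for s in (0.5, 0.8, 0.9):
    err = np.linalg.norm(y - magnitude_prune(W, s) @ x) / np.linalg.norm(y)
    print(f"{s:.0%} of weights zeroed -> {err:.0%} relative output change")
# Real pruning pipelines also fine-tune after pruning to recover accuracy.
```

Even here, zeroing half the weights removes only a small fraction of the output energy, which is why pruning plus retraining can shrink compute so effectively.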
 
I think there is some confusion on the topic of Dojo compute vs. in-car compute, which is understandable, but I think it's a red herring. Dojo can scale MASSIVELY, making it easier to analyze phenomenal data sets to build a super-accurate NN.
That doesn't necessarily make the NN bigger, and it's the NN that runs in the car.
It's perfectly possible that Tesla's dataset and training compute could 10x or even 100x and still produce the same-size NN, one that could happily run on HW4 or even HW3.

These are different things.

To think about it another way: when humans read better science textbooks, derived from more accurate research, we get better at understanding the world. Our brains don't get bigger, though. :D
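A trivial way to see this (layer sizes below are made up): a network's parameter count is set by its architecture alone, and the volume of training data never enters the formula.

```python
def mlp_params(layers):
    # Dense-net parameter count: weights plus biases for each layer pair.
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

arch = [1024, 512, 512, 256, 64]          # hypothetical architecture
for clips in (10**6, 10**8, 10**10):      # 10,000x more training data...
    print(f"{clips:.1e} clips -> {mlp_params(arch):,} params")
# ...and the deployed model is exactly the same size; only the values
# of its weights change.
```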
 
The sum of two NNs solving two halves of a problem is bigger than one NN solving the whole problem.
I know nothing specifically, but divide and conquer (divide et impera) is standard practice when dealing with an extremely difficult problem you don't know how to solve, nor where the boundaries are.
I don't blame them for trying to understand smaller pieces before tackling it all together.
Maybe, technically, the right idea was to do it in one go, but there is a good chance they would have learned much more slowly that way.
 