Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Yup. Green is knowledgeable, but he is a drama queen. He believes taking potshots at Tesla increases his legitimacy.

And a major dick. Tossing shade at Tesla for "increasing its IP theft" while they simultaneously opened up NACS (so that anyone can use this open standard without paying a license fee) is a total dick move. Never liked him before, like him less now.
 
I know nothing specifically, but divide et impera is a standard practice when dealing with an extremely difficult problem where you don't know how to solve it, or even where its boundaries are.
I don't blame them for trying to understand smaller pieces before tackling it all together.
Maybe, technically, the right idea was to do it in one go, but they would likely have learned much more slowly that way.
Of course, divide et impera is the right approach to start making some progress: solve the part you are able to solve now and leave the rest for later.
After you reach a bunch of partial solutions, you may notice similar sub-parts that could be merged or replaced with a single shared sub-part.

Optimization comes after solving a problem.
 
Sorry, but sharing your IP (to improve your own ecosystem) doesn't excuse breaking the terms of use for other IP you're using yourself.

It's a fact that Tesla (like many other companies) is breaking the terms of open source software.
Doesn't really matter who's pointing it out...

AFAIK they would have to do just two basic things to comply:
- Provide a list of all the open source software used in the car, together with their licenses
- Upon request, provide all modifications to those applications

I can't see any reason not to provide the first one.
And apart from security and HW-specific changes (which might not even be affected), there's little reason to ignore the second...


It's essentially a more forgiving version of Tesla's own fair use policy.
 
As far as I am aware not officially.

This article is useful:
That article, and the Q2 slide deck, both appear to under-represent some of the current vehicle assembly capacities (end Q2-23), which sum to > 2,025,000/yr.



Shanghai exceeded 230k/qtr in both Q4-22 and Q2-23. The 9-month average for Shanghai production appears to be 232k/qtr after including a slow Q1-23, i.e. roughly 930k/yr. Actual production for the last 12 months was 895k. The ">" symbol in the slide deck is doing some heavy lifting.

On the flip side, the stated capacities for Berlin and Texas appear to assume there is no cell supply constraint. There are periodic high-production-rate bursts that get celebrated as milestones (e.g. 5k/wk in Berlin, 25-Mar-23), and these seem designed to test out the lines/teams for the future situation as cell supply progressively increases. The stated volume for Berlin of 375k implies at least 7.5k/wk, yet they don't have the fourth shift running. So the three shifts can't yet reach 6k/wk unless they work Saturdays, which the tea leaves suggest they don't yet have cell supply for. Apparently Tesla are about to apply to the German government to lift the production licence volume from 500k/yr to 1m/yr, which suggests Tesla either expect the licence to take a while to grant, or expect cell supply to improve soon (but I cannot remember where I saw this info). Personally I think Berlin is intended to go to 2m/yr in due course.
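A quick back-of-envelope check of the run rates above. All inputs are this post's own figures, not official Tesla data, and the ~50 production weeks per year is an assumption:

```python
# Sanity-check the annualization arithmetic from the post above.
# Inputs are the post's estimates, not official Tesla numbers.

def annualize_quarterly(rate_per_qtr: int) -> int:
    """Convert a quarterly production rate to a yearly run rate."""
    return rate_per_qtr * 4

def implied_weekly_rate(yearly_capacity: int, production_weeks: int = 50) -> float:
    """Weekly rate implied by a stated yearly capacity (assumes ~50 production weeks)."""
    return yearly_capacity / production_weeks

# Shanghai: 232k/qtr average annualizes to 928k/yr (the post rounds to ~930k)
print(annualize_quarterly(232_000))

# Berlin: the stated 375k/yr implies at least 7.5k/wk
print(implied_weekly_rate(375_000))
```

Which is why the stated 375k for Berlin looks optimistic against the celebrated 5k/wk bursts.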

 
Yes, they ran out of compute on one node.
They will use all the compute they have and scale their NN for best performance, which generally means giving it as much compute as is available. If they had HW5 they would use all of that compute too, i.e. "running out of compute".

Using both chips is clever. To get a high safety level they need redundancy: if one chip fails, they need to be able to drive with the other chip. But why not use both chips while both are working? If one chip fails, they can switch to a half-sized backup network, or run the full network at half the frame rate while they slow down and stop. The big risk is going totally blind with a high chance of a catastrophic accident, not inconveniencing the driver by stopping the car when a chip has died, or running at slightly higher accident risk the few times a chip has died.
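A minimal sketch of the failover policy described above. All names, states, and the exact degraded behavior are hypothetical illustrations of the idea, not Tesla's actual implementation:

```python
from enum import Enum

class ChipState(Enum):
    OK = "ok"
    FAILED = "failed"

def plan_inference(chip_a: ChipState, chip_b: ChipState) -> str:
    """Decide how to run the network given chip health.

    Hypothetical policy matching the post: both chips healthy -> full
    network at full frame rate; one chip dead -> degrade (half-sized
    backup net or half frame rate) and pull over; both dead -> the
    vehicle is blind and must stop immediately.
    """
    healthy = sum(c is ChipState.OK for c in (chip_a, chip_b))
    if healthy == 2:
        return "full network, full frame rate"
    if healthy == 1:
        return "degraded: backup network or half frame rate, slow down and stop"
    return "emergency stop: no vision compute available"

print(plan_inference(ChipState.OK, ChipState.FAILED))
```

The point is that the degraded mode only has to be good enough to stop safely, not to keep driving indefinitely.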
 
Why are you trying to dodge the real issue? greenthelonely is stealing Tesla's IP by operating FSD software against the terms of its license agreement (being a hacker doesn't justify breaking IP laws).

Just because he's too insignificant a bug to swat doesn't make him a folk hero.
Lots of off-topic possibilities here, so maybe, as a non-moderator, I might suggest we move Green's hacking and Tesla's misuse of IP to a different thread. Which would mean we all just drop this interesting topic, or move it to one of the FSD threads.
 
Interesting Twitter post from Chamath just now regarding his POSITIVE reaction to 2Q results:


He uses anything for his pump-and-dumps; he has used TSLA and Elon too.
In 2021, while he was pumping, he dumped all his TSLA.
It would have been decent if he had at least stopped pumping while he was selling.
I don't even want to get into his SPAC pump-and-dumps. Retail investors who bought into his SPACs were robbed by his scheme.
 
Yes, there's a lot of hopium that they'll manage to squeeze it into a single HW3 node in the end. But your absolute statements to the contrary aren't based on pure facts either.


My only "absolute" statement was that they ran out of single-node compute on HW3 several years ago, while still not having a lot of the basic functionality L4 would require.

That is 100% pure fact, and I provided citations for it.

Your entire reply is then hopium about how Tesla can change those facts someday. It is mostly handwaving and magical thinking: that they'll somehow make the existing functionality massively more efficient (a shrink of roughly 50%), AND add code providing the slew of features and capabilities the system doesn't have at all today (a completed OEDR, to name just one of many), AND fit all of that in the same reduced-by-half space.


In any event, markets open in 30 minutes. I highly suggest you take any further discussion over here, where the topic has already been beaten to death many times:
 
Unless the long-term play is to bootstrap their way out of open source software, and removing the modified versions in the interim would set them back.

Which may sound like a hypothetical, but that's kinda what happened with Mobileye and Tesla. I realize Mobileye is not open source, but conceptually it's the same situation.
 
Say your inference computer can process N weights per time slice
You divide the problem into 4 steps of size A,B,C, and D
You have M pieces of training data
They are subdivided into sets for each problem step: a,b,c,d
For ease of discussion, assume all sets and NN are the same size: A=B=C=D=N/4 and a=b=c=d=M/4
When you train, you run training 4 times, once for each sub-step. Say it takes 100 million rounds to get a good output.
Total training: 4 steps * 100 million * M/4 cases * N/4 weights = 1/4 * 100 million * M*N or
25 million * N * M

Then you realize breaking the problem up into discrete steps loses a lot of context, and you would be better off with one full-sized net training on all the data.
1 step * 100 million * M cases * N weights = 100 million * N * M
Presto, you just quadrupled the amount of training compute needed.
HOWEVER!
Remember how your training data was case-specific and you needed 100 million runs of that subset of cases, on a subset of the full NN, to get good results? Yeah, that didn't go away. You are now tweaking 4x the weights each run, and may need to run each test case 4x (or 16x) more times to move the parameters sufficiently (due to the adjustment coefficient changing and the number of parameters impacted per run).
Say it's only 4x more: now your compute requirements are 16 times greater than before. If it's 16x, that's a 64x total increase.
Even at 4x due purely to unification, that's a lot more compute for the same training speed, and each additional case needs 4x more compute than before. Plus, you can no longer split the task into four independent training clusters.

Time to enter the Dojo.
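The arithmetic above can be reproduced in a few lines. Symbols follow the post: N total weights, M training cases, R rounds per sub-problem (100 million above); the constants cancel, so compute is tracked in units of R*N*M:

```python
# Reproduce the training-compute ratios from the post above,
# in units of R*N*M (rounds x weights x cases).

def split_training_compute(steps: int = 4) -> float:
    """steps sub-nets of size N/steps, each trained on M/steps cases."""
    return steps * (1 / steps) * (1 / steps)  # = 1/steps

def unified_training_compute() -> float:
    """One full-size net (N weights) trained on all M cases."""
    return 1.0

base = split_training_compute()      # 0.25 -> the "25 million * N * M" above
unified = unified_training_compute() # 1.0  -> the "100 million * N * M" above

print(unified / base)       # 4.0: unification alone quadruples training compute
print(4 * unified / base)   # 16.0: if convergence also needs 4x more rounds
print(16 * unified / base)  # 64.0: if it needs 16x more rounds
```

Hence the 4x / 16x / 64x figures in the post, before even counting the loss of the four independent training clusters.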
 
So the idea that they'll somehow magically add a ton of functionality that currently does not exist at all, and ALSO reduce required compute by at least 50% to fit back in a single node (since you need redundancy, per the AI Day presentation, to run without a human in the car), is magical thinking unsupported by any evidence.
Setting the bar at full lock-step NN redundancy may not be a valid limiting criterion.

Setting aside the FSD computer, existing cars lack hardware redundancy; steering racks lost their redundant control circuitry during the chip shortage, for example.
The HW3 computer itself lacks full camera input redundancy. A-B swapping or injected test data can reveal chip issues, and a limp-to-safety NN designed around the still-functional hardware could be used instead of the full version.
 
Time to take the post to the relevant thread... nice post, though.
 
Also, it's one hell of a milestone that it can even be a debate. Tesla aren't claiming they made a popular EV, but a popular car, and nobody finds that strange any longer.
I think it will be a fairer comparison when Tesla are selling in serious numbers in India and Africa. Toyota have had a long time to build out a global sales network; Tesla is still entering new markets. Also, I have no doubt the poor sods in Toyota's PR department have been furiously emailing every publication they can find to try to talk down Tesla and talk up Toyota. I guess it makes a change from lying about 'self-charging' hybrids, or pretending they have new unobtainium-powered batteries coming any day now...
 
Along with @Knightshade's articles: Panasonic's 4680 is a different design from Tesla's. Theirs uses 5 tabs, and Panasonic does not have the DBE tech or the other improvements.
At the end of the day, isn't 4680 technically just a size in millimeters that could have any sort of chemistry/structure inside?
 

68 miles of underground tunnels and 80+ stations approved in Las Vegas.

Las Vegas Loop has had over 1.2M passengers since operation started 2 years ago. Also, max daily capacity so far has been 32k passengers.
 
At the end of the day, isn't 4680 technically just a size in millimeters that could have any sort of chemistry/structure inside?
Yes, and it doesn't require DBE either; DBE just dramatically reduces the manufacturing footprint and improves throughput.

AFAIK the real big deal with 4680 is the tabless design for efficient electron transport; hence it runs cooler, expands less, degrades less, and delivers more power.

Nice recovery from open, didn't see any specific news for that, but I welcome it...