Welcome to Tesla Motors Club

TSLA Market Action: 2018 Investor Roundtable

Status
Not open for further replies.
I remember hearing that early train gauges were based on cart widths, cart widths were based on grooves in the road, and the earliest grooves were set by the Romans. Essentially, a technology built in the 18th century relied on standards set the best part of 2,000 years ago.

And according to snopes it's not entirely bullshit.

The link you included says, in big bold blue letters, "FALSE".
 
And how would that work at 3 in the morning on the factory floor when Elon feels like sending out a tweet? Every device he has access to would need to route all his communications to someone else to review before it goes out to the world. I just don't see a realistic practical solution.
How about an app that delays your tweet for 15 minutes and requires your future self's approval?
 
Good god, you're all light-footed. My long-term averages for my '16 MS 60 are:
summer - 328 Wh/mile (204 Wh/km)
winter - 451 Wh/mile (280 Wh/km)
I do drive a lot in the inner city, in rush hour, 5 miles/hr traffic.
The Model 3 is more efficient than the S or X. I do mixed highway/city, but mostly highway and mostly at 70+ MPH. I occasionally floor the go pedal completely, but not often (just a regular LR here, not Performance). Lifetime average is 244 Wh/mi over the past ~month.
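For anyone double-checking those numbers, the Wh/mile to Wh/km conversion is just division by 1.609 km/mile. A quick sanity check of the figures quoted above:

```python
# Sanity-check the Wh/mile -> Wh/km conversions quoted above.
KM_PER_MILE = 1.60934

def wh_per_km(wh_per_mile):
    """Convert energy consumption from Wh/mile to Wh/km."""
    return wh_per_mile / KM_PER_MILE

# The three consumption figures from the posts above:
for wh_mi in (328, 451, 244):
    print(f"{wh_mi} Wh/mi ~= {round(wh_per_km(wh_mi))} Wh/km")
```

This reproduces the 204 and 280 Wh/km figures quoted for the MS 60, and puts the Model 3's 244 Wh/mi at about 152 Wh/km.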
 
Clarification: when I say FSD, I mean the software that I purchased and whatever features are included as they roll out over the next several years, including the new AI hardware at 1,000 fps in 2019-ish. So far, Summon is about all FSD does, but that's about to change in V9 (to some degree, TBD). Here's the explanation in EM's own words: Tesla’s version 9 software update is coming in August with first ‘full self-driving features’, says Elon Musk

Please don't hit me for going off-topic. I think this will be the topic here soon, as I forecast $TSLA. Just thinking ahead.

Well, that is either EAP (Enhanced Autopilot), or you may have ALSO purchased FSD, for which there is nothing to show to date. Summon is enabled by EAP for everyone who has it (almost everyone; there are legacy Model S with, I think, V2.0 AP that don't do Summon).

V9 of EAP should bring more features to version 2.0 (and 2.5) AP hardware, which has been lacking.

And I think only people who paid $5K for EAP AND $3K for FSD (which does nothing at the moment) are apparently going to get the newly reported hardware, said to be 10x better than current hardware at visual interpretation, "for free". At the moment, though, that hasn't been confirmed.
 
Seems $300 is pretty solid. If there are no more drops by end of today, I'll start buying back again in after-hours.

Yesterday was the first day with a serious indication that TSLA may have bottomed out from this nonsense dip.

Yup, we're seeing the same thing from (your) technical and (my) gut perspectives. I might not wait for close, actually; you just convinced me we're spot on.
 
If you want your money to double in AMZN, don't hold your breath. Going from $1 trillion to $2 trillion market cap will be almost impossible.

Comparatively, going from $50 billion to $100 billion will be very doable for Tesla if they execute.
Exactly. Did you know that Apple has added 1.5 Teslas of market cap since they hit $1T a couple of weeks ago?
 
Oh... I assumed Summon was part of FSD (it's driving itself, no?). I have both EAP and FSD, so it's hard to separate, and it's not clear on several other threads either. Anyway, watch for FSD V9 coming soon, then the stock bump.

He's right. I have EAP but not FSD, and Summon works for me; I've actually used it for real, not just for show. :)
 
This was shared with me today by a former Tesla bull who became disenchanted with EM/Tesla/TSLA and moved his TSLA money into other stocks... I still hold a good-sized position in TSLA, but it does make one understand why diversification is a good idea:


Arun Chopra CFA CMT on Twitter

OK, fine, but given slightly different parameters, $TSLA could have been much higher and $AMZN could have flatlined.

It's all very easy in hindsight.
 
This was shared with me today by a former Tesla bull who became disenchanted with EM/Tesla/TSLA and moved his TSLA money into other stocks... I still hold a good-sized position in TSLA, but it does make one understand why diversification is a good idea:


Arun Chopra CFA CMT on Twitter

Ah, the good ol' "cherry-pick the start date" technique...
 
This was shared with me today by a former Tesla bull who became disenchanted with EM/Tesla/TSLA and moved his TSLA money into other stocks... I still hold a good-sized position in TSLA, but it does make one understand why diversification is a good idea:


Arun Chopra CFA CMT on Twitter

It's true, but it's all about framing. It might look very different over the next 5 years, and you could have chosen some time period where TSLA share-price growth far exceeded AMZN's.

TSLA is still a young company; AMZN is much more mature, even though I agree they still have some really huge growth potential ahead of them. What I've read about the conditions for workers in AMZN facilities would worry me a lot as an investor.
 
Apple sells a bunch of phones and tablets, and they get to $1 trillion. Amazon will sell everything, possibly including healthcare. $2T does not seem like an issue.

But Amazon doesn't make much money selling stuff as a retailer.

They make money with services.

I don't think Amazon will have massive market share as a healthcare provider.
 
After KrispyKreme's technical assault, I decided I need to understand the electronics that handle the AI. So I started searching two days ago, and today I finally found the thread with the AP circuit board teardown and subsequent discussion. Then I took a deep dive into Nvidia's Parker (TA795SA-A2) SoC architecture and the discrete Pascal GPU (GP106-510-KC) used alongside it. It was a deep dive, but I came out of it understanding that it is possible for Tesla to drop in a chip and enable Full Self-Driving. I will try to be as non-technical and easy to understand as possible.

There were many experts chiming in, both on this forum (for the sake of the person doing the teardown, I will not link to it) and on Reddit's thread. However, not many have all the specialties required to see the full picture. I happen to have been an ASIC designer, a firmware engineer, and a machine-vision application engineer, and in a jam my managers have forced me to solder and troubleshoot PCBs, so I know a bit of that too. The only disciplines I do not have are chip layout/routing and PCB layout/routing. (Yes, I jump ship every 3 years; jack of all trades, expert in none.)

Here's my guess at how they use the hardware, based on the architecture alone.


Nvidia's Parker general-purpose SoC is probably what Tesla intends to replace. Contained within are a multipurpose ARM CPU and a small piece of its discrete GPU. These GPUs have a bunch of units that process pixels in parallel and shove them into what Nvidia calls Tensor units; Tensor units are programmable and can be repurposed for other operations. I believe these are used as the "neurons" in the neural network for decision making.
Actually, the Pascal GPU is the most likely to be replaced if only one Nvidia element is replaced. However, I would not be surprised if they replace both the Parker SoC and the Pascal GPU with their ASIC, if it has ARM core(s) or similar to handle the various support tasks necessary to keep the NN logic running.
Some GPU parts not needed for AI

Nvidia, probably in a rush to bring a product to market, did not really design a GPU specifically for neural networks; instead they took their gaming discrete GPU, stuffed it with some control logic, and called it a day. The processing part of their GPU has a lot of waste. Things made specifically for gaming and displaying images can probably be taken out completely. Also, all the calculations can probably be narrowed down to int8 once the neural network is sufficiently trained and the results can be locked down. For simple pre-processing, drawing contours, and recognizing an object in an image, an 8-bit black-and-white image will suffice, and Tesla uses the red spectrum to do that. Currently, many calculations pass through 16-bit and 32-bit floating point (most likely necessary for training the neural network). These are very expensive operations and take up a lot of space.
While it's true that you could simply remove much of what makes a GPU a GPU and use it as a more power-efficient NN processor, that's not the optimal solution, and it doesn't sound like the path Tesla took. It sounds like they built a design tailored not just to NNs in general but to the type of NNs they are going to run. A dedicated NN processor design will have very little in common with a GPU design, other than the broad strokes of processing a lot of data in parallel very quickly.
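To illustrate the int8 point above: once training is done, fp32 weights can be mapped to 8-bit integers using a scale factor, cutting storage 4x and letting the silicon use much smaller integer multiply-accumulate units. A minimal sketch of symmetric per-tensor quantization (purely illustrative; Tesla's actual scheme is not public):

```python
# Minimal symmetric int8 quantization sketch (illustrative, not Tesla's scheme).

def quantize_int8(weights):
    """Map fp32 weights to int8 using one per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate fp32 values from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.08, 0.90]   # pretend these are trained fp32 weights
q, scale = quantize_int8(weights)
print(q)        # -> [42, -127, 8, 90]
print(scale)
# Each weight now needs 1 byte instead of 4, and inference multiplies run on
# cheap int8 hardware; the small rounding error is usually tolerable after
# training has locked the network down.
```

The rounding error per weight is bounded by half the scale factor, which is why this only works well once the network is "sufficiently trained and locked down", as the post says.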
AP 2.0 Model S, AP 2.0 Model 3, AP 2.5 Model 3

The Model S teardown shows it has 1 Parker general-purpose SoC and 1 discrete GPU. The Model 3 teardown from Munro shows 1 Parker and 2 discrete GPUs, so there's some upgrade there. Then there's potential AP 2.5 hardware showing up, where extra connectors to a potential new board are present. In the future there may be two boards linked together to perform Full Self-Driving. My own guess is that it will evolve to two Nvidia boards in parallel for a while, until Tesla has finished designing and testing their own chip. Both chips seem drop-in replaceable, as they both appear to sit on MXM. My guess is there's a strong chance Tesla replaces both the general-purpose SoC and the discrete GPU.

I personally haven't done any calculation of how many chips are necessary to achieve full self-driving. Potentially the AP 2.0 Model S might have some trouble, but Tesla could just tape out a chip on a 7nm process for the older Model S, which effectively doubles the number of operations it can do. But this probably won't be possible if we stick with Nvidia solutions in the interim.
While building the upgrade as an MXM module is possible, I suspect they'll actually just swap out the whole module, as it would probably be easier and faster and require less training of service techs. Possibly they keep the old ones as spares for non-FSD cars, or perhaps they remanufacture them at a central location (swapping out the MXM board for the upgraded hardware) and then use them to upgrade other cars to FSD. I also expect they'll only need one ASIC per vehicle; they will have sized it appropriately to begin with. Even at 14-16nm, a dedicated NN processor should have no trouble achieving the needed performance while still being small enough for high wafer yields. Going to 7nm would mostly just lower per-unit production costs by getting more dies from a wafer, though it would also improve power/performance.
COST

However, the question is: is it worth it? Nvidia is forging ahead with their new PX 2 platform. More power consumption and more heat generated, but it would save Tesla the R&D cost, as well as the design and tape-out costs plus equipment for analysis. I wouldn't be surprised if it costs $1-2 million to tape out the first batch of chips (plus ~$3 a chip), plus a chip design team of around 20 people x $100k salary (probably $200k+ if Tesla is using the best) x 2 years of time, plus equipment. On top of that, taping out is a very rigid process and cannot be modified much on the fly. There are only so many engineering change orders you can do before mass production.

What is so bad about sticking with Nvidia and just increasing cooling and power consumption (which was the reason KrispyKreme was attacking Tesla's automated-driving electronics)? That reasoning is lost on me.
While the upfront costs are probably at least several million dollars (a couple million initially and then a million or so per respin), Tesla can likely keep using this ASIC design for a decade or more, and they will recoup their costs and then some quite easily.

Keep in mind that the desktop GTX 1060 (one of several variants of the GP106 GPU) still sells in the neighborhood of $200-400 retail, and Nvidia isn't in the business of losing money. The MXM variants are harder to come by and are often $100 or more pricier, just because they can be. Some of that is due to the high cost of RAM (thanks in part to collusion among RAM manufacturers to keep supply limited), some to other parts necessary for the GPU board; these costs are unlikely to get more than a little cheaper for the Tesla ASIC (possibly less RAM needed, possibly less power regulation needed). After dicing the wafer and removing bad dies, Nvidia's likely per-die cost for the GP106 is somewhere below $30 (the best I found on short notice was pricing from 2016 that put it just over $30; yields should have improved since then). RAM costs are probably around $50-70, and the rest of the hardware is probably under $50. So per-unit cost is probably $100-150 or so.

That may not sound like a lot of room for cost savings, but I'd bet they can save at least $10-15 per ASIC (since it is likely smaller, yielding more usable dies per wafer), another $25-35 on RAM (using less of it), and another $10-20 on the rest of the supporting circuitry (using less power, etc.). Shaving off ~$50 before accounting for Nvidia's profit margin means they can probably break even on the project after only 50-100k units (assuming $2-5 million in initial costs for tape-out, respins, etc.). Depending on how much margin Nvidia was taking on top of that, they could break even earlier. Perhaps in a worst-case scenario they break even at 500k units, if they only save $10 between Nvidia's margins and everything else. If they plan to ship millions of cars a year and use the design for years, this is a no-brainer long-term cost reducer.
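The break-even arithmetic above is straightforward: one-time engineering (NRE) cost divided by per-unit savings. Using the post's own assumed figures (not confirmed numbers):

```python
# Break-even units = one-time ASIC costs / per-unit savings vs buying Nvidia.
# All dollar figures are the post's assumptions, not confirmed numbers.

def break_even_units(nre_dollars, savings_per_unit_dollars):
    """How many units shipped before the one-time cost is recouped."""
    return nre_dollars / savings_per_unit_dollars

print(break_even_units(2_000_000, 50))   # low NRE, ~$50 saved: 40,000 units
print(break_even_units(5_000_000, 50))   # high NRE, ~$50 saved: 100,000 units
print(break_even_units(5_000_000, 10))   # worst case, ~$10 saved: 500,000 units
```

Even the worst case of 500k units is only a year or two of planned production volume, which is why the post calls the project a no-brainer long-term.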
 
Note that on the conference call they mentioned that they already have functional, tested, compatible field-test units, and have successfully tested them on all vehicle variants. This is what Pete Bannon said on the CC:

"My team is leading currently the Hardware 3 development. The chips are up and working, and we have drop-in replacements for S, X and 3, all have been driven in the field. They support the current networks running today in the car at full frame rates with a lot of idle cycles to spare. So, I think we're all really excited about what Andrej and his team will be able to do with this hardware in the future."
This suggests they are at a very advanced stage:
  • They have already taped out the layout a couple of times and bootstrapped the AI chip for a target process, for example 14nm. They know who is making the chips and have probably negotiated the exact volume pricing as well.
  • They have developed and tested the glue logic, the firmware, the interfaces, and the host CPU software (x86 - see below).
  • They have a drop-in computing blade for all relevant vehicle platforms.
Note that they have very narrow compatibility constraints, because they have full control of the entire software environment, which probably sped up the R&D and productization process significantly; just 2-3 years since late 2015 is super fast for an entirely new chip.

My other guess is that they eliminated the ARM aspect of Nvidia's blade entirely and are interfacing the x86 host CPU (which I suspect runs the main vehicle-control loop and Autopilot logic) to the AI chip directly. But this is very speculative, based only on the probable design of their system, and my guess could be wrong: it would cost Tesla very little to license a generic ARM core and integrate it into their AI chip (Pete Bannon has done that at Apple), but maybe they avoided even that step.

They're most likely not even running the chips at full potential performance yet, as the first couple of iterations of a design usually have yield issues, and it takes a couple of respins to nail it. Though I wouldn't expect a doubling or anything here, just another 5-20%.

As for switching from ARM to x86, I wouldn't bet on it. Maybe from ARM (cheap licensing) to RISC-V (zero-cost licensing), but going to x86 doesn't make a lot of sense when you're already building custom silicon. They can put ARM or RISC-V core(s) on the ASIC and eliminate both the Parker SoC and the Pascal GPU, but if they go x86 they must still have two separate devices (or pay even more for custom silicon from AMD with NN logic embedded in an x86 SoC; not impossible, but likely not optimal here). Plus, ARM or RISC-V most likely have more than enough performance for the task of keeping the NNs fed and operating; x86 is overkill.

The x86 CPU in the newer AP HW cars is, I believe, running the UI/UX, not the AP decision making. I think that's still happening on the Parker SoC. It's possible Parker isn't enough for FSD decision making, but that doesn't mean they would move to x86; you can get higher-performing ARM cores, and they may move that into the NN ASIC itself anyway.
 
I remember hearing that early train gauges were based on cart widths, cart widths were based on grooves in the road, and the earliest grooves were set by the Romans. Essentially, a technology built in the 18th century relied on standards set the best part of 2,000 years ago.

And according to snopes it's not entirely bullshit.
Cart widths were set by having two horses/cows/goats/oxen walk next to each other...
 
Agree, as a 'weak' long ;)
I may add software practices and logistics to their weak points. The delivery rush did cause some pissed-off customers. We learned on Twitter that the captain imposed an iron rule on quality control before delivery. This is definitely the right thing to do and positive long-term, though it may cause lower delivery numbers short-term.

All these things are not easy to get right. But they have examples to learn from, existing best practice to follow.

Inventing the future, as they are doing now, is much harder.

Don't forget communications with customers, although this becomes markedly easier as their delivery-logistics system begins to take hold.
Right now it's still all over the place (I myself have 2 different stories from 2 separate delivery specialists)... still growing pains.
 
It's on sale; couldn't wait. Will get more if it's lower Tuesday, then I'll unload a bit on the way up. I place orders in advance; the action is too fast and I don't watch the candles. All gut + knowledge of Tesla/M3 here, but I really did expect lower prices this week.

[attached screenshot: chart, 8-31-2018]


I too thought TSLAP was a sure thing, but still didn't go all in because (as I've shared) we're in a delicate market overall, plus trade wars, earthquakes, meteors, EM's health... What do you know, his health did come up.
 