Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I read in the other Semi thread that Tesla recently bought several diesel Semis. Not good news.
Please prove me wrong.

I took the bait and looked in that thread for the reference linked above. I had to scroll back a page to find it, posted by someone who has consistently been championing diesel trucks in the Tesla Semi thread. Not that this is a bad thing: diesel trucks are the predominant mode of transport for goods worldwide, as are diesel-powered trains.

This ends the TLDR section.

Perhaps you are unaware that Tesla manufactures automobiles? A LOT of them. People don't come to the factory to pick them up; rather, Tesla has a novel approach of bringing the car to a location near the buyer.

Did I mention how they make a bunch of these cars?

The process of transitioning to renewable energy is more than merely throwing a switch; it will take time to accomplish. Look up the definition of "transition" if that needs further explanation. Many, including Elon, believe it will take decades to achieve. I've even heard it said, regarding the manufacturing aspect, that "prototyping is easy, and production is hard."

Clearly, it is possible for someone who hasn't been following the company to jump to the conclusion that as soon as a prototype is developed it can immediately be mass-produced and distributed in numbers significant enough to displace all existing vehicles that do the same job. Maybe you are one of those people?

If so, an exercise in something popularly called "reality" dictates that traditional methods of product delivery will continue to be preferred until those manufacturing an EV replacement get costs, infrastructure (charging, service, etc.), and production ramped to the levels needed for it to become the dominant choice.

In the case of auto manufacture and delivery the work will be performed by truck or train, and often a combination of both. Additionally, trucks will be used to bring supplies to manufacture the autos, as well as to move the trailers from the "Warehouse On Wheels" locations to the factory docks.

A simple application of what some term "common sense" would quickly reveal that Tesla will not be able to replace its several fleets of diesel trucks in any short time period simply because a working prototype has been delivered to one customer for testing and development. Tesla is at the useful stage of R&D that necessarily precedes mass production, where bugs are discovered and improvements are made.

Yet, despite this relatively easy concept, grounded in well-known metrics covering the many aspects of transitioning from diesel to electric trucks, you avoid these considerations and ask for some other sort of proof?

Such an expectation of someone "proving you wrong" is an absurd notion.

An elementary examination of the myriad factors involved provides more than sufficient evidence that Tesla will very likely find it necessary to continue buying diesel trucks for this work for many years to come. Its production and delivery requirements are growing at a rate that exceeds all current BEV truck manufacturing capability worldwide.

Is this too complex a scenario to grok as an explanation of what prevents anyone from snapping their fingers and replacing all diesel trucks overnight?
 
ChatGPT-4 paying user here. ChatGPT saves me so much time otherwise spent googling that the $20 per month fee is a no-brainer. I use it while programming to look up APIs and code fragments that I don't know by heart. It is starting to work so well that I let it generate larger and larger snippets, eliminating more and more googling. Yes, I have to verify and sometimes I have to correct, but that is a lot less work than looking everything up online (which itself was a lot less work than looking up API docs in paper books).

I notice a similarity with FSD and Autopilot here. They also have their flaws and limitations, but that doesn't mean they're useless. Even in their current state they perform useful work, and the required supervision is less effort than doing everything yourself.

After you posted this, I signed up for ChatGPT Plus to see what it's all about (already had Grok for 2 months).

ChatGPT is very impressive, but I don't see a mass audience for the current LLM approach yet:

1) Answers are painfully slow, especially if you want any current information, since it then has to search the web (which takes 20-30 seconds)
2) It messes up visual understanding
3) I asked all my friends if any of them use ChatGPT -> No
4) The answers are typically very boilerplate, no real insight
5) I was very bullish on it for 2 days because it was impressive, but then I realized all I was doing was "testing" it out vs really using it to help or entertain me
6) For information, I'd rather just look at the source vs having a potentially wrong summary given to me
7) For current information, I'd rather scroll through X because there's nothing that compares to the creativity and chaos of human thought / memes / etc.
8) The current LLM approach is very compute intensive, so I don't see this sort of model ever becoming low latency such that it becomes a conversational buddy who can respond within milliseconds (unless there's a different approach / breakthrough)
 
Do you have a feel for how much better the subscription version is compared to the free version?

I use the free version (and Bard/Gemini) nearly daily and find them very useful. On topics I know well I always see mistakes, so regardless of my knowledge of a topic I use it more as a template to speed up and plan my own process. It's kind of funny how, when you point an error out or ask the LLM to double-check, they always profusely apologize but then provide a much better answer.
 
So here's a question for you. If you're finding mistakes on topics you're well-versed in, how are you determining what is true or false on topics you don't know? And how is information helpful when you can't tell whether it's true or false? Does it not simply muddy the waters?

OK, more than one question. I simply don't understand why anyone would purposely go to a source that they know provides incorrect information to learn about something they don't know about. It's confusing to me. It would be like reading WSJ/Reuters online articles to get accurate Tesla information.
 

The Plus version is way better in my opinion because it can search the web and also understand pictures from your camera or gallery. GPT-4's outputs are much more accurate, impressive, and informative without being wrong.

To make this relate to Tesla: I use FSD V11 every time I drive my car :) And I use FSD because I love using it. I can't live without it. I currently don't feel the same way about ChatGPT. It might be my personality, because I don't really trust second-hand information; I like looking at the source material if it's something important.
 
ChatGPT has 180 million active users. I don't know anyone who doesn't use it, but, just like your friends not using it, that's an irrelevant datapoint.

Another anecdotal datapoint: my wife and I use it daily, me for coding purposes and her for a range of things like e-mails, policy rewrites, and contracts.

Neither of us use it like Google, ever.
 
Verification.

These services write and organize better than I do so I use them as a model for my writing.
 
TLDR: Because it is contextual, each query can build on the previous one (baby and bathwater). Also, AFAIK, and I hope, the LLM isn't biased, just stupid.

I am testing chat/LLM tools: Grok (2 weeks), ChatGPT 3.5 (weeks), and 4.0 (days). They outright lie (Grok), misinform (ChatGPT 3.5), and merely apologize when caught. It is possible to use them (especially Copilot) to create Python scripts that work (PWM fan control and Fibonacci-spiral-of-points CSV file creation, for example), even though I don't know the language or have the ability to program in Python.

The Python programming experience showed me that LLM or contextual search can work exceptionally well, and I have seen statistics indicating it is having a very large impact on software-development productivity ($SMCI stock).

Often the quality of the answer depends on how well the question is posed and how the answer is vetted. Simple Python scripts are easy to verify; pasting contextual error messages back in makes debugging faster, and the answer is often complete code that can be tested immediately.
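For the curious, the Fibonacci-spiral-of-points script described above is small enough to sketch here. This is a hypothetical reconstruction, not the poster's actual code; it assumes the common golden-angle ("sunflower") spiral and writes x/y pairs to a CSV:

```python
import csv
import math

def fibonacci_spiral_points(n, scale=1.0):
    """Generate n points on a golden-angle (Fibonacci) spiral.

    Point i sits at angle i * golden_angle and radius scale * sqrt(i),
    which spaces points evenly, like seeds in a sunflower head.
    """
    golden_angle = math.pi * (3 - math.sqrt(5))  # ~2.39996 rad (~137.5 deg)
    points = []
    for i in range(n):
        r = scale * math.sqrt(i)
        theta = i * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

def write_points_csv(points, path):
    """Write (x, y) pairs to a CSV file with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y"])
        writer.writerows(points)

if __name__ == "__main__":
    write_points_csv(fibonacci_spiral_points(100), "spiral.csv")
```

A script this size is easy to vet by eye or by plotting the CSV, which is exactly the verification loop being described.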
 
With Max Pain at $195
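For readers unfamiliar with the term: "max pain" is the settlement price at which the total intrinsic payout to option holders is minimized (i.e., where option writers lose the least). A minimal sketch of the calculation, using made-up open-interest figures clustered around $195 rather than real TSLA chain data:

```python
def max_pain(call_oi, put_oi):
    """Return the strike minimizing total intrinsic payout to option holders.

    call_oi and put_oi map strike -> open interest (contracts). The
    per-contract multiplier (100 shares) scales every candidate equally,
    so it is omitted.
    """
    strikes = sorted(set(call_oi) | set(put_oi))

    def total_payout(settle):
        calls = sum(oi * max(settle - k, 0) for k, oi in call_oi.items())
        puts = sum(oi * max(k - settle, 0) for k, oi in put_oi.items())
        return calls + puts

    return min(strikes, key=total_payout)

# Hypothetical open interest (NOT real TSLA chain data):
calls = {185: 500, 190: 1200, 195: 3000, 200: 2500, 205: 900}
puts = {185: 800, 190: 2000, 195: 2800, 200: 1100, 205: 400}
print(max_pain(calls, puts))  # -> 195
```

With these invented numbers the minimizing strike happens to be $195; max-pain trackers run the same minimization over the actual open-interest chain.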
 
You're assuming that and jumping to an unsupported conclusion. Tesla made no such statement about when the Model Y will be updated at Shanghai and/or Berlin. There is no actual connection between the Model Y and Model 3 refresh programs except in your mind.
1. Rumors that Shanghai will switch over to refreshed Y this year.
2. Tesla states via this "email" no new refreshed Y this year in the US.
3. Hmmm, just like the Highland Model 3 strategy.

You can play semantics games all you want; I'm just stating, IMHO (there, fixed it), that the debuts appear to be similar.
 
I don’t understand what point you’re trying to make.

It’s good the email got leaked about the refreshed Model Y not being available in NA in 2024? Or it’s bad it got leaked? Or something else?
Who cares if it was leaked? It is the subject of the email. One can surmise, guess, or have an opinion that the refreshed Y will follow the Highland 3 debut: overseas first, then North America.
 
Sorry to still be confused; I'm still trying to understand. How can you use it for verification if you don't know whether what it's saying is factual? For topics you know about, OK, I guess. Though why do you need verification of stuff you already know? Isn't knowing, knowing? And using it for verification on topics you don't know about seems impossible, since you don't know whether it's accurate. I must be missing something, perhaps because I've never used it. I don't even know how it would be helpful to me in my life.

I do understand your point about writing style, and organization within that writing style, being helpful in some situations. Do you just use the framework of the writing and pop in your own words and data? Isn't that a form of plagiarism?
 
Selling TSLA to buy BTC. I suspect that many Tesla investors are also Bitcoin investors. (I am.) Wouldn't surprise me if some have been selling some TSLA to buy more BTC.
I know you are joking, but Elon has just sent up the Bat-BTC signal.
My mind goes to Dogecoin...

Or it could be Tesla: Elon is frustrated that nobody has cracked his t-shirt puzzle. Or that nobody cares enough to even talk about it.

My theory is that the only gigacastings on Redwood will be the unboxed sides.
 
Thank you. Still don’t get it, but I appreciate your attempt to unconfuse me.
 
Interesting dichotomy: they criticize Elon and yet state that Tesla would not be where it is without him.


CRAIG IRWIN: No. I think Musk is essential for the valuation. He's essential for the equity following of retail investors and he's been the visionary. You talk to senior executives out at Tesla, senior engineers out of Tesla and they say Musk is absolutely impossible to please. He is unstoppable. He will not take no for an answer. And that's how he's getting these tremendous results years faster than anyone else in the industry.

He's a very demanding CEO and extremely intelligent about the way he looks at some of these fundamental problems that people just assume are unsolvable, or assume the conventional solutions are the right ones. Musk is Tesla; that is the challenge here over the next many years. How do you backfill him? I think if Musk were gone from Tesla, the valuation would change radically in a very short period of time.
 
Obviously, I do care about people within Tesla leaking information that the company doesn’t want leaked. Those people should be found out and punished.

In this case I didn’t understand what you were saying about any of it and I still don’t. I was asking for you to restate your position/opinion/speculation/criticism in a manner I could understand.
 
I doubt the way I do it crosses any lines. It doesn't really matter, though; I'm not publishing anything, it's just intercompany communication. Regarding verification, I just use Google to double-check any information I'm not sure of.
 