Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

One less thing to deal with…

Odds on the media reporting on this now that it's over, as opposed to the coverage when it started?
 
Yeah, Tesla should be worried:
The goal with EyeQ Ultra is to provide a package that can go into robo-taxis and regular consumer vehicles in the 2025 timeframe, at a total cost of $5,000-$10,000 and a system-level power draw of around 100 W.
For reference, Tesla claims 72W on their HW3.0 computer with 2 FSD chips (36W each).


The EyeQ Ultra is supposed to be as powerful as ten EyeQ5s, but in a single chip:

The Mobileye CEO says that both TOPS and TOPS per watt are misleading measures of their chips' performance... 🤷‍♂️
 
I don’t doubt that their system, which might go into production in 2025, will beat Tesla’s 2019 HW3. But it’s not as if Tesla intends to stand still. Having a large, high-quality dataset of varied and tricky situations is probably far more important than raw compute.
 
I do think they're ahead of a lot of other people, but I do wonder why they keep peddling LIDAR and radar if their main driving system is pure vision.

Sure, I understand that it's hard to have a fully redundant system without duplicating every camera, and it seems they are looking to use LIDAR/radar for redundancy. But LIDAR/radar can never match vision at providing context, so it's a fool's errand anyway.

But anyway, there are better topics than this investor topic to discuss MobilEye/LIDAR.
 
These guys are still stuck on traditional CV: he talks about an “algorithm” to label pixels of road surface, another “algorithm” to generate 3D point clouds, then yet another “algorithm” to identify objects in them…

If these “algorithms” are neural nets, their chips are clearly underpowered. If not, good luck with that dead end. The past decade has already taught us where CV is heading; anyone who disagrees needs to go win the ImageNet competition first.

To me it sounds like they are targeting L4 on clearly marked highways only, not even attempting to solve generalized L5 FSD.

tl;dr: Not competition for Tesla, not even close.
 
Tesla does not carry much debt for a company of its size. If they were to paydown debt, it would likely be the $1.8B Credit Agreement but my hunch is they won't.
This is, in my opinion, the most serious analyst miss, for both bulls and bears.
For some reason they seem to misunderstand FCF and every liquidity measure.
Tesla has regularly explained its cash conversion cycle, that staple rating-agency calculation.
I don’t know, but I think there are two non-nefarious reasons. First, in general people still believe that Tesla is profitable only because of subsidies. Second, gradual loss of market share is assumed, since traditional OEMs are so much better at manufacturing /s

The core problem apart from those is that keeping up with Tesla business is hard work. Gigapress, direct sales, structural battery packs, octovalves…none of those are easy.
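Since the thread keeps coming back to Tesla's cash conversion cycle, here is a minimal sketch of the standard rating-agency calculation; the day counts below are made-up illustrative inputs, not Tesla's actual figures.

```python
# Cash conversion cycle (CCC): how many days pass between paying
# suppliers and collecting cash from customers. Negative is good.

def cash_conversion_cycle(dio: float, dso: float, dpo: float) -> float:
    """CCC = Days Inventory Outstanding + Days Sales Outstanding
             - Days Payables Outstanding."""
    return dio + dso - dpo

# Hypothetical example: inventory turns in 15 days, customers pay
# almost immediately, suppliers are paid on ~60-day terms.
ccc = cash_conversion_cycle(dio=15, dso=3, dpo=60)
print(ccc)  # -42 -> cash arrives well before suppliers are paid
```

A negative CCC means growth generates cash instead of consuming it, which is one reason the usual liquidity screens mislead here.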

Several of us, including @The Accountant, are quite adept at finding reality in a wealth of dross.
Moral: it’s not hard to stay long when you know what reality is. If reality changes, we’ll see it quickly.
 

Last I heard, MobilEye does not know of a way to achieve the required safety levels through pure vision, hence they plan to combine a separate vision system and a separate LIDAR system.

Their belief is that if their vision system makes a mistake once every 1,000 hours and their LIDAR system also makes a mistake every 1,000 hours, their separate vision + separate LIDAR system will only make a mistake once every 1,000 * 1,000 = 1,000,000 hours 🤷‍♂️


Watch from 56:10 to 1:00:20
 
Given the rumors of imminent start of production in Texas, I thought I would have a look at the drone flyover videos to see if the Texas factory looks similarly ready as the Shanghai factory at the time of start of production.

This is giga Texas, yesterday January 4:
[image: Giga Texas drone shot, January 4]


For comparison, here are some shots from Wuwa from October 15, 2019, the month giga Shanghai started production:
[images: two Giga Shanghai drone shots]

Shanghai factory looks a lot more finished, so I would be a bit surprised to hear Texas starting production this week.
 
Maybe. There would have to be a whole lot of recycled cell phone and laptop batteries for it to be very significant. With EV sales growing exponentially, recycling batteries from 10+ year old EVs simply will not provide a meaningful amount of material relative to production volumes that have grown exponentially for a decade or more. On the bright side, many of those older batteries will be particularly rich in certain elements (like cobalt) that have been drastically reduced in more modern formulations.

Of course, every little bit helps.
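The exponential-growth point can be put in rough numbers. The 30% annual growth rate and 10-year battery life below are assumptions for illustration only:

```python
# If EV production grows ~30% a year, packs reaching end of life after
# ~10 years come from a model year that was a small fraction of
# today's output.

def recycled_share(growth: float, lifetime_years: int) -> float:
    """Production N years ago as a fraction of current-year production."""
    return 1 / (1 + growth) ** lifetime_years

share = recycled_share(growth=0.30, lifetime_years=10)
print(f"{share:.1%}")  # ~7.3% of this year's production, at best
```

And that 7% is an upper bound, since not every old pack actually reaches a recycler.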
JB has explained the "supply" issue with recycled materials in some depth in interviews I've seen. For now, consumer electronics waste, phones in particular, is where it's at (also laptops, computers, etc.). He's a great guy, so I won't point to any particular video; just go and watch a few of his interviews and you'll see how they plan to proceed. First it is repurposing consumer electronic waste. That is much better than shipping that waste stream to India or China to be dealt with in a process that simply produces more "leakage", since the monitoring and controls on waste there aren't great. If we export waste like this, we're basically exporting first-world problems, and that is an ethical issue.

They have been working hard to create processes that let Redwood compete with the cost of mined material. Go JB go. He's addressing so many problems with one great company.
 
Their belief is that if their vision system makes a mistake once every 1,000 hours and their LIDAR system also makes a mistake every 1,000 hours, their separate vision + separate LIDAR system will only make a mistake once every 1,000 * 1,000 = 1,000,000 hours 🤷‍♂️
That is only true as long as they KNOW which one is wrong in any given case, and how would they?
 
Bwhahaha ...

Elon said: if radar and vision disagree, who do you trust?
Mobileye: well, if ONE system says there is X, we act.

Tesla, meanwhile: well... we removed radar because it caused too many false positives and too much phantom braking.

Mobileye will find that out eventually as well :D

Also: the calculation is ONLY correct if those two systems are INDEPENDENT (also highlighted on the slides). But they capture the same scenery, with the same objects, at the same time. And vision + LIDAR even use the same physical medium (light), with all the failure modes that correlate with that 🤷‍♀️
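To put numbers on the independence point: the 1,000 h x 1,000 h = 1,000,000 h figure only holds if the two systems never fail for shared reasons. A small sketch with illustrative per-hour failure probabilities (the 10% shared-cause fraction is an assumption, not a Mobileye number):

```python
# Per-hour mistake probabilities, per the claim in the talk.
p_vision = 1 / 1000   # one mistake per 1,000 hours
p_lidar = 1 / 1000

# Fully independent: both must fail in the same hour.
p_both_indep = p_vision * p_lidar   # 1e-6 -> once per 1,000,000 h

# Partially correlated: suppose 10% of vision failures (fog, glare,
# heavy rain) also take out the LIDAR, since both sense with light.
shared = 0.10
p_both_corr = shared * p_vision + (1 - shared) * p_vision * p_lidar

print(1 / p_both_indep)  # 1,000,000 hours between joint failures
print(1 / p_both_corr)   # ~9,900 hours -- two orders of magnitude worse
```

Even a small shared-failure fraction dominates the combined rate, which is exactly why the independence assumption on the slide is doing all the work.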
 
The only correct way to do redundancy is three-fold redundancy with a vote. But that means every stack must use the same technology: two vision stacks plus one radar stack will never produce three identical results, nor will you ever see one vision stack and the radar stack agree while the second vision stack disagrees.
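A 2-out-of-3 vote like the one described could be sketched as follows; the discrete "brake"/"continue" outputs are a simplification, since real perception stacks emit much richer state:

```python
# Minimal triple-modular-redundancy vote over three identical stacks.
from collections import Counter

def majority_vote(a, b, c):
    """Return the output at least two stacks agree on; None means no
    quorum, i.e. fall back to a safe action."""
    winner, count = Counter([a, b, c]).most_common(1)[0]
    return winner if count >= 2 else None

print(majority_vote("brake", "brake", "continue"))  # brake
print(majority_vote("brake", "continue", "swerve"))  # None -> safe fallback
```

The three-way disagreement case is the post's point: with mixed sensor types you would land in the no-quorum branch constantly, so the vote only works across identical stacks.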
 
Here is the latest:

I am not impressed. In the drive they showed, the car was not consistently centered, and to be frank (pun intended) they picked very easy scenarios. And the way they talk lacks anything that would make talent want to join them, compared to Tesla's AI Day.

At 7:50, the question from Herbert Diess is lol. Would Elon ever ask that kind of question? And the answer is also so weird. What’s that all about?

At 9:58 he says it would be impossible to drive without the map. I think Tesla can do it without one.

At 13:05 you can see cars flickering out as they are occluded, showing they don’t have memory in the neural network the way Tesla demonstrated during AI Day. Nor do I think they are doing camera fusion in the neural network, as the system struggles with larger vehicles spanning multiple cameras a few seconds later.

Calling their camera system SuperVision is pretty lame, given that machine learning already uses the term supervised learning and the driver is supposed to supervise the system in Level 4.
 
My guess is that the system will struggle with the same things Tesla FSD has struggled with over the last year, such as unprotected left turns.
Bjorn Nyland bought a Model 3 in Thailand. It wasn't properly configured for Thailand at first (it was a Hong Kong import via a big dealer).

Even so, Autopilot was working without maps. I can't remember about FSD (unlikely, I guess).