Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Luminar’s largest customer: Tesla


spacecoin

Tesla CEO Elon Musk has said that lidar sensors are a “crutch” for autonomous vehicles. But his company has bought so many from Luminar that Tesla is now the lidar-maker’s top customer.

Tesla accounted for “more than 10%” of Luminar’s revenue in the first quarter of 2024, or a little more than $2 million, the lidar-maker revealed Tuesday in its first-quarter earnings report.

Luminar reported that its revenue fell 5% from the fourth quarter of 2023, which it mostly attributed to “lower sensor sales to non-automotive customers.” That drop was “offset by sensor sales to Tesla, which was our largest lidar customer in Q1.” Luminar also noted a 45% gain in revenue year-over-year.


Tesla is Luminar's largest lidar customer — TechCrunch
 
To put this in context, $2 million worth of lidar sounds like a lot, but it's not that many vehicles: only a couple hundred.

Per a quick Google search, each sensor is about $1,000, and Tesla's fleet-validation vehicles look like they're running 8 lidars on the roof rack (based on pictures like these). That gets you to about 250 vehicles.

Still, that feels like a bit more than they'd need purely for validation (it's enough to deploy a few in every major metro area in every market Tesla operates in). Perhaps it's for something more than the fleet validation they've done for years.
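The arithmetic above can be sketched quickly. All three inputs are rough assumptions from this thread (a quick Google figure, spy-photo sensor counts, and the earnings-report revenue), not confirmed figures:

```python
# Rough fleet-size estimate from the numbers quoted in this thread.
sensor_cost_usd = 1_000        # assumed: ~$1,000 per Luminar sensor
sensors_per_vehicle = 8        # assumed: roof-rack count from spy photos
tesla_q1_spend_usd = 2_000_000 # "a little more than $2 million" in Q1 2024

cost_per_vehicle = sensor_cost_usd * sensors_per_vehicle  # $8,000 per vehicle
vehicles = tesla_q1_spend_usd / cost_per_vehicle
print(round(vehicles))  # -> 250
```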
 
An article posted today on the China Daily website states that, according to sources, Tesla will test a robotaxi service in China... China Daily Article

 
I will throw out a random uneducated guess, because I like to hear myself talk.

Perhaps it's to train the vision system in depth perception: the lidar can automatically label how far away objects are and, as they move, calculate their velocities. Tesla can get a lot of auto-labelled training data this way?
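As a hypothetical sketch of that kind of auto-labelling (the `auto_label` helper and its inputs are illustrative, not Tesla's actual pipeline): a tracked object's lidar-measured positions in two consecutive frames yield a ground-truth distance label, and the frame-to-frame motion yields a speed label, with no human annotation needed.

```python
import math

def auto_label(p_prev, p_curr, dt):
    """Derive ground-truth labels for vision training from a lidar track.

    p_prev, p_curr: (x, y, z) object positions in metres, ego-relative,
    measured by lidar in two consecutive frames; dt: seconds between frames.
    Returns (distance, speed): range label in m and speed label in m/s."""
    distance = math.dist((0.0, 0.0, 0.0), p_curr)  # current range from ego
    speed = math.dist(p_prev, p_curr) / dt         # displacement over time
    return distance, speed

# A car 40 m ahead closes to 38 m over 0.5 s: labelled at 38 m, 4 m/s.
distance, speed = auto_label((40.0, 0.0, 0.0), (38.0, 0.0, 0.0), 0.5)
print(distance, speed)  # -> 38.0 4.0
```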
They pretty much do exactly that, as Tesla test cars have been spotted with LIDAR systems mounted on top.

Which goes to show that vision will never be sufficient for true autonomy if they have to train the system with LIDAR: even with training, vision can never be as precise as LIDAR or work in as many environments.
 
They pretty much do exactly that, as Tesla test cars have been spotted with LIDAR systems mounted on top.

Which goes to show that vision will never be sufficient for true autonomy if they have to train the system with LIDAR: even with training, vision can never be as precise as LIDAR or work in as many environments.
It's ironic: the v11 occupancy network must be trained with LIDAR (because it's trying to directly compute real-world depth information, and needs a ground-truth source), but v12 skips the occupancy network altogether, so in principle it has no need for ground-truth depth or LIDAR for training the driving task.

The irony is that it's becoming more and more evident (at least to me) that Pure Vision will not succeed in reaching L4 by itself, at least not for another 8-10 years. Adding lidar + radars to the sensor suite could likely accelerate Tesla L4 by several years, by shoring up the inherent deficiencies of pure vision, which are mostly to do with difficult environmental conditions or compromised image quality, exacerbated by the camera lenses being immovable and pressed right up against the glass (unlike human heads).

So the idea that Tesla may add lidar + radar back to HW5 next year (and I expect they very likely will for Robotaxi), perhaps with a rationalization (to soothe Elon's ego) that NHTSA and/or the competition made them do it, or that today's state-of-the-art LIDAR is somehow completely different from previous LIDAR, seems to me like Tesla's most probable and promising path forward. The downside is that it may mean the current HW3/HW4 fleet will be left out in the cold and never achieve L4 autonomy or Robotaxi capability. (Unless by some miracle Tesla anticipated this far enough in advance to e.g. design LIDAR retrofittability into HW4, but I think that's a long shot.) 8/8 ought to be interesting; I'm really curious to find out exactly how many (and which) balls Tesla is pushing to the wall!
 
To put this in context, $2 million worth of lidar sounds like a lot, but it's not that many vehicles: only a couple hundred.

Per a quick Google search, each sensor is about $1,000, and Tesla's fleet-validation vehicles look like they're running 8 lidars on the roof rack (based on pictures like these). That gets you to about 250 vehicles.

Still, that feels like a bit more than they'd need purely for validation (it's enough to deploy a few in every major metro area in every market Tesla operates in). Perhaps it's for something more than the fleet validation they've done for years.
If the volume is that small, the lidars may even be for existing data-gathering vehicles. As another poster mentioned, Tesla has had lidar-equipped vehicles for years to gather data (basically to establish ground truth for the NN training).
 
It's ironic: the v11 occupancy network must be trained with LIDAR (because it's trying to directly compute real-world depth information, and needs a ground-truth source), but v12 skips the occupancy network altogether, so in principle it has no need for ground-truth depth or LIDAR for training the driving task.
My understanding is that V12 still keeps the occupancy network; it's just that it gets feedback from the final planner action output (which makes it end-to-end). Basically, Tesla is using modular end-to-end (not a true single black box).
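If that's right, the training setup looks something like this toy sketch (all shapes and numbers here are invented for illustration; this is not Tesla's actual stack): a perception module maps pixels to intermediate occupancy-style features, a planner maps features to a control value, and a single loss on the control output is backpropagated through both modules, so perception is trained by planner feedback rather than by separate ground-truth labels.

```python
import numpy as np

# Toy "modular end-to-end" sketch: two named modules, one joint loss.
rng = np.random.default_rng(0)
W_perc = 0.1 * rng.normal(size=(4, 8))   # perception weights
W_plan = 0.1 * rng.normal(size=(8, 1))   # planner weights

pixels = rng.normal(size=(64, 4))                          # fake camera input
target = pixels @ np.array([[1.0], [-0.5], [0.2], [0.0]])  # fake demo controls

def mse():
    return float(np.mean((np.tanh(pixels @ W_perc) @ W_plan - target) ** 2))

initial_loss = mse()
lr = 0.05
for _ in range(300):
    feats = np.tanh(pixels @ W_perc)           # intermediate representation
    err = feats @ W_plan - target              # loss gradient at control output
    g_plan = feats.T @ err / len(pixels)       # planner gradient
    g_feats = (err @ W_plan.T) * (1 - feats**2)
    g_perc = pixels.T @ g_feats / len(pixels)  # gradient reaches perception
    W_plan -= lr * g_plan
    W_perc -= lr * g_perc

final_loss = mse()
print(initial_loss > final_loss)  # joint training reduced the control loss
```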
The irony is that it's becoming more and more evident (at least to me) that Pure Vision will not succeed in reaching L4 by itself, at least not for another 8-10 years. Adding lidar + radars to the sensor suite could likely accelerate Tesla L4 by several years, by shoring up the inherent deficiencies of pure vision, which are mostly to do with difficult environmental conditions or compromised image quality, exacerbated by the camera lenses being immovable and pressed right up against the glass (unlike human heads).

So the idea that Tesla may add lidar + radar back to HW5 next year (and I expect they very likely will for Robotaxi), perhaps with a rationalization (to soothe Elon's ego) that NHTSA and/or the competition made them do it, or that today's state-of-the-art LIDAR is somehow completely different from previous LIDAR, seems to me like Tesla's most probable and promising path forward. The downside is that it may mean the current HW3/HW4 fleet will be left out in the cold and never achieve L4 autonomy or Robotaxi capability. (Unless by some miracle Tesla anticipated this far enough in advance to e.g. design LIDAR retrofittability into HW4, but I think that's a long shot.) 8/8 ought to be interesting; I'm really curious to find out exactly how many (and which) balls Tesla is pushing to the wall!
I'm not too optimistic about that. Elon could have eaten some crow with the HD radar, but so far Tesla has made zero use of it (is anyone following that? Are any new cars even getting the HD radar any longer?).

If it's true that the volumes are as small as others mentioned, at most they may be used for that low-volume bespoke dedicated robotaxi that will presumably be unveiled in August. Or they may even just be for Tesla's internal data-gathering vehicles used for training.
 
To put this in context, $2 million worth of lidar sounds like a lot, but it's not that many vehicles: only a couple hundred.

Per a quick Google search, each sensor is about $1,000, and Tesla's fleet-validation vehicles look like they're running 8 lidars on the roof rack (based on pictures like these). That gets you to about 250 vehicles.

Still, that feels like a bit more than they'd need purely for validation (it's enough to deploy a few in every major metro area in every market Tesla operates in). Perhaps it's for something more than the fleet validation they've done for years.

250 vehicles would be enough for a small robotaxi fleet. To put things in context, that is about the number of robotaxis that Waymo has in SF now. If these lidars are for the Tesla robotaxi, I think that would make sense. Tesla could be planning an initial production run of ~250 robotaxis for its first ride-hailing service somewhere.
 
250 vehicles would be enough for a small robotaxi fleet. To put things in context, that is about the number of robotaxis that Waymo has in SF now. If these lidars are for the Tesla robotaxi, I think that would make sense. Tesla could be planning an initial production run of ~250 robotaxis for its first ride-hailing service somewhere.
My guess is that these lidars are primarily used for gathering ground-truth data to train the occupancy network; 8 lidars per vehicle would be massive overkill for the driving task itself. (Waymo evidently uses 29 cameras, 6 radars, and 4 lidars.) I'd guess the production Robotaxi will use just one lidar, but that there will be a small fleet of ground-truth-gathering Robotaxi-form-factor vehicles with many lidars each.
 
My guess is that these lidars are primarily used for gathering ground-truth data to train the occupancy network.

No, Elon says that Tesla no longer needs to do ground-truth validation now that they have switched to V12, because end-to-end does not use occupancy networks. End-to-end also does not have a separate perception stack that requires validation; it takes in pixels and directly outputs controls.
 
My understanding is that V12 still keeps the occupancy network; it's just that it gets feedback from the final planner action output (which makes it end-to-end). Basically, Tesla is using modular end-to-end (not a true single black box).
This makes sense. Some amount of intermediate feature engineering is still needed to construct a human-interpretable visual display, and to show things like precise distances to obstacles and potential parking spots.
I'm not too optimistic about [HW5 having lidar+radar]. Elon could have eaten some crow with the HD radar, but so far Tesla has made zero use of it (is anyone following that? Are any new cars even getting the HD radar any longer?).

If it's true that the volumes are as small as others mentioned, at most they may be used for that low-volume bespoke dedicated robotaxi that will presumably be unveiled in August. Or they may even just be for Tesla's internal data-gathering vehicles used for training.
I don't think any Teslas were ever shipped with HD radar? I suspect Tesla's current lidars are only used for data-gathering vehicles, though I do expect the production Robotaxi to contain a lidar or two. "Low-volume Robotaxi" seems like an oxymoron; Elon's goal seems to be to scale up Robotaxi production as fast as possible once FSD(U) has been achieved. (And there's not much point putting them into production before that.)

My expectation is that they will give HW5 the same sensor suite as Robotaxi, and use HW5 Model 3s and Ys to perfect FSD(U) and start the L4 approval process. As soon as they see light at the end of the FSD(U) tunnel (which I expect will take at least a year or two with HW5), THEN begin the Robotaxi production ramp at ludicrous speed. (Obviously they'll iron out the kinks in the Robotaxi production process in advance and do whatever they can to ensure a smooth ramp.)
 
No, Elon says that Tesla does not need to do ground truth validation anymore now that they have switched to V12. That is because end to end does not use occupancy networks. End to end also does not have a separate perception stack that requires validation. End to end takes in pixels and directly outputs controls.
Can you point me to exactly what Elon said about this? A monolithic end-to-end network would not need ground-truth validation for the driving task alone, true, but others have asserted that the v12 stack is still at least somewhat modular, with intermediate feature-engineered layers that still need ground truth, e.g. for constructing the visual FSD display, although the modules can still be jointly trained end-to-end for pixels-in, controls-out.
 
This makes sense. Some amount of intermediate feature engineering is still needed to construct a human-interpretable visual display, and to show things like precise distances to obstacles and potential parking spots.

I don't think any Teslas were ever shipped with HD radar?
The HW4 Model S/X definitely shipped with HD radar at launch. I'm not sure, however, whether they kept including it as time passed, given that they did nothing with it. The Model 3/Y never got it.
I suspect Tesla's current lidars are only used for data-gathering vehicles, though I do expect the production Robotaxi to contain a lidar or two. "Low-volume Robotaxi" seems like an oxymoron; Elon's goal seems to be to scale up Robotaxi production as fast as possible once FSD(U) has been achieved. (And there's not much point putting them into production before that.)

My expectation is that they will give HW5 the same sensor suite as Robotaxi, and use HW5 Model 3s and Ys to perfect FSD(U) and start the L4 approval process. As soon as they see light at the end of the FSD(U) tunnel (which I expect will take at least a year or two with HW5), THEN begin the Robotaxi production ramp at ludicrous speed. (Obviously they'll iron out the kinks in the Robotaxi production process in advance and do whatever they can to ensure a smooth ramp.)
The initial plan from a few years ago was for the Model 2 to be both high-volume and the "robotaxi". But recently the rumor mill has it that Elon would do a separate dedicated robotaxi (to be unveiled in August) and abandon the high-volume Model 2, which led to back-and-forth about Reuters lying, and eventually assurances that Model 2 development would continue.
 
Can you point me to exactly what Elon said about this? A monolithic end-to-end network would not need ground-truth validation for the driving task alone, true, but others have asserted that the v12 stack is still at least somewhat modular, with intermediate feature-engineered layers that still need ground truth, e.g. for constructing the visual FSD display, although the modules can still be jointly trained end-to-end for pixels-in, controls-out.
 
This raises the question of why Tesla nevertheless purchased $2.1 million of lidar in Q1 2024 if they supposedly don't need it. Is it just a hedge against Elon potentially being wrong about this?

My hunch is that "need" is being used in a fairly extreme way here, in the sense that I don't "need" coffee, or wifi, or underwear. I don't doubt that Tesla could eventually solve FSD(U) with pure vision, but I think they could solve it much sooner with vision+lidar+radar (which is now far more economical than when they started working on autonomy in 2016), and I suspect they're beginning to realize that.
 
This raises the question of why Tesla nevertheless purchased $2.1 million of lidar in Q1 2024 if they supposedly don't need it.

My hunch is that "need" is being used in a fairly extreme way here, in the sense that I don't "need" coffee, or wifi, or underwear. I don't doubt that Tesla could eventually solve FSD(U) with pure vision, but I think they could solve it much sooner with vision+lidar+radar (which is now far more economical than when they started working on autonomy in 2016), and I suspect they're beginning to realize that.
Exactly. It's a classic Elon non-denial denial. He's going to need something more than cameras if he's serious about autonomy this decade. It's really that simple.