
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

So the LA Auto Show starts 22 Nov. The optimal date to release the Pickup would therefore be ~20 Nov. Two weeks' notice for the launch party means invites go out ~6 Nov. Therefore Elon will be back on Twitter within 72 hours and the world will get back to normal.

I have plenty of "one more thing" ideas. My favourite being a Model 2 rolling off the back. <1% chance. I will spare you from the other ideas as they mostly involve multitudinous LIDAR up the wazoo.
I would rather not see another vehicle after the pickup. They should not show any others until the Y and at least one other are in production. Hopefully the Semi.
 
I would rather not see another vehicle after the pickup. They should not show any others until the Y and at least one other are in production. Hopefully the Semi.
I kind of feel the same way. But it would be super sweet if they showed off an early stage concept for an electric excavator, bulldozer and cement mixer, with a view to dominating construction and mining support vehicles, together with a mobile power pack station. And agricultural tractors. Churn those babies out and kill red diesel.

Caterpillar has a substantially higher market cap than Tesla; you would just need to show a credible path to taking away a good chunk of that market share to get a sustained bounce in valuation.
 
If you solve vision why do you need radar?

Because:
  • Radar sensors provide valuable, life-saving physical information that cameras don't: they can sense through ~200 meters of fog, dust, rain and snow, at night. They can often "see through" the next car in front and detect a suddenly slowing car two cars ahead. LIDAR, on the other hand, uses single-frequency photons that don't sense anything cameras and radars don't already.
  • Radar sensors are also an order of magnitude less expensive than LIDAR.
If LIDAR units cost $10 each and had a power draw of 10 watts, it might well make sense to add them like ultrasonic sensors, for redundancy. But at $50,000+ (high-end LIDARs), or even at $5,000, they'd be crowding out real safety measures.
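To put those price points in perspective, here is a back-of-envelope sketch; the ~$35,000 vehicle price is my assumption for illustration, not a figure from the post:

```python
def sensor_cost_share(sensor_cost, vehicle_price=35_000):
    """Fraction of the whole vehicle's price consumed by one sensor unit."""
    return sensor_cost / vehicle_price

# The three LIDAR price points mentioned above:
for cost in (10, 5_000, 50_000):
    print(f"${cost:,} LIDAR = {sensor_cost_share(cost):.1%} of a $35k car")
```

At $10 the sensor is noise in the bill of materials; at $5,000 it eats roughly a seventh of the sticker price; and a $50,000 unit costs more than the car itself.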

FSD sensors for volume manufacturing of passenger cars must be selected based on cost/benefit analysis, not theoretical utility.

For example there's no doubt that a second, rear facing radar, or a secondary forward facing radar with a different frequency would improve overall safety - but radar sensor units are not that inexpensive yet.

Yes, but they started out with LIDAR to race in the DARPA Grand Challenge, because ~15 years ago the only way to get a high-resolution, high-FPS 3D map of the car's surroundings was LIDAR.

That "path dependent" LIDAR accident of history turned into a design and process failure they haven't been able to get rid of yet.

Tesla's FSD efforts didn't have this historical baggage - they started from a clean slate in 2016, when Elon & his team realized that they could probably do FSD with 8 cameras, a bunch of ultrasonic sensors, accelerometers, GPS and a forward radar, hooked up to an in-car, power-efficient supercomputer they designed for the purpose.

What amazes me is that despite Elon explaining this early on, none of the other major FSD projects is following Tesla's lead; they are stubbornly clinging to their LIDAR approaches, and by now it's probably already too late.

Yes. Which makes it questionable if Elon is right.

Not only is your argument a logical fallacy, there actually is one FSD competitor who is following Tesla's lead - Intel:

Here is Mobileye's EyeQ5, for example: 24 trillion operations per second.

Intel's very latest chip might have the computing capacity - but they don't have Tesla's fleet size, nor the training data feedback loop.

All of these are essential to success if the FSD problem is "very complex" (as @ReflexFunds pointed out), requiring tens of millions of miles of training on hundreds of thousands of cars per neural network and driving software iteration, and billions of miles of training on over a million cars to reach "superhuman" levels of reliability - which I think it is.
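A quick sketch of why fleet size dominates here; the average daily mileage per car is an assumption for illustration, not a figure from the post:

```python
AVG_MILES_PER_CAR_PER_DAY = 40  # assumed fleet-wide average

def days_to_collect(target_miles, fleet_size,
                    miles_per_day=AVG_MILES_PER_CAR_PER_DAY):
    """Days a fleet needs to accumulate target_miles of real-world driving."""
    return target_miles / (fleet_size * miles_per_day)

# Tens of millions of miles per software iteration, on ~100k cars:
print(days_to_collect(10_000_000, 100_000))       # → 2.5 (days)
# Billions of miles toward "superhuman" reliability, on ~1M cars:
print(days_to_collect(1_000_000_000, 1_000_000))  # → 25.0 (days)
```

A competitor with a few hundred test vehicles would need decades to match either number, which is the feedback-loop gap being described.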
 
One argument I’m missing in the Lidar vs. camera discussion: Do you want your car to look like a police car? This (Byton) is the cleanest Lidar setup I found so far and it still looks weird:

upload_2019-11-3_9-3-45.jpeg
 
Not sure how much experience you have with lidars and snow, but lidars send out light and snow reflects light, so lidars can see snow.

LIDAR seeing snow is literally the problem that I described. Snow forms random shapes on the ground, and constantly changes even the shape of the road itself. LIDAR cannot distinguish between when it's reflecting off snow, reflecting off a tree branch, or reflecting off a person laying on the ground; there is no colour or subtle pattern contrast. Ignoring the fact that resolution is too low for trying to use a neural net to pick out shapes, particularly between raster lines - even if it wasn't, it'd be endless pareidolia.

It is not a very different problem from deciding what is road and what is grass.

Something that LIDAR isn't used for either, apart from curb detection (curbs being another thing that readily disappears in the winter).

There is plenty of testing being done on snow:

Are you even reading what I'm writing? I literally pointed out that they're testing in northern places, and pointed out Waymo's winter testing ground in Michigan. What they're not doing is operating in northern locations. All of their operations are in warm climates, because LIDAR sucks in snow, in every respect.
 
LIDAR cannot distinguish between when it's reflecting off snow, reflecting off a tree branch, or reflecting off a person laying on the ground; there is no colour or subtle pattern contrast.

Have you done any lidar point cloud filtering, or what are you basing these statements on? A person looks very different from a tree branch. Here is a video from my YouTube channel as an example; check 40 seconds into this one:

If you can pick out that it’s a person and not a tree branch, then there is a signal that a neural network can pick up.

White snow and a tree branch have very different reflectivity and very different surfaces.

For a good primer on what lidars can detect, see 10 minutes into this one:
 
Have you done any lidar point cloud filtering, or what are you basing these statements on? A person looks very different from a tree branch.

Show me where in your video you're showing, and I quote, "a tree branch... or a person laying on the ground"
Now show me where in your video you show snowdrifts and slush accumulation for comparison.

And since you want to bring up "Have you ever..." statements... while I've never worked on a self-driving car program (and I imagine that you haven't either), I have worked with point clouds, and I imagine that you haven't. My entire previous job was working with voxel data, including surface fitting (marching cubes, etc.), and I've worked with free point clouds for various home projects (photogrammetry, GIS data compression, etc.). Not super relevant to the topic of "what fallen tree branches look like on LIDAR vs. accumulated slush", but unless you've worked on autonomous vehicle winter testing, I imagine you're no closer to the relevant narrow experience subset. That leaves us both in the same position: what are the two objects' 3D geometries, and what's the difference between them when viewed as a series of raster scans?

White snow and a tree branch have very different reflectivity and very different surfaces.

That is, of course, not how LIDAR is used; if it were used that way, you'd have LIDAR picking up lane lines and the like (ever see that in a Waymo LIDAR demo?). It's a research topic, but AFAIK nobody is actually using it. And more to the point, the problem you face is that a beam reflecting off a 90% reflectivity surface lying at an angle of 10° relative to the beam path yields the same intensity value (~0.156 × attenuated strength) as a beam reflecting off a 20% reflectivity surface lying at an angle of ~51.3°.
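The ambiguity is easy to verify numerically. Below is a minimal sketch assuming a Lambertian-style return model (intensity proportional to reflectivity times the sine of the grazing angle), ignoring range attenuation and optics:

```python
import math

def return_intensity(reflectivity, grazing_angle_deg):
    """Simplified LIDAR return: reflectivity scaled by the sine of the
    angle between the beam and the surface (range/optics ignored)."""
    return reflectivity * math.sin(math.radians(grazing_angle_deg))

bright_shallow = return_intensity(0.90, 10.0)  # 90% surface, 10° to the beam
dark_steep     = return_intensity(0.20, 51.3)  # 20% surface, ~51.3° to the beam
print(round(bright_shallow, 3), round(dark_steep, 3))  # → 0.156 0.156
```

Two very different surfaces produce the same return, so intensity alone can't tell a reflective road marking at a shallow angle from dull asphalt at a steeper one.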
 
Because:
  • Radar sensors provide valuable, life-saving physical information that cameras don't: they can sense through ~200 meters of fog, dust, rain and snow, at night. They can often "see through" the next car in front and detect a suddenly slowing car two cars ahead. LIDAR, on the other hand, uses single-frequency photons that don't sense anything cameras and radars don't already.
  • Radar sensors are also an order of magnitude less expensive than LIDAR.
If LIDAR units cost $10 each and had a power draw of 10 watts, it might well make sense to add them like ultrasonic sensors, for redundancy. But at $50,000+ (high-end LIDARs), or even at $5,000, they'd be crowding out real safety measures.

FSD sensors for volume manufacturing of passenger cars must be selected based on cost/benefit analysis, not theoretical utility.

For example there's no doubt that a second, rear facing radar, or a secondary forward facing radar with a different frequency would improve overall safety - but radar sensor units are not that inexpensive yet.

Not only is your argument a logical fallacy, there actually is one FSD competitor who is following Tesla's lead - Intel:

Intel's very latest chip might have the computing capacity - but they don't have Tesla's fleet size, nor the training data feedback loop.

All of these are essential to success if the FSD problem is "very complex" (as @ReflexFunds pointed out), requiring tens of millions of miles of training on hundreds of thousands of cars per neural network and driving software iteration, and billions of miles of training on over a million cars to reach "superhuman" levels of reliability - which I think it is.
What is the reason Tesla didn't include inexpensive infrared in their camera suite? I'd have thought it a useful extra sense beyond "eyes" and radar, given it would see pedestrians obscured from vision, and engine heat signatures.
 

Observations:

1. Far fewer trucks at the docks. It looks like the initial wave was to fill all the workstations, and from here on it'll just be to restock them as needed.

upload_2019-11-3_9-11-29.png


2. Obligatory zoom-in of the Keyfob Parking Area. More cars than before. At least one appears to be unpainted, and there's a white car.

upload_2019-11-3_9-12-24.png


3. Lots of cars are scattered randomly elsewhere, however - like this one, right outside of where they drive out of the factory.

upload_2019-11-3_9-13-7.png


4. This one looks like it's undergoing offroad testing:

upload_2019-11-3_9-13-33.png


5. Construction on the battery plant continues at quite a clip. Here they're getting ready to clad a wall.

upload_2019-11-3_9-14-22.png


6. Roof cladding is underway on the other side. Note that they've now concreted the second floor here as well.

upload_2019-11-3_9-14-51.png


7. They're bridging from the battery plant to the other side. I assume for coolant pipes? They've laid out pipes on the ground in the direction of the power conversion building, although they look way too large to be power conduits.

upload_2019-11-3_9-15-24.png


8. Not sure what they plan to do with all of these. The larger ones look too large and heavily built to be ventilation (although maybe?). Some sort of liquid tankage?

upload_2019-11-3_9-17-46.png


9. And lastly, general "greening" of the plant. I'm guessing that these are seed germination blankets?

upload_2019-11-3_9-19-29.png
 
One argument I’m missing in the Lidar vs. camera discussion: Do you want your car to look like a police car? This (Byton) is the cleanest Lidar setup I found so far and it still looks weird:

View attachment 472685

I think that image doesn't even show the full ugliness; they apparently also have these side protrusions:

byton-k-byte-5.png

But I didn't raise this because I wanted to give LIDAR the benefit of the doubt and not argue aesthetics.

Also note the price effect of Byton's design: two LIDAR units on the top of the car (a forward-facing and a backward-facing one), and one on each side of the car. This quadruples the cost compared to single-unit designs, plus exposes the side LIDARs to occlusion by the numerous objects below driver eye height that might block LIDAR visibility against approaching hazards (fences, signs, bushes, etc.).

LIDAR will be one of those technologies, like physical keyboards on smartphones, that were thought to be essential for decades but will seem 'obviously superfluous' in hindsight, once a company shows how to do it right, as Apple did with the iPhone and as Tesla is doing with FSD.
 
That is, of course, not how LIDAR is used; if it were used that way, you'd have LIDAR picking up lane lines and the like (ever see that in a Waymo LIDAR demo?). It's a research topic, but AFAIK nobody is actually using it. And more to the point, the problem you face is that a beam reflecting off a 90% reflectivity surface lying at an angle of 10° relative to the beam path yields the same intensity value (~0.156 × attenuated strength) as a beam reflecting off a 20% reflectivity surface lying at an angle of ~51.3°.

Also, even high end Velodyne LIDAR units don't have the angular resolution to resolve road surface details:

upload_2019-11-3_10-53-40.png

Even if LIDAR had the ability to reliably distinguish objects based on reflectivity, there's very little resolution left to truly see road markings as they approach, due to the ever flatter angle at which the LIDAR beam hits the surface. Pixels in even a $10 2-megapixel camera are much denser in comparison - and those pixels are immensely valuable for identifying high (relative) speed threats as they develop.

This is also one of the reasons why LIDAR units have to be placed at the top of the car - and resolution of horizontal surface features still sucks.
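To make the grazing-angle problem concrete, here's a small sketch; the roof height and 0.4° vertical channel spacing are assumptions (roughly HDL-64 class), not figures from the post:

```python
import math

SENSOR_HEIGHT_M = 1.9   # assumed roof-mounted sensor height
VERT_RES_DEG = 0.4      # assumed vertical angular spacing between channels

def ring_spacing(dist_m, h=SENSOR_HEIGHT_M, res_deg=VERT_RES_DEG):
    """Gap on a flat road between the scan ring at dist_m and the next one
    out, found by intersecting each beam's depression angle with the ground."""
    theta = math.atan2(h, dist_m)               # depression angle at dist_m
    theta_next = theta - math.radians(res_deg)  # one channel higher
    if theta_next <= 0:
        return math.inf                         # next beam clears the ground
    return h / math.tan(theta_next) - dist_m

for d in (10, 25, 50):
    print(f"at {d} m the next ring lands {ring_spacing(d):.1f} m farther out")
```

Under these assumptions the gap between consecutive rings grows from well under a meter at 10 m to roughly ten meters at 50 m, while a camera's pixel coverage of the same stretch of road stays orders of magnitude denser.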

And most of the volume-production solutions I've seen for pushing LIDAR costs below $10,000 involve... a drastic reduction in angular resolution. So LIDAR will have to prove itself not just with $50k-$100k academic research units, but with the actual LIDAR units that would go into millions of cars.

This is one of the problems of LIDAR FSD approaches being 'sensor limited'. Tesla correctly went for inexpensive but numerous sensors plus superior (visual) computing capacity, relying on Moore's Law (which generally does not apply to sensors) to bail them out should they be wrong about the exact computing capacity required for FSD.

Note that there's another very successful high-tech company that broke with decades of common wisdom in their respective industry and went from an 'expensive sensors and special-purpose computing hardware' design to 'inexpensive sensors and off the shelf, redundant computing capacity': SpaceX ...
 
One argument I’m missing in the Lidar vs. camera discussion: Do you want your car to look like a police car? This (Byton) is the cleanest Lidar setup I found so far and it still looks weird:

View attachment 472685

The lidars that will be used in production vehicles are not in final form yet. No one knows if lidar will be necessary, including Tesla and everyone in this thread.

Using Lidar was not an option for Tesla. It was too expensive and too big. It won't be too expensive and too big by the time FSD vehicles are ready for mass production.
 

27 Model 3's manufactured by October 31:

upload_2019-11-3_11-28-40.png

Battery workshop progressing nicely, with roofing underway:

upload_2019-11-3_11-32-26.png

Open soil is temporarily getting covered in green geotextile, I suspect to keep the dust from the road surfaces, and to make it look nicer until fully developed:

upload_2019-11-3_11-38-4.png

Is this #28 made by October 31, or some other car?:

upload_2019-11-3_11-40-10.png

#29, #30 and #31 being supercharged?

upload_2019-11-3_11-45-52.png
upload_2019-11-3_11-54-42.png

#32 is the test unit for off-road testing?

upload_2019-11-3_11-47-58.png

#33 just coming out of the factory at 5:30? :D

upload_2019-11-3_11-49-45.png

Interesting looking trucks parked deep inside the factory at 5:42:

upload_2019-11-3_11-51-50.png

(Might just be regular vans though.)

As @KarenRei mentioned it too, the loading docks are mostly empty - likely because it's still trial production ramp-up, with much lower parts and materials requirements.
 
If you are going to question it, you have to find fault in the simple logic:
To solve FSD, you have to solve for vision.
If you solve for vision, lidar is redundant.

Posts about how good lidar is are irrelevant. To make a case for lidar you have to find something that lidar is *required* for that vision/radar/ultrasonics cannot do.

But does having lidar lead to a better system by whatever metrics you care about (safety, drive speed, accessible % of the world)? Or does it lead to faster development time? Just because others are using it doesn't mean they think vision+radar can't work.
 
The lidars that will be used in production vehicles are not in final form yet. No one knows if lidar will be necessary, including Tesla and everyone in this thread.

The AP computers used on FSD cars are not in final form right now either (pick whether SW or SW + HW will change). Every current human driver is proof that lidar is not necessary given vision and sufficient computing power - that's how drivers drive today. So we know lidar is not necessary; the only question is the timeline/cost of a solution that does not require it.

Using Lidar was not an option for Tesla. It was too expensive and too big.

Sure, Tesla putting lidar on all AP cars would have bankrupted the company. That doesn't mean lidar is a good engineering decision.

It won't be too expensive and too big by the time FSD vehicles are ready for mass production.
Umm, aren't you contradicting your opening statement? If lidar is not in final form, how can you claim its final form will arrive before a vision FSD system is completed? Given that Tesla already has mass production and is just waiting on SW (maybe a retrofit HW upgrade will be needed, but I doubt it), any Tesla solution within the next (final lidar dev time + vehicle development time) would still hit mass production first.


Three ways that your statement would be true:
  1. Vision solves FSD, at which point all lidars are in their final form because they were discontinued.
  2. Lidars reach their final form and sit around waiting for the vision NN to be good enough to handle everything else, but not good enough for #1.
  3. Vision is 'solved' but insufficient on its own (for all companies) until lidar is good/cheap/reliable enough; meanwhile, vision does not improve to the level of #1.
 
The lidars that will be used in production vehicles are not in final form yet. No one knows if lidar will be necessary, including Tesla and everyone in this thread.

Using Lidar was not an option for Tesla. It was too expensive and too big. It won't be too expensive and too big by the time FSD vehicles are ready for mass production.

You leave out one thing though... At least according to Tesla, mass production of FSD vehicles started several years ago. The cars just need a simple free motherboard swap and an OTA update!
Let's be a bit pessimistic and say that Tesla still needs one more year to solve it; LIDAR will not be cheap and mature enough by then for elegant integration in regular FSD vehicles...
 
The AP computers used on FSD cars are not in final form right now either (pick whether SW or SW + HW will change). Every current human driver is proof that lidar is not necessary given vision and sufficient computing power - that's how drivers drive today. So we know lidar is not necessary; the only question is the timeline/cost of a solution that does not require it.

This is true for human-level driver safety only. For X-times-better safety you need a different existence proof. For what it's worth, I think it's likely achievable, just not proven yet.
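The "different existence proof" point can be sized with a standard zero-failure reliability bound; the one-fatality-per-100M-miles human baseline is my assumption for illustration:

```python
import math

HUMAN_FATAL_RATE = 1 / 100_000_000  # assumed: one fatal crash per 100M miles

def miles_needed(x_better, confidence=0.95):
    """Failure-free miles required so that P(zero events | true rate =
    baseline / x_better) falls below 1 - confidence (Poisson model)."""
    target_rate = HUMAN_FATAL_RATE / x_better
    return -math.log(1 - confidence) / target_rate

print(f"parity with humans: {miles_needed(1):.1e} miles")
print(f"10x better:         {miles_needed(10):.1e} miles")
```

Hundreds of millions of failure-free miles just for parity, and billions for a 10x claim - which is why a large consumer fleet is the only plausible source of such a proof.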
 