Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

How do you know that? How many radars do you actually need, and why? If someone released a car with twice that number, is that better?

No, twice the radar is not automatically better. But it is reasonable to assume that you need 360-degree radar coverage. So if your car has enough radar to get 360-degree coverage, that is what you need.

Nobody is saying that more sensors are always better. That's a silly strawman. AV companies will determine the number of sensors that gives them 360-degree coverage with good redundancy, and that is what determines how many sensors they need.
 
To drive better than a human you do not necessarily need more sensors than a human, but you certainly need a better driving brain.

Accidents do not happen due to the lack of human sensors, but because of miscalculation or lack of calculation.

More sensors might have advantages, but they are not required. I think this is pretty obvious, and, by the way, Elon Musk thinks so too.
 
To drive better than a human you do not necessarily need more sensors than a human, but you certainly need a better driving brain.

Accidents do not happen due to the lack of human sensors, but because of miscalculation or lack of calculation.

More sensors might have advantages, but they are not required. I think this is pretty obvious, and, by the way, Elon Musk thinks so too.
Except any AI expert will tell you that the visual and parietal cortexes of humans are far more advanced than anything machine learning has been able to reproduce, both in capability and speed. Will AI get there one day? Maybe. But in the meantime, the "obvious" way to get autonomous driving on par with human driving is to increase the sensor input so that it is far greater and more accurate than human eyes and ears - technology that we have available today.
 
No, it actually is not a reasonable assumption. Unless you back up at high speed, a camera should be sufficient to identify any threat from the rear.
I don't know about 360 degree radar, but I can certainly see the need for more radar:

1. Long-range (300ft) forward-facing radar - provides vectors for distant targets to match with the long-range front camera (Elon calls it "noise," but sensor fusion is a rapidly growing technology area in visual processing).
2. Medium-range forward-facing 3D (vectoring) radar - extremely accurate speed and direction of near objects (probably superior to speed and direction determined by visual processing from cameras).
3 and 4. Long-range side-facing radars in the front bumpers - provide speed and distance of vehicles potentially crossing your path (could be weighted in the NN to provide a safety threshold when making right and left turns into and/or across traffic).
5 and 6. Long-range left and right rear-facing radars - provide vectors for distant, potentially overtaking targets to match with the repeater-type cameras.
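For illustration, a radar suite like the six above can be sanity-checked for blind arcs by unioning each sensor's azimuth field of view. All mounting angles and FOV numbers below are invented for the sketch, not real sensor specs:

```python
# Sketch: check the combined azimuth coverage of a hypothetical radar suite.
# Mounting angles (degrees, 0 = straight ahead) and fields of view are
# invented for illustration only.
SUITE = [
    ("long-range front",        0,  18),  # narrow beam for distant targets
    ("medium-range front 3D",   0,  90),
    ("front-left corner",     -70, 120),
    ("front-right corner",     70, 120),
    ("rear-left",             200, 120),
    ("rear-right",            160, 120),
]

def coverage_degrees(suite):
    """Count how many 1-degree azimuth bins at least one radar covers."""
    covered = set()
    for _, center, fov in suite:
        half = fov / 2
        for d in range(360):
            # signed angular distance from bin d to the sensor's boresight
            diff = (d - center + 180) % 360 - 180
            if abs(diff) <= half:
                covered.add(d)
    return len(covered)

print(coverage_degrees(SUITE))  # anything under 360 means blind arcs remain
```

With these made-up numbers the union comes out to the full 360 degrees; drop any one sensor and the count exposes the resulting blind arc.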
 
Except any AI expert will tell you that the visual and parietal cortexes of humans are far more advanced than anything machine learning has been able to reproduce, both in capability and speed.
That is beside the point. Humans cause lots of accidents in spite of the far more advanced capabilities you conjure. These wonderful capabilities seem to do little to prevent accidents. The cause is elsewhere.

Even if an autopilot gets into a situation where it is limited by sensory input, it still has the perfectly safe choice of driving more slowly or calling for help to resolve the problem. My personal impression, however, is that such situations are rare. In almost all situations the cameras of a Tesla seem to suffice quite nicely.

That leaves the argument that in such rare situations an autonomous car with more sensors could drive faster. I'll grant that, and everybody may judge how important it is in relation to the price.
 

A few thoughts:

The system alerted the driver to take over when he had his phone in front of his eyes, presumably blocking the in-cabin camera from seeing his eyes. So while the system does let you take your hands off the wheel and play a game or watch a video on the screen when activated, it still needs to detect your face to know that you are able to take over if needed.

The system automatically used the horn when the car in front was reversing and going to hit them. It also alerted the driver to take over.

The system also alerted the driver to take over when it detected an emergency vehicle approaching from behind.

The guy in the video says that the system will give the driver up to 10 seconds to take over, but in the video it seems like the system asked the driver to take over pretty quickly. There did not seem to be a few seconds of advance notice. Personally, I am a little wary of how well the "10-second" rule will actually work in real-world conditions.

The system also requires a car in front before it will be activated. The guy in the video says it is a safety feature to make sure the car does not hit something it failed to detect. This is an example of limiting the ODD in order to try to reduce safety risks.
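That lead-vehicle requirement is easy to picture as an explicit activation gate. A hypothetical sketch of such an ODD check (condition names and thresholds are invented, not the manufacturer's actual logic):

```python
# Sketch of an ODD activation gate for a hypothetical L3 traffic-jam system.
# All conditions and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class VehicleState:
    on_divided_highway: bool
    lead_vehicle_detected: bool
    speed_kph: float
    driver_face_visible: bool

def can_activate(s: VehicleState, max_speed_kph: float = 60.0) -> bool:
    """The system engages only inside its designed ODD."""
    return (s.on_divided_highway
            and s.lead_vehicle_detected       # refuses to lead on its own
            and s.speed_kph <= max_speed_kph  # traffic-jam speeds only
            and s.driver_face_visible)        # fallback-ready driver

print(can_activate(VehicleState(True, True, 45.0, True)))   # True
print(can_activate(VehicleState(True, False, 45.0, True)))  # False: no lead car
```

Each extra condition shrinks the ODD, which is exactly the trade described above: less capability in exchange for fewer ways to hit something the sensors missed.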

Honestly, the system feels very similar to current AP. It is just doing lane keeping and maintaining distance from a lead car on the highway. The only difference is that because the ODD is more limited and there are safeguards (lead vehicle) there is no hands on wheel requirement when the system is activated. But the system alerts the driver to take over, similar to AP. I feel like if Tesla were willing to limit the ODD in the same way and had good driver monitoring, Tesla could probably make AP on the highway L3.

I feel like L3 is an intermediary step between driver assist (L2) and full automation (L4+). It reduces the hands on wheel requirement compared to L2 but without going full driverless yet. I still don't think there is a ton of value in L3 systems. Personally, I prefer the approach of AV companies like Waymo and Cruise to just focus on L4 from the start. I think it makes more sense to develop a system that is good enough that the human can be just a passenger all the time. It avoids the difficulties with letting the driver not pay attention in some instances but then also needing the driver to take over in other instances.
 
That is beside the point. Humans cause lots of accidents in spite of the far more advanced capabilities you conjure. These wonderful capabilities seem to do little to prevent accidents. The cause is elsewhere.

Even if an autopilot gets into a situation where it is limited by sensory input, it still has the perfectly safe choice of driving more slowly or calling for help to resolve the problem. My personal impression, however, is that such situations are rare. In almost all situations the cameras of a Tesla seem to suffice quite nicely.

That leaves the argument that in such rare situations an autonomous car with more sensors could drive faster. I'll grant that, and everybody may judge how important it is in relation to the price.
Precisely ... humans are far superior in almost every way compared to AI today, except for one critical one: attention span, where the car wins every time. (Reaction time is another, but that's a different story.)
 
The system also requires a car in front before it will be activated. The guy in the video says it is a safety feature to make sure the car does not hit something it failed to detect. This is an example of limiting the ODD in order to try to reduce safety risks.
I guess they could program it to find a Tesla on AP to follow :)
 
A few thoughts:

The system alerted the driver to take over when he had his phone in front of his eyes, presumably blocking the in-cabin camera from seeing his eyes. So while the system does let you take your hands off the wheel and play a game or watch a video on the screen when activated, it still needs to detect your face to know that you are able to take over if needed.

The system automatically used the horn when the car in front was reversing and going to hit them. It also alerted the driver to take over.

The system also alerted the driver to take over when it detected an emergency vehicle approaching from behind.

The guy in the video says that the system will give the driver up to 10 seconds to take over, but in the video it seems like the system asked the driver to take over pretty quickly. There did not seem to be a few seconds of advance notice. Personally, I am a little wary of how well the "10-second" rule will actually work in real-world conditions.

The system also requires a car in front before it will be activated. The guy in the video says it is a safety feature to make sure the car does not hit something it failed to detect. This is an example of limiting the ODD in order to try to reduce safety risks.

Honestly, the system feels very similar to current AP. It is just doing lane keeping and maintaining distance from a lead car on the highway. The only difference is that because the ODD is more limited and there are safeguards (lead vehicle) there is no hands on wheel requirement when the system is activated. But the system alerts the driver to take over, similar to AP. I feel like if Tesla were willing to limit the ODD in the same way and had good driver monitoring, Tesla could probably make AP on the highway L3.

I feel like L3 is an intermediary step between driver assist (L2) and full automation (L4+). It reduces the hands on wheel requirement compared to L2 but without going full driverless yet. I still don't think there is a ton of value in L3 systems. Personally, I prefer the approach of AV companies like Waymo and Cruise to just focus on L4 from the start. I think it makes more sense to develop a system that is good enough that the human can be just a passenger all the time. It avoids the difficulties with letting the driver not pay attention in some instances but then also needing the driver to take over in other instances.
It does seem to operate very similarly to other hands-free L2 systems, given so many limitations even in this very controlled demo. There was discussion upthread about whether carmakers might be able to blur the lines between the various levels, with claims that L3 will be a very clear difference, but if you look at how this one operates, not necessarily so. I'm skeptical the public will think an L3 system this limited is "better" than a door-to-door L2 (especially if some manufacturer creates a "hands-free" version).
 
Precisely ... humans are far superior in almost every way compared to AI today, except for one critical one: attention span, where the car wins every time. (Reaction time is another, but that's a different story.)
When I see how people drive, I conclude that there are more shortcomings in humans than just an insufficient attention span. They drive too closely behind other cars. They perform unsafe lane changes. They drive too fast in adverse conditions. They lose control in unusual steering situations. They ignore the possibility that something in their way is obscured so that they temporarily cannot see it; in other words, they drive by the rule that what they don't see is not there.

To put it simply, many humans have a problem with making proper, logical, safe decisions in the real world. I cannot tell whether they lack the capability to think properly or whether they have that ability and just wilfully or lazily decide not to use it. And this causes practically all accidents.

That is why I cannot accept the argument that
humans are far superior in almost every way
Consequently, the argument that more sensors are needed to make autopilots better than humans is unfounded. To me it is obvious that, once it is properly programmed, a good autopilot with just cameras and perhaps ultrasound sensors will be able to drive much more safely than the average human.

The question remains whether additional sensors may allow an autonomous car to safely drive faster. I think that is currently unimportant and may become important when different autonomous cars compete for speed at high safety levels. As long as they compete with humans, more sensors are not needed. What's needed first is better artificial intelligence. Elon Musk understands this. He is right.
 
The guy in the video says that the system will give the driver up to 10 seconds to take over but in the video, it seems like the system asked the driver to take over pretty quickly. There did not seem to be a few seconds of advance notice. Personally, I am a little wary of how well the "10 second" rule will actually work in real world conditions.
The car doesn't notify you 10 seconds before a situation develops. It just drives very conservatively for 10 seconds, then stops completely with hazard lights flashing if you still haven't responded.
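That behavior amounts to a simple timer-driven fallback. A hypothetical sketch of the state sequence (state names are assumed; only the 10-second window and the hazard-light stop come from the description above):

```python
# Sketch: timer-driven fallback when an L3 system requests a takeover.
# The 10 s window and the hazard-light stop follow the description above;
# the state names and structure are invented for illustration.
def fallback_state(seconds_since_request: float, driver_responded: bool) -> str:
    if driver_responded:
        return "DRIVER_IN_CONTROL"
    if seconds_since_request < 10.0:
        return "CONSERVATIVE_DRIVING"   # slow, cautious, alerts escalating
    return "STOPPED_HAZARDS_ON"         # minimal-risk stop, hazard lights on

print(fallback_state(3.0, False))   # CONSERVATIVE_DRIVING
print(fallback_state(12.0, False))  # STOPPED_HAZARDS_ON
print(fallback_state(5.0, True))    # DRIVER_IN_CONTROL
```

The point of the intermediate state is that the car never needs an instant human response: it buys time by degrading gracefully, then fails to a stop rather than to the driver.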

Honestly, the system feels very similar to current AP. It is just doing lane keeping and maintaining distance from a lead car on the highway. The only difference is that because the ODD is more limited and there are safeguards (lead vehicle) there is no hands on wheel requirement when the system is activated.
There is no eyes on road requirement. That's what makes it L3. We already have "hands off L2", e.g. Supercruise.

But the system alerts the driver to take over, similar to AP. I feel like if Tesla were willing to limit the ODD in the same way and had good driver monitoring, Tesla could probably make AP on the highway L3.
Tesla's main problem is when the lead car changes lanes to avoid a stopped vehicle or other stationary object. Tesla accelerates into the object. The FSD stack will probably handle that better than the AP stack, but it's still just guessing.

I feel like L3 is an intermediary step between driver assist (L2) and full automation (L4+). It reduces the hands on wheel requirement compared to L2 but without going full driverless yet. I still don't think there is a ton of value in L3 systems.
L3 is not about hands on wheel. It's about attention -- it frees you up to do other things. The millions who commute through traffic every day should get a lot of value. Of course there's no benefit when every 60 seconds brings a new construction zone, ambulance or car backing into you. But most commuters drive through heavy traffic, not scripted demos.

Personally, I prefer the approach of AV companies like Waymo and Cruise to just focus on L4 from the start.
The choice for car buyers over the next 5+ years isn't L3 or L4. It's L3 or nothing.
 
The car doesn't notify you 10 seconds before a situation develops. It just drives very conservatively for 10 seconds, then stops completely with hazard lights flashing if you still haven't responded.

Thanks but my concern still stands. Giving the driver a sudden "red hands on wheel" alert at the last second is not going to work since the driver may not have been paying attention. With L3, you have to notify the driver in advance since the driver is not required to pay attention. The system has to give the driver enough time to re-engage with what is going on.

There is no eyes on road requirement. That's what makes it L3. We already have "hands off L2", e.g. Supercruise.

L3 is not about hands on wheel. It's about attention -- it frees you up to do other things. The millions who commute through traffic every day should get a lot of value. Of course there's no benefit when every 60 seconds brings a new construction zone, ambulance or car backing into you. But most commuters drive through heavy traffic, not scripted demos.

Yes, I know that L3 means "eyes off". But in the video, the system disengaged when it could not see the driver's eyes. So even though the driver is allowed to look at the screen or play a game, the system still seems to monitor the driver's eyes. And the system also asked the driver to take over pretty quickly. So I am not sure how "eyes off" the system really is.

Yes, L3 could be useful on long commutes in congested traffic where you could do other things while stuck in a traffic jam. But I still find that of limited use.

The choice for car buyers over the next 5+ years isn't L3 or L4. It's L3 or nothing.

I have to disagree with you there. I think there will be L4 consumer cars that the public can buy in the next 5+ years. Mobileye is promising L4 consumer cars in 2-3 years. And with the great progress we are seeing with L4, I would be surprised if Baidu, Cruise, or Waymo don't offer L4 on consumer cars in 5+ years. In fact, L4 will probably be solved in the next 5 years IMO.
 
Thanks but my concern still stands. Giving the driver a sudden "red hands on wheel" alert at the last second is not going to work since the driver may not have been paying attention. With L3, you have to notify the driver in advance since the driver is not required to pay attention. The system has to give the driver enough time to re-engage with what is going on.



Yes, I know that L3 means "eyes off". But in the video, the system disengaged when it could not see the driver's eyes. So even though the driver is allowed to look at the screen or play a game, the system still seems to monitor the driver's eyes. And the system also asked the driver to take over pretty quickly. So I am not sure how "eyes off" the system really is.

Yes, L3 could be useful on long commutes in congested traffic where you could do other things while stuck in a traffic jam. But I still find that of limited use.
I agree; it's a problem with Mercedes' specific implementation. If anything blocks the eye-monitoring camera, it seems to disable the system. That can easily happen if someone is using a personal device (like the phone in the above example, or a laptop), which kills one major use case for L3 (where presumably the idea is that you should be able to do some work while stuck in traffic). They need to put the camera somewhere that a phone or other device won't block. Requiring the user to use the car's screen is a non-starter, especially given how poorly most automaker UIs are made, and especially with a screen that is "integrated" and sits further from the driver (and can't be rotated or moved closer).

The need for eye tracking, however, is likely there to detect whether the driver is asleep. Because L3 gives you only seconds of notice, the system needs to make sure the driver can take over in that window. Some players, like Volvo, have suggested that L3 is the most dangerous mode for exactly this reason: it "allows" full inattention yet still needs the driver to take over on seconds of notice, and they may not be able to task-switch in time.
I have to disagree with you there. I think there will be L4 consumer cars that the public can buy in the next 5+ years. Mobileye is promising L4 consumer cars in 2-3 years. And with the progress we are seeing with L4, I would be surprised if Baidu, Cruise, or Waymo don't offer L4 on consumer cars in 5+ years. In fact, L4 will probably be solved in the next 5 years IMO.
Is Mobileye really promising L4 in consumer cars, as opposed to fleets? Correct me if I'm wrong, but Baidu, Cruise, and Waymo have zero plans to offer L4 on consumer cars; they are sticking to fleets. Their whole operating model is based on a fleet with support infrastructure behind the scenes, not on a consumer vehicle that operates largely on its own.