
Autonomous Vehicles

That's why I favor semi-autonomous vehicles that assist drivers in emergency situations and are of no use to terrorists.
IMO fully autonomous vehicles would need a lot of research and development work without really helping normal drivers, who only need assistance in emergencies.
 
From a Forbes article, originating at the LA Auto Show: Quanergy CEO Louay Eldada states "You cannot build autonomous cars without LiDar, and anyone who thinks differently, please challenge me"...

Quanergy is in the business of manufacturing <drum roll here...> solid state Lidars :rolleyes:

There you go Elon, looks like you are barking up the wrong tree :wink:

http://www.forbes.com/sites/lianeyv...tonomous-vehicles-in-2019-as-2020-model-year/
I refer to my previous comments on the topic: Inside the fake town in Michigan where self-driving cars are being tested - Page 4

I wouldn't make absolute statements like "it can't be done without lidar" but I don't really see why you wouldn't use it, particularly as the cost comes down.
 
From a Forbes article, originating at the LA Auto Show: Quanergy CEO Louay Eldada states "You cannot build autonomous cars without LiDar, and anyone who thinks differently, please challenge me"
Saying "you cannot" makes no sense, because every day millions of cars are driven without lidar. If humans with their extremely limited sensor suite can drive, there's no reason to think an equivalent sensor suite isn't enough. One-eyed people drive just fine, so it doesn't even require stereo vision. Deaf people drive, so no need for audio input. We can rule out touch, taste and smell as well. So if a person can drive with just one optical sensor (one with poor depth perception, no infrared, and slow response times, but gimbaled appropriately), it follows that a machine with the same sensor suite could be made to drive.

That isn't to say that using multiple cameras, sonar, radar, and lidar might not make the problem easier with today's technology, but saying "cannot" is obviously very short-sighted.
 
I refer to my previous comments on the topic: Inside the fake town in Michigan where self-driving cars are being tested - Page 4

I wouldn't make absolute statements like "it can't be done without lidar" but I don't really see why you wouldn't use it, particularly as the cost comes down.

Doug,
I think the relevant question here is whether Tesla plans on using lidar or not. Per Elon's recent tweet, they are going to go gangbusters hiring software engineers to work on autonomous vehicles; Elon said this is a super high priority. Well before the software gets written, likely even started, they will have to pick the "hardware platform" the software can make use of for autonomous driving. Given this, I would think Tesla has pretty much already selected the hardware suite they plan on using for the next generation of hardware that will support full autonomous driving. I don't imagine Tesla will roll out an "interim" suite of hardware between now and the final configuration that supports full autonomy. If Tesla thinks they can do it without lidar, then the Quanergy guy is dead wrong. As Woof pointed out, if a human can do it with one eye, then it's really just a matter of making the software smart enough. No doubt it is going to be some seriously smart software.

I think if Tesla rolls out a vehicle with a hardware platform that supports full autonomy in, say, 12 months, even if the software is lagging like V7.0, then that is the "mother of all" game changers. It would mean both Google and Quanergy got it "wrong", since Google is also convinced that lidar is required. And there could be something like 50,000 cars produced 12-18 months from now that could become fully autonomous with a single software update. I suspect that's what Elon has in mind with the "super high priority" tweet.

Robert
 
Sure you can get by driving with one eye, but I'd prefer you used both. If Tesla's goal is to do autonomous driving without lidar, I expect they can do it. I'm just not seeing why they wouldn't use lidar once the cost comes down. Two-dimensional camera data is good. Two-dimensional camera data correlated with a 3D point cloud is better.

Also I wouldn't at all be surprised if Tesla rolled out an interim suite of hardware. That's pretty much SOP for them.
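To make the "2D camera data correlated with a 3D point cloud" idea concrete, here's a minimal sketch of one common approach: projecting lidar points (already in the camera's coordinate frame) onto the image plane with a pinhole camera model, so pixels can be tagged with real depths. The focal length, image center, and point coordinates are arbitrary example values, not anything from a real sensor.

```python
# Toy illustration: project lidar points (in the camera frame) onto a
# pinhole camera's image plane, so each pixel can be tagged with a depth.
# Focal lengths (fx, fy) and principal point (cx, cy) are made-up values.

def project_point(x, y, z, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Pinhole projection: 3D point (x right, y down, z forward) -> pixel (u, v)."""
    if z <= 0:
        return None  # point is behind the camera, not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A few example lidar returns, in meters, in the camera frame.
cloud = [(0.0, 0.0, 10.0), (1.0, -0.5, 20.0), (-2.0, 0.2, 5.0)]

for x, y, z in cloud:
    uv = project_point(x, y, z)
    if uv is not None:
        u, v = uv
        print(f"pixel ({u:.0f}, {v:.0f}) -> depth {z:.1f} m")
```

With camera intrinsics known, each projected point gives a pixel an exact range, which is the "correlation" being talked about: the 2D image supplies texture and classification, the point cloud supplies geometry.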
 
Does anyone have any thoughts on what happens to the signal-to-noise ratios when every car on the road has active sonar, radar, and lidar? Imagine the cacophony of a traffic jam on a six-lane highway, with all the cars bouncing signals off each other. It would be very interesting to visualize all those signals!
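One back-of-the-envelope way to think about the question: if each sensor fires short pulses at uncorrelated times, the chance that at least one of the other cars is emitting while yours listens is 1 - (1 - d)^(n-1) for n cars with transmit duty cycle d. The duty cycle below is an invented illustrative number, and this ignores directionality, coding, and wavelength separation, all of which real systems use to reject each other.

```python
# Toy interference model: n cars, each transmitting a fraction d of the
# time at uncorrelated moments. Probability that at least one *other*
# car is emitting during your listening window: 1 - (1 - d)^(n - 1).
# The 0.1% duty cycle is an invented example value.

def collision_probability(n_cars, duty_cycle):
    return 1.0 - (1.0 - duty_cycle) ** (n_cars - 1)

for n in (2, 10, 100, 500):
    p = collision_probability(n, duty_cycle=0.001)
    print(f"{n:4d} cars -> P(some overlap) = {p:.3f}")
```

Even in this crude model the overlap probability climbs steadily with traffic density, which is why interference mitigation (pulse coding, scanning patterns, narrow beams) is part of any real multi-sensor deployment.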
 
Ralph Nader against self-driving vehicles.

Proponents have hailed self-driving technology as the next revolution in vehicle safety, potentially with bigger implications than seatbelts and airbags. However, Nader predicts that such features will actually exacerbate problems caused by drivers who are not paying attention to the road. "It's leading to the emerging great hazard on the highway, which is distracted driving," the safety advocate told Automotive News.
Ralph Nader: Self-driving cars to worsen distracted driving | LeftLaneNews
 
Will it recognise flashing? If you come to an impasse with an autonomous car and you flash it to go first, will it do it?

Not sure if it's illegal in the States to do that anyway.
 
On why we need lidar for Level 4 (fully autonomous): it is simply about overlap within the electromagnetic spectrum. Visible-light cameras, radar and sonar all have overlap. To get to Level 4 you'll need overlap that can detect small moving objects at speed, during varying/challenging environmental conditions, and with a wider forward FOV (field of view) than a human.

For instance: a fully autonomous car has to be much better at seeing (sensory overlap, plus data that cannot be seen by a human) than a human, as it won't be able to handle contextual information as well... at first. This gets much better as the overall 'learning' is enhanced by each mile driven and shared with the general learning of the fleet.

This rabbit hole goes very deep, but think of how hard it would be to make a decision to stop, turn, slow down or speed up if you didn't have any contextual information about yourself or the environment.

Also, realize that we'll be living with Level 3 systems for years prior to a vehicle being capable of Level 4 in all situations, environmental conditions and extreme scenarios. The Google car will most likely start as Level 4, but limited to speed (<25MPH), travel radius (<5 miles), environmental conditions (middle of the day and no rain or wind) and nominal scenarios (grocery store or mall trips). It might not tackle any areas with hills or difficult intersections for instance.
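The sensory-overlap point above can be made concrete with a toy coverage table: for each driving condition, check that at least two independent sensor modalities still function. Which sensor works in which condition below is my rough assumption for illustration, not measured data.

```python
# Toy redundancy check: require >= 2 usable sensor modalities per condition.
# The works-in-condition table is a rough assumption, purely illustrative.

SENSOR_COVERAGE = {
    "camera": {"day", "night_lit"},
    "radar":  {"day", "night_lit", "night_dark", "fog", "heavy_rain"},
    "lidar":  {"day", "night_lit", "night_dark"},
    "sonar":  {"day", "night_lit", "night_dark", "fog", "heavy_rain"},
}

def redundancy(condition):
    """List the sensors assumed usable in a given condition."""
    return [name for name, ok in SENSOR_COVERAGE.items() if condition in ok]

for cond in ("day", "night_dark", "heavy_rain"):
    sensors = redundancy(cond)
    status = "OK" if len(sensors) >= 2 else "GAP"
    print(f"{cond:10s} {status}: {sensors}")
```

The point of the exercise: adding a modality with a different part of the spectrum (like lidar) shrinks the set of conditions where the car is down to a single sensor.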
 
For those worried about malfunctions, there will be redundancies built in. I'm thinking at least three levels of redundant sensing mechanisms (cameras, radar and sonar are already somewhat redundant). I'm thinking cameras/radar will do the heavy lifting, with failsafes built in. They will work independently of each other and keep working if one of the systems experiences an error. The car will also pull over to the shoulder as soon as it is safe if one of the systems is malfunctioning.
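The "independent channels, pull over if one fails" idea is essentially a degraded-mode failsafe policy. Here's a minimal sketch of what that decision logic could look like; the three channels, thresholds, and responses are my illustrative assumptions, not any manufacturer's actual design.

```python
# Minimal sketch of a redundant-sensing failsafe: three independent
# channels report health each cycle; losing one degrades the car to a
# pull-over state instead of continuing with reduced perception.
# Policy and channel names are illustrative assumptions.

def drive_decision(camera_ok, radar_ok, sonar_ok):
    healthy = sum((camera_ok, radar_ok, sonar_ok))
    if healthy == 3:
        return "continue"          # full redundancy available
    if healthy == 2:
        return "pull_over_safely"  # one channel lost: find the shoulder
    return "emergency_stop"        # multiple failures: stop immediately

print(drive_decision(True, True, True))    # all channels healthy
print(drive_decision(True, False, True))   # radar fault
print(drive_decision(False, False, True))  # camera + radar fault
```

Note the "pull over safely" branch assumes the remaining sensors can still find the shoulder, which is exactly the concern raised below about Hwy 1 along Big Sur.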
 
For those worried about malfunctions, there will be redundancies built in. I'm thinking at least three levels of redundant sensing mechanisms (cameras, radar and sonar are already somewhat redundant). I'm thinking cameras/radar will do the heavy lifting, with failsafes built in. They will work independently of each other and keep working if one of the systems experiences an error. The car will also pull over to the shoulder as soon as it is safe if one of the systems is malfunctioning.
In which case recognizing a shoulder becomes critical, as in driving south on Hwy 1 along Big Sur!!