Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Google self-driving Cars are officially on the road


replicant

Source:
Notice anything new on the streets of Mountain View, California? Our latest prototype vehicles are ready for the road and a few of them are now cruising around town!


These prototype vehicles are designed from the ground up to be fully self-driving. They’re ultimately designed to work without a steering wheel or pedals, but during this phase of our project we’ll have safety drivers aboard with a removable steering wheel, accelerator pedal, and brake pedal that allow them to take over driving if needed. The prototypes’ speed is capped at a neighborhood-friendly 25mph, and they’ll drive using the same software that our existing Lexus vehicles use—the same fleet that has self-driven over 1 million miles since we started the project.

As we start to cruise around the neighborhood, we really want to hear what our neighbors think. To learn more about our project or to leave feedback on how we're driving, please visit our website: Google Self-Driving Car Project.

See you on the road!


More info at Google Self-Driving Car Project
 
Notice anything new on the streets of Mountain View, California? Our latest prototype vehicles are ready for the road and a few of them are now cruising around town!


Yep, I saw them for realz:
[attachment: sdcar1.png]

[attachment: sdcar2.png]


And a garage full of them (Lexus hybrid & Google NEV):
[attachment: sdcar3.png]


The future is upon us.
 

Self-driving cars: already driving unsafely.

Like I predicted.

The Google engineers have proven that they can't be trusted to program a self-driving car to drive safely. Liability concerns are going to take these things off the road.

The next time Google dangerously cuts someone off, they're likely to get a serious lawsuit. Hopefully it will cost them enough that they'll back off on their idiot plan to put incompetent software on the road.
 
Self-driving cars: already driving unsafely.

Like I predicted.

The Google engineers have proven that they can't be trusted to program a self-driving car to drive safely. Liability concerns are going to take these things off the road.

The next time Google dangerously cuts someone off, they're likely to get a serious lawsuit. Hopefully it will cost them enough that they'll back off on their idiot plan to put incompetent software on the road.

Can't tell if you are serious or not. I'm not saying automated cars are safer than every human driver on the road; however, I'd bet that today they are already safer than the average human driver. And in 10-20 years I can't see any way they won't be safer than any of us possibly could be.
 
Self-driving cars: already driving unsafely.

Like I predicted.

The Google engineers have proven that they can't be trusted to program a self-driving car to drive safely. Liability concerns are going to take these things off the road.

The next time Google dangerously cuts someone off, they're likely to get a serious lawsuit. Hopefully it will cost them enough that they'll back off on their idiot plan to put incompetent software on the road.

You must have missed the update: UPDATE: Self-driving car operator denies near collision with Google self-driving car | Fusion

Sounds like a non-story.
 
Can't tell if you are serious or not. I'm not saying automated cars are safer than every human driver on the road; however, I'd bet that today they are already safer than the average human driver. And in 10-20 years I can't see any way they won't be safer than any of us possibly could be.

I second this. Even IF self-driving cars occasionally do something boneheaded the question is are they doing so less often than humans. If the answer is yes, bring on the self driving cars.
 
Airbags do kill people. Literally. Many die because of airbag deployment who would have walked away alive without one. Sure, airbags save many more lives than they take. But the point is, societies figured out how to deal with the problem, and airbag manufacturers do not face exorbitant lawsuits just because the tech is not perfect and someone (including children) got killed here or there. The point is, self-driving tech does not have to be perfect to be mass deployed. It just has to save lives on average.

As for Google, Google researchers are working on the bleeding edge of computer vision, often holding state-of-the-art published results on tough benchmarks. For example, Google took first place in the 2014 ImageNet competition with the somewhat cumbersome but efficient GoogLeNet convolutional architecture. Lots and lots of research is coming out of Google's research labs around the world. So it is really cool that Google is helping to develop self-driving tech; it has some of the world's top researchers working on it.
 
There is a certain belief that some have, and expressed here by several posters, that self driving cars will "never" happen. Meaning, they will never be able to replace human drivers on a large scale. The basic argument of these posters is that there is "always" going to be one or more situations where a self driving car is not able to correctly respond as fast as a human being who is paying attention, given the very large number of unexpected things that can and do happen on the road.

While this is certainly true, they seem to be ignoring the bigger picture. When you reduce traffic accidents and fatalities by say 90%+, you are saving 27,000 lives per year in the US alone. OK, so the software one day encounters a situation that is totally unexpected and the car runs over a small child and kills the child. That becomes the headline of the year, much like when the first Tesla caught fire and burned up. So, one person dies and we then ignore the 27,000 other lives saved, and stop automated vehicle deployment? Come on.

Yes, there will be front-page news for days on end. Yes, there will be a huge lawsuit. And yes, Google/Bosch/Tesla/Audi/Ford/whomever may win or lose that lawsuit. It simply doesn't matter in the bigger picture. Laws will be in place to handle these situations, such that the rollout can proceed. The reason is threefold.

First, as it stands now, the cars have already been proven safer than human drivers.

Second, every "unexpected" situation those cars encounter is recorded and analyzed back at the Googleplex. Any required modifications to the control algorithms are then implemented, regression-tested against all previous questionable encounters, and rolled out to the fleet. The car that had been 99.9999% safe is then 99.99999% safe. Lather, rinse, repeat...

Third, the cost savings dwarf the implementation costs, including lawsuits. I pay about $1,000 per year for auto insurance. Assume half of that is collision coverage. With 150,000,000 drivers doing the same, that comes to $75 billion per year saved in insurance costs alone due to reduced collisions. A wrecked car is a valuable asset that gets wasted; no more of that either. OK, insurance companies will have to find some new revenue sources. There will also be fewer vehicles sold, since each vehicle will be better utilized instead of sitting idle 97% of the day. That is a societal good, as is the need for fewer parking spaces: denser city cores where cars are needed less, and no more traffic jams on freeways because the cars are all talking to each other and spaced more closely. On and on and on and on and on...
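The back-of-the-envelope numbers above can be checked in a few lines. Every input here is the post's own assumption (roughly 30,000 US road deaths per year, a 90% reduction, 150 million drivers each paying about $500 for the collision half of a $1,000 premium), not an official statistic:

```python
# Sanity check of the savings estimates in the post above.
# All inputs are the post's assumptions, not official statistics.
fatalities_per_year = 30_000      # approximate annual US road deaths
reduction = 0.90                  # hypothesized 90% reduction from automation
lives_saved = fatalities_per_year * reduction

drivers = 150_000_000             # assumed number of insured US drivers
collision_premium = 500           # assumed collision share of a $1,000 premium
insurance_saved = drivers * collision_premium

print(f"Lives saved per year: {lives_saved:,.0f}")          # 27,000
print(f"Insurance saved per year: ${insurance_saved:,}")    # $75,000,000,000
```

Both figures match the claims in the post: about 27,000 lives and $75 billion per year.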

It's just hardware and software. Seems silly to be betting against improved hardware and software in this day and age. There will come a time when most people won't be allowed to drive cars, perhaps unless they get a special permit, most likely for historical vehicles. You will be telling your grandchildren bedtime stories that start with "One day grandpa was driving his car...", and they will interrupt you and ask what that means.

And I will also bet you that certain designated freeways will be converted first to having an "automated vehicle only" lane, then two, then eventually the entire freeway. I look forward to the coming revolution. Maybe even make a few $ in the process :smile:

RT
 
Airbags do kill people. Literally. Many die because of airbag deployment who would have walked away alive without one. Sure, airbags save many more lives than they take. But the point is, societies figured out how to deal with the problem, and airbag manufacturers do not face exorbitant lawsuits just because the tech is not perfect and someone (including children) got killed here or there.
Actually, they did. The fact that the government had mandated airbags meant the government ended up paying out.

They then changed the airbag design, but I still know people who got the kill switches for good reason.

The point is, self-driving tech does not have to be perfect to be mass deployed. It just has to save lives on average.
This is dead wrong; the psychology of this is quite well understood because of the history of fully automated trains and airplanes. It has to be about 100 times better.

Don't get me wrong, I'd love to see it, but *automated trains have worked perfectly since the 1970s* and we still can't get them deployed anywhere with grade crossings! There's a psychological demand among most people to have a driver driving the car.

- - - Updated - - -

And I will also bet you that certain designated freeways will be converted first to having an "automated vehicle only" lane, then two, then eventually the entire freeway.

Now, that's much more likely. I can certainly see automated driving in a freeway-only role (no grade crossings), or in an urban-center-below-20-mph role (nobody minds too much if there's a collision)... just not in the intermediate role of fast rural and semi-rural roads.

Which is the only important one, frankly. Private cars are fundamentally impractical in big cities due to congestion. Urban freeways are an inefficient and impractical construction which is barely affordable, and will at best become an exotic toll-road scheme for the rich. Basically, cars are best for rural areas. And that's where automated cars are so far behind that they won't be ready in 50 years.

And since automated cars won't be mandated on those roads... Google *will* be hit with *enormous* penalties the first time they kill someone by running automated on a rural road.

Our grandchildren, if they survive global warming, will know exactly what "driving a car" means. Even if only the farmers actually do it.

Musk is a very smart man, and he isn't expecting fully automated cars. He's figured out what you can implement without triggering the "OH GOD ROBOT CARS ARE KILLING US ALL" reaction -- while keeping a 'responsible' driver in the driver's seat.

One possible scenario is for anti-collision technology to be mandated; so that drivers are still driving the car, but if they do a maneuver which puts them at risk of colliding, their car will simply refuse to respond. "No, you can't follow the guy in front of you that closely. No, you are not permitted to swerve into that tree." This would be a vast improvement in safety without triggering the 'automatic killer robot car' reaction.
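A refuse-unsafe-inputs system like the one described could, in its simplest form, gate the driver's throttle on a minimum time headway to the car ahead. This is an illustrative sketch of the idea only, not any shipping system; the function names and the two-second threshold are assumptions:

```python
def time_headway(gap_m: float, speed_mps: float) -> float:
    """Seconds until we reach the car ahead at our current speed."""
    if speed_mps <= 0:
        return float("inf")  # stopped: no closing risk from speed alone
    return gap_m / speed_mps

def filter_throttle(requested_throttle: float, gap_m: float, speed_mps: float,
                    min_headway_s: float = 2.0) -> float:
    """Pass the driver's throttle through unless headway is dangerously short."""
    if time_headway(gap_m, speed_mps) < min_headway_s:
        return 0.0  # "No, you can't follow the guy in front of you that closely."
    return requested_throttle

# Following at 30 m/s with only a 30 m gap -> 1 s headway: throttle refused.
print(filter_throttle(0.8, gap_m=30, speed_mps=30))   # 0.0
# With a 90 m gap -> 3 s headway: the driver's input passes through.
print(filter_throttle(0.8, gap_m=90, speed_mps=30))   # 0.8
```

The driver stays in control everywhere else; the car only vetoes the specific inputs that would create a collision risk, which is the behavior described above.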
 
AI doesn't progress exponentially. Empirical fact.

The difficulty of pattern-matching problems has been underestimated by researchers (and even more underestimated by the general public) repeatedly for the last 50 years. Our best pattern-matching programs still *suck*.

A large part of this is that we still haven't figured out how humans do pattern-matching -- and humans do it *very very well*. And scientific research of the "how does this existing thing operate" variety *also* doesn't progress exponentially -- pretty much linear with jumps. When people say "We'll need a breakthrough to do this", they're not kidding.

This is in contrast to engineering-type research, which can proceed exponentially or faster.

I don't see any reason the human in the car should be doing anything but the pattern-matching to watch for 'weirdness'. Everything routine should be automated. But that isn't fully automated: the human has to be alert at all times.
 
AI doesn't progress exponentially. Empirical fact.

The difficulty of pattern-matching problems has been underestimated by researchers (and even more underestimated by the general public) repeatedly for the last 50 years. Our best pattern-matching programs still *suck*.

A large part of this is that we still haven't figured out how humans do pattern-matching -- and humans do it *very very well*. And scientific research of the "how does this existing thing operate" variety *also* doesn't progress exponentially -- pretty much linear with jumps. When people say "We'll need a breakthrough to do this", they're not kidding.

This is in contrast to engineering-type research, which can proceed exponentially or faster.

I don't see any reason the human in the car should be doing anything but the pattern-matching to watch for 'weirdness'. Everything routine should be automated. But that isn't fully automated: the human has to be alert at all times.

I'd have to disagree about that. I'm able to search for things like "party" through my Google image library and it'll pull up results with no labelling or categorization whatsoever.

The Drive PX autonomous driving demos do an incredible job of labeling and identifying things in a driving situation.
 
I'd have to disagree about that. I'm able to search for things like "party" through my Google image library and it'll pull up results with no labelling or categorization whatsoever.
And the Google algorithm for identifying gorillas in photos was marking black people as gorillas. Look it up.

The parts which are hard aren't the parts which you, as someone who hasn't looked into it, *think* are hard.

The Drive PX autonomous driving demos do an incredible job of labeling and identifying things in a driving situation.
No, actually it stinks. It's ASS. It's AWFUL. And yes, I've seen the demos, which are shown for advertising purposes and are therefore Panglossian.

Here's the reason people are overly optimistic about this stuff: people, on the whole, don't have a proper perception of which parts of the problem are hard to automate. Computers do stuff which is dead easy for computers and people are very impressed because that stuff is hard for humans. They fail to do little (but important) things which are dead easy for humans, and the casual observers don't notice because that stuff, in their heads, seems so easy they take no notice of it.

The future, for the next 50 years or so, is, well, I suppose you could say cyborgs. Computers doing what they're best at, humans doing what computers suck at. Which is actually a pretty optimistic vision of the future.

I'm glad Musk has figured this out.
 
Here's the reason people are overly optimistic about this stuff: people, on the whole, don't have a proper perception of which parts of the problem are hard to automate. Computers do stuff which is dead easy for computers and people are very impressed because that stuff is hard for humans. They fail to do little (but important) things which are dead easy for humans, and the casual observers don't notice because that stuff, in their heads, seems so easy they take no notice of it.


I think that's precisely the reason why things like Google image search and Drive PX are so incredible now. Why do you not like where Drive PX and Mobileye are now?
 
I agree with @neroden. Software-driven image recognition is incredibly primitive compared to a human child's ability to recognize an incredible variety of objects and evaluate complex situations. Computers will get better at image recognition, but it is going to be a gradual improvement. The autonomous driving demos that companies release should be viewed as advertising, not as reality.
Tesla Autopilot as described by Elon is coming, this year I hope, but I don't know when self-driving cars that are better than humans in every situation will arrive.