
Neural Networks

There's a third mode:
  • The car has detected a potential (avoidable) collision and the driver starts to depress the brake pedal: the AEB immediately converts that into a 100% brake application.

I have experienced this while changing lanes. The car in front decided to change lanes at the same time, the collision warning came on, and the moment I pushed the brake the car performed an emergency brake. I could feel the brake pedal going down faster than I would press it myself. It was a strange feeling, but at the same time quite reassuring.
 
AEB wasn't just a crash mitigation system even in AP1. The NHTSA report into the Florida crash gave quite a lot of detail on the behaviour of AP1; in particular, as well as the two well-known modes of operation:
  • Collision warning - the car detects a potential (but not yet inevitable) collision and sounds a warning but does not intervene.
  • AEB - the car detects an inevitable collision and applies the brakes to mitigate it.
There's a third mode:
  • The car has detected a potential (avoidable) collision and the driver starts to depress the brake pedal: the AEB immediately converts that into a 100% brake application.

AEB and FCW are separate systems with different purposes.

As to that third mode: that was something they added in a SW update when they realized that AEB wasn't activating because people were braking, just not braking hard enough.
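A minimal sketch of that logic, with invented names and thresholds: while a forward-collision warning is active, a driver brake input that is present but insufficient gets boosted to full braking.

```python
# Hypothetical sketch of the "third mode" (brake-assist boost): while a
# forward-collision warning is active, a driver brake input that is present
# but not strong enough is converted into a 100% brake application.
# All names and thresholds are invented for illustration.
def commanded_brake(fcw_active: bool,
                    driver_brake: float,     # pedal input, 0.0..1.0
                    required_brake: float    # braking needed to avoid the collision
                    ) -> float:
    if fcw_active and 0.0 < driver_brake < required_brake:
        return 1.0           # driver is braking, but not hard enough: go to 100%
    return driver_brake      # otherwise pass the pedal input straight through

assert commanded_brake(True, 0.3, 0.8) == 1.0    # gentle braking gets boosted
assert commanded_brake(False, 0.3, 0.8) == 0.3   # no warning: no intervention
```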

In any case I was just picking a known point back in time when we knew exactly what AEB was designed to do. The last time it was clear to me was before firmware 7 with AP1. At that time the manual made it abundantly clear that it was a crash mitigation system only, and would drop the speed by 25 mph in the event of an impending unavoidable accident.

I needed something clearly defined to illustrate how a shadow mode could be used to detect false positives, or false negatives. Obviously AEB didn't remain fixed with what it used to be.
 
So far, the only thing all production NNs do is identify patterns. The Tesla and Mobileye NNs identify lane markers, signposts, cars, etc. What they don't do is a very long list. They don't give the car steering and accelerator inputs; that's done by traditional programming.
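To illustrate that split, here's a minimal hypothetical sketch: the NN (represented by a stub) does perception only, and ordinary hand-written code maps its detections to steering and accelerator commands.

```python
# Hypothetical sketch of the perception/control split described above:
# detect_lanes_and_cars() stands in for the NN (pattern recognition only);
# control() is ordinary hand-written code that produces actuator commands.
from dataclasses import dataclass

@dataclass
class Perception:
    lane_center_offset_m: float    # how far we are from the lane centre
    lead_car_distance_m: float     # distance to the car ahead (inf if none)

def detect_lanes_and_cars(camera_frame) -> Perception:
    """Stand-in for the NN: maps pixels to detections, makes no decisions."""
    raise NotImplementedError("in reality, a trained network runs here")

def control(p: Perception, speed_mps: float) -> tuple[float, float]:
    """Traditional programming: turn detections into (steering, accel)."""
    steering = -0.1 * p.lane_center_offset_m          # proportional steer
    following_gap = 3.0 * speed_mps                   # keep roughly 3 s headway
    accel = 0.5 if p.lead_car_distance_m > following_gap else -1.0
    return steering, accel

print(control(Perception(0.2, 100.0), speed_mps=25.0))   # (-0.02, 0.5)
```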

Is that true? That's surprising to me. What about the neural networks that aren't currently controlling production cars? For instance, is the steering, accelerating, and braking in Waymo's test cars controlled by neural networks?
 
AFAIK, Waymo relies heavily on lidar point clouds and high-resolution maps (far more detailed than ordinary navigation maps). They don't actually use many NNs at all. It is mostly traditional programming. At its core is a statistical decision engine that decides what to do based on probabilities of what it thinks it is sensing.

Tesla and Waymo have quite different approaches. Waymo makes sure it isn't bumping into anything by having a lidar sweep through the environment. Tesla does this by visual processing, but the only way to decode a visual scene is through an NN, so that's what Tesla has to use.

Waymo is far, far more sophisticated when it comes to programming the rules of the road: what stop signs mean, what signal lights mean, even having algorithms like creeping forward at a 4-way stop to signal intent that you'll be the next to go. But it is all hand coded.
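For what a "statistical decision engine" could look like in miniature (entirely illustrative names and numbers): weigh each candidate action by the probability of each interpretation of the sensor data, and pick the action with the lowest expected cost.

```python
# Illustrative sketch of a decision engine acting on sensing probabilities:
# pick the action with the lowest expected cost. All numbers are made up.
def expected_cost(action: str,
                  scene_probs: dict[str, float],
                  cost: dict[tuple[str, str], float]) -> float:
    return sum(p * cost[(action, scene)] for scene, p in scene_probs.items())

scene_probs = {"pedestrian": 0.2, "plastic_bag": 0.8}   # what the sensors suggest
cost = {
    ("brake", "pedestrian"): 1,       ("brake", "plastic_bag"): 10,
    ("continue", "pedestrian"): 1000, ("continue", "plastic_bag"): 0,
}
best = min(("brake", "continue"), key=lambda a: expected_cost(a, scene_probs, cost))
print(best)   # "brake": even a 20% chance of a pedestrian dominates the decision
```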
 
Waymo... They don't actually use many NNs at all. It is mostly traditional programming. At its core is a statistical decision engine that decides what to do based on probabilities of what it thinks it is sensing.

Tesla and Waymo have quite different approaches. Waymo makes sure it isn't bumping into anything by having a lidar sweep through the environment. Tesla does this by visual processing, but the only way to decode a visual scene is through an NN, so that's what Tesla has to use.

Waymo uses NNs for object detection just like everyone else.
This applies both to their lidar data and their camera data (they have a 12-camera vision system in addition to all their lidars).
 
I think a lot of the confusion here comes from Nvidia’s BB8 concept car, which is the only end-to-end ‘driving network’ that’s been demonstrated, as far as I know. I wouldn’t want to get in it.

NNs are good at generalizing stuff... that’s why they’re good at image processing tasks. You can show it lots of stuff, and it’ll generalize it. It’s not flawless. It’s hard to understand exactly what it’s doing in each case. It’d be very difficult to make safe, predictable driving control that way - and very inefficient.

But, it’s a super speedy way to get good feature classification in images, which is crucial. It opened the floodgates to everyone being able to get great object detection and classification - which took Mobileye years and years to do with classical techniques (SIFT and HOG and all that). Hence the “vision” part of “Tesla Vision”.

Beyond that, I don’t know there are any significant neural network breakthroughs that would be beneficial or practical for autonomous driving (and I’m not sure any are even needed). Feed it image data, it works out where it is in the world, and what not to bump into. Feed it map data, and it knows where it should go, how fast, which lane, what the rules of that road are etc. Put the two together, and you’ve got a system that knows where to go, and how to get there without crashing.
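As a toy sketch of "putting the two together" (all names hypothetical): the map supplies the route and speed limit, the vision NN supplies a safety cap, and the planner combines them.

```python
# Hypothetical sketch: map data supplies where to go and how fast; the
# vision NN supplies what not to bump into; the planner combines the two.
from dataclasses import dataclass

@dataclass
class Waypoint:                    # from map data
    heading_deg: float
    speed_limit_mps: float

@dataclass
class VisionOutput:                # from the image-processing NN
    max_safe_speed_mps: float      # capped by detected obstacles

def plan_step(wp: Waypoint, vision: VisionOutput) -> tuple[float, float]:
    """Follow the map's plan, capped by what vision says is safe."""
    return wp.heading_deg, min(wp.speed_limit_mps, vision.max_safe_speed_mps)

print(plan_step(Waypoint(90.0, 30.0), VisionOutput(12.0)))   # (90.0, 12.0)
```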

As Elon said a couple of years ago: “we know what to do, and how to do it, and we’ll be there in a few years”. We’re just in the waiting room at the moment.
 
Yes, I think the confusion over NN capabilities stems from people like Elon himself. He doesn’t waste an opportunity to scare people about AI, so people naturally think that thinking machines are just around the corner. Have a deep conversation with Siri sometime to get an idea of how far we are from that.
 
Why are the AI guys all using Python? Are there some C++ or C libraries around?

Python has some great open-source libraries for "AI":

- Data pre-processing libraries like NumPy and pandas.
- A robust machine learning library, scikit-learn.
- And on top of that, the two major open-source deep learning frameworks today run on Python: TensorFlow (Google) and PyTorch (Facebook), plus Keras, a high-level library to use on top of TensorFlow.
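For a feel of the workflow, here's a minimal Keras-on-TensorFlow example trained on synthetic data (the data and architecture are just placeholders):

```python
# A minimal Keras-on-TensorFlow example: a tiny binary classifier trained
# on synthetic data, just to show the shape of the workflow.
import numpy as np
from tensorflow import keras

# Synthetic stand-in data: 1000 samples, 8 features, binary labels.
X = np.random.rand(1000, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")   # label 1 when feature sum > 4

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.2f}")
```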
 
If you're interested in Neural Networks and Deep Learning, can code a bit, but do not have a PhD in Maths :p, there's a great free online course by the Data Institute (University of San Francisco) and Jeremy Howard (ex-McKinsey, Kaggle Grandmaster, former President of Kaggle & more).

It's called "Deep Learning for Coders" from Fast.ai and was built out of a 7-week program (2 hours of class plus ~20 hours of student homework per week).

Here is the detailed video syllabus for each part (warning: it's a massive list, but you can zoom in on specific topics):

Part 1 (2016) with Python 2.7, Keras and Theano: Part 1: complete collection of video timelines

Part 2 (2017) with Python 3.5, Keras and TensorFlow: Part 2: complete collection of video timelines

A new version of Part 1 (aka Part 1 V2), using PyTorch, started last week. Its free online version will be released in January 2018.

T.
 
Based on what we know, Tesla are only using NNs for vision tasks. This implies that the actual driving policy is "regular" code, and so the driving policy could have been developed separately, as long as the integration with the NN was defined in advance.

However, I wonder if some aspects of driving policy should be implemented in NNs.

Example might be roundabout policy: in France, roundabouts have two different "right of way" policies - the default is that vehicles entering the roundabout have right of way, but in some cases, vehicles already on the roundabout have right of way. This is indicated by a sign at the entry point. Of course, not every driver reads the signs, so it is possible that people will give way on the roundabout when they shouldn't.
Due to the "fuzzy" nature of the problem, an NN might be better placed to handle the whole roundabout piece than conventional code.
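To make that split concrete, here's a hypothetical sketch of a hand-coded roundabout entry policy consuming NN perception outputs; the `slowing` flag is exactly the fuzzy judgment (is that driver actually giving way?) that conventional code struggles with.

```python
# Hypothetical sketch of a hand-coded roundabout entry policy. The perception
# inputs (sign-derived priority rule, vehicles on the ring) are assumed to
# come from an upstream NN; everything here is illustrative.
from dataclasses import dataclass

@dataclass
class Vehicle:
    distance_m: float    # distance to our entry point along the ring
    speed_mps: float
    slowing: bool        # does it appear to be giving way?

def may_enter_roundabout(entering_has_priority: bool,
                         ring_traffic: list[Vehicle]) -> bool:
    if entering_has_priority:
        return True      # the French default: entering traffic has right of way
    for v in ring_traffic:
        time_to_conflict = v.distance_m / max(v.speed_mps, 0.1)
        if time_to_conflict < 4.0 and not v.slowing:
            return False # yield: ring traffic reaches our entry point too soon
    # The hard part hides in `slowing`: deciding whether that driver is
    # actually giving way is the fuzzy judgment an NN might handle better.
    return True

ring = [Vehicle(distance_m=15.0, speed_mps=8.0, slowing=False)]
print(may_enter_roundabout(False, ring))   # False: conflict in under 2 seconds
```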
 
Anyone ever hear of "switching/spiking" neural nets? Are they of any current application? I think they abbreviate them as "SNN". A passing-interest Google search only shows one or two academic papers, all coming out of one uni in Ohio I believe... but it seemed like there was some buzz about them, in that they require less training or some such, but I have no idea. Anyone here heard of this?
 
Does anyone know if the AP game plan has changed significantly since Sterling Anderson left? I watched a couple of YouTube videos where he explained it in ways that made sense to me. Although that was a year and a half ago and he left Tesla, and a lot of shade has been thrown at him and AP since then, so I dunno, maybe that is all out of date now? As I understood it a lot of that stuff should be coming to fruition right about now, but it seems like it would fall short by quite a bit if it’s not a lot safer than people.
 
Anyone ever hear of "switching/spiking" neural nets? Are they of any current application? I think they abbreviate them as "SNN". A passing-interest Google search only shows one or two academic papers, all coming out of one uni in Ohio I believe... but it seemed like there was some buzz about them, in that they require less training or some such, but I have no idea. Anyone here heard of this?

Really? The last hype was about this ...