Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Musk: V10 wide release "hopefully end of august" after early access

There is literally no way putting a 1.5lb defeat on the steering wheel can go wrong.
That is such a bad advertisement for AP.

One of the things I don't like about AP/FSD is the way they gauge driver attention. This guy just attaches a cheating device and AP is fine - but I hold the wheel the whole time, AP doesn't register enough torque, and I have to keep changing the volume to keep AP happy.
 
That is such a bad advertisement for AP.

Yes. With a fairly decent long position, it makes me very unhappy to see this. But there is really nothing you can do. All I can hope, I suppose, is that the presence of such a device after an accident would actually be reported... but I doubt it will be, and even if it were, it would still be called Tesla's fault - and to some extent that would be a valid point (some better attention-monitoring method is needed eventually if L2 is going to persist long term... though people will probably always find a way to defeat it), which is why I don't like seeing it at all.

Anyway, I don't intend to derail the thread with this discussion. Interesting video in any case, and I like how it makes the spastic TACC behavior pretty obvious and unassailable (I'm sure there are plenty of similar videos, but I just don't make a habit of spending a bunch of time watching them). (I suppose for it to be truly unassailable I'd have to present a video of me driving the car from the same vantage point.)
 
Can you point to where in the video you notice this TACC behavior? Thanks.

It's easiest to see when he's in the carpool lane. You can start around 12 minutes in; at 12:44 you can see a little stab of regen - the speed barely changes at all, but it's stabby. This is a flat section of the 105 in the LA basin, so there really isn't much in the way of hills (and you can see the up/down on the video anyway - obviously there are dips, so there will be some variation due to those).

Anyway, you can watch for many minutes and see what is going on. Look at the power bar and the speed. If the speed is relatively constant, there should be no change in the bar on flat terrain. It's constantly pulsing. There will of course be variation - but just as important as the variation itself is its rate of change (a measure of the jerk). There's an instant of jerk on a deceleration event from 60 to 45 a little later in the video. Obviously, the more sudden the slowdown, the harder it is to eliminate jerk. Human anticipation is really good, though.

If I were driving this the bar would be much more smoothly varying. I guess I will have to take a GoPro video at some point...sigh...
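For anyone who wants to quantify the "stabbiness" rather than eyeball the power bar: jerk is just the rate of change of acceleration, so it can be estimated from a speed trace with two rounds of finite differences. The speed samples and interval below are made up for illustration; a real trace would come from vehicle logs or a video of the speedometer.

```python
def finite_diff(values, dt):
    """First finite difference: rate of change between consecutive samples."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

dt = 0.5  # seconds between samples (hypothetical)
speed_mps = [26.8, 26.8, 26.5, 26.9, 26.6, 26.8]  # ~60 mph with small pulses

accel = finite_diff(speed_mps, dt)  # m/s^2
jerk = finite_diff(accel, dt)       # m/s^3 -- what riders feel as "stabbiness"
```

Even though the speed only wiggles by ~0.5 mph here, the peak jerk works out to nearly 3 m/s^3, which is why small pulses are so noticeable.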
 
I wonder if they are trying to train an anti-jerk neural net, instead of just writing a traditional anti jerk smoothing/filtering algorithm like other manufacturers have on ICE vehicles.

Maybe, but it seems like the neural net would be better employed doing other things. I can see how a neural net might work here, but I don't really see the value it adds.
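For reference, the traditional smoothing algorithm the earlier post mentions doesn't need a neural net at all; a rate limiter on the acceleration command is a classic example. This is a hypothetical sketch - the tick rate, jerk limit, and command sequence are all invented.

```python
def jerk_limit(raw_accel_cmds, max_jerk, dt):
    """Pass raw acceleration commands through, but never let the output
    change faster than max_jerk (m/s^3) between ticks."""
    out = []
    current = raw_accel_cmds[0]
    for target in raw_accel_cmds:
        step = max_jerk * dt
        # Move toward the target, but no faster than the jerk limit allows.
        current += max(-step, min(step, target - current))
        out.append(current)
    return out

dt = 0.1  # 10 Hz control loop (hypothetical)
raw = [0.0, 2.0, 2.0, -1.5, -1.5, 0.0]  # stabby raw commands in m/s^2
smooth = jerk_limit(raw, max_jerk=2.0, dt=dt)
```

The trade-off is latency: the tighter the jerk limit, the longer the car takes to reach a commanded acceleration, which is presumably why tuning this is harder than it looks.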
 
It seems like there is a philosophical preference for using neural nets as much as possible, kind of like how the Star Wars prequels used CGI for the sake of CGI at times.

I have observed this behavior at my job, which involves data scientists and engineers who specialize in DNNs. For example, I once got into a discussion with one of them about sound and speech recognition tasks: whether it was better to pre-process the audio, for example with an FFT, before passing it through a NN, or whether it was better to just feed raw PCM into some sort of NN (I'm oversimplifying a bit).

I think the fact that success in image recognition didn't come until convolutional neural nets (CNNs) shows that real gains come when you structure the net, or pre- or post-process the data, in a way that makes sense, rather than just blindly using multi-layered, even deep, neural nets.
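As a sketch of the pre-processing side of that audio argument: an FFT turns a raw PCM frame into magnitude-spectrum features a net can digest more easily than raw samples. The sample rate, frame size, and 440 Hz test tone are arbitrary demo choices.

```python
import numpy as np

sample_rate = 16000
t = np.arange(512) / sample_rate            # one 512-sample frame (~32 ms)
pcm = np.sin(2 * np.pi * 440.0 * t)         # stand-in for microphone audio

spectrum = np.abs(np.fft.rfft(pcm))         # 257 magnitude bins, 0..8 kHz
peak_hz = np.argmax(spectrum) * sample_rate / 512

# A net would consume `spectrum` (or a log-mel version of it) instead of `pcm`,
# which bakes in the physics of frequency rather than making the net learn it.
```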
 
I wonder if they are trying to train an anti-jerk neural net, instead of just writing a traditional anti jerk smoothing/filtering algorithm like other manufacturers have on ICE vehicles.
Pretty sure the autopilot currently is just regular software algorithms based on inputs from the neural network. The NN doesn't actually do any of the driving.
 
Pretty sure the autopilot currently is just regular software algorithms based on inputs from the neural network. The NN doesn't actually do any of the driving.

I suspect the current NN is identifying objects of interest and determining bounding boxes for them. This is then used to create the model you see in the display. The way the objects (cars, bicycles, trucks, etc.) shown there bounce around is a strong indicator that they aren't doing any significant post-filtering of the output from the nets. There are well-known and popular approaches to doing image recognition with bounding boxes (SSD-VGG), and I doubt they've come up with something unique there.

I'm not sure whether or not they are using a NN to take the image-net outputs, radar, and other inputs and decide what to do w.r.t. driving. It may be, as you say, a "regular software algorithm", or it could be something like a decision tree. Only Tesla knows.
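To illustrate how cheap such post-filtering could be (no claim that Tesla does or should do exactly this): an exponential moving average over each tracked box's coordinates would already steady the display considerably. The per-frame detections and the alpha below are fabricated; boxes are (x, y, w, h).

```python
def smooth_boxes(frames, alpha=0.3):
    """Blend each new detection with the running estimate.
    Lower alpha = steadier display, but more lag behind real motion."""
    state = None
    smoothed = []
    for box in frames:
        if state is None:
            state = box
        else:
            state = tuple(alpha * n + (1 - alpha) * s
                          for n, s in zip(box, state))
        smoothed.append(state)
    return smoothed

# A stationary car whose raw detections jitter by a few pixels per frame:
raw = [(100, 50, 40, 30), (104, 48, 41, 30), (97, 52, 39, 31), (102, 49, 40, 30)]
out = smooth_boxes(raw)
```

The lag/steadiness trade-off is the same one a real tracker (e.g. a Kalman filter per object) would manage more carefully.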
 
Thanks for the video, very informative.

Pretty clear they haven't made TACC any smoother. :(

There is literally no way putting a 1.5lb defeat on the steering wheel can go wrong.

I took a look at the guy's channel, and in many of the videos he has the weight on the steering wheel. Among my criteria for deciding who gets the beta versions of Tesla software would be whether the person runs a YouTube channel mocking the safety feature that is key to allowing the company to survive the development stage!

Since everyone is entitled to their own opinion, after the comment about how TACC was not smooth I checked it out, because it's really smooth when I use it.

Well, in the video it's doing exactly what I expect it to do: maintain a strict distance to the car in front, even if the car in front is not the greatest driver in the world.

I don't expect AP to do what I do, which is often to anticipate how other cars are going to react and base a driving decision on that anticipation rather than on actual data.

For example, in LA traffic one of the things I have discovered is that it is very tiring to figure out whether the car in front of you is slowing temporarily, slowing "permanently" because traffic is backing up, or slowing only to change lanes, and at what rate. AP doesn't guess. It just keeps however many car lengths to the car in front of me. I friggen love it. That feature alone makes driving in traffic tolerable.

However, I can see how others would either not care, or find simply following the idiot ahead frustrating.
 
I don't expect AP to do what I do, which is often to anticipate how other cars are going to react and base a driving decision on that anticipation rather than on actual data.

Yes, and for simple TACC, for the most part, it's not necessary for the system to anticipate other drivers' actions. The computer can react very fast and doesn't get distracted, so even with a brute-force approach it should be able to slow or stop to avoid a collision within the capabilities of the vehicle. There are exceptions - for example, the vehicle in front of you doesn't react in time and collides with the vehicle in front of it. But radar can often "see" both vehicles, so even there it may be able to avoid the collision.
 
I suspect the current NN is identifying objects of interest and determining bounding boxes for them. This is then used to create the model you see in the display. The way the objects (cars, bicycles, trucks, etc.) shown there bounce around is a strong indicator that they aren't doing any significant post-filtering of the output from the nets. There are well-known and popular approaches to doing image recognition with bounding boxes (SSD-VGG), and I doubt they've come up with something unique there.

I'm not sure whether or not they are using a NN to take the image-net outputs, radar, and other inputs and decide what to do w.r.t. driving. It may be, as you say, a "regular software algorithm", or it could be something like a decision tree. Only Tesla knows.

I've been wondering about this ever since I first saw Tesla's Investor Autonomy Day presentation.
There seem to be quite a few hints about what they are most likely doing in some of Andrej Karpathy's Q&As and presentations. George Hotz's perspective (Comma.ai founder and first iPhone jailbreaker) is also interesting, since he's building a competing autopilot system using similar principles. (Interesting note: he almost got the contract to write AP2.0.)
George Hotz believes Tesla's lane change is always the same lane change and quite basic. He believes they (Karpathy) can/will do much better after the May 2019 autopilot team restructuring. He has some interesting insights on the level of effort full self-driving requires.

Although I also believe Tesla is primarily using the NN for perception, their engineers (Karpathy, and Stuart, now an ex-employee) have also mentioned that they use the NN to fine-tune parameters for their control algorithms.
I would assume that at a high level the "stack 1.0" control algorithm must be something similar to a classic control loop:
you have a target command and you convert it into an output command in order to achieve that target over time with a certain behavior, adapting based on the system's feedback. I could imagine, at a basic level, using something like a PID to control a lateral/longitudinal velocity vector, and maybe another control loop that outputs the target vector in order to trace your path.
Factors from the perception NN would influence this target vector. What I could imagine is perhaps fine-tuning the control algorithms over time using the NN.
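As a concrete (and entirely speculative) sketch of that classic control loop idea: a PID controller tracking a target speed against a crude point-mass vehicle model. The gains, time step, and vehicle model are all invented for illustration, not anything Tesla has disclosed.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and estimate the derivative of the error.
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Close the loop around a point-mass "vehicle": accel command -> speed.
pid = PID(kp=0.8, ki=0.2, kd=0.1)
dt, speed, target = 0.1, 20.0, 25.0  # 10 Hz loop, 20 m/s now, want 25 m/s
for _ in range(300):                 # 30 simulated seconds
    accel = pid.step(target - speed, dt)
    speed += accel * dt
```

The fine-tuning idea from the post would then amount to letting a NN adjust kp/ki/kd (or the target vector itself) based on context, while the loop structure stays conventional.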

To hear it from the horse's mouth, I think Elon's response at this timestamp explains the current NN utilisation.
 
I've been wondering about this ever since I first saw Tesla's Investor Autonomy Day presentation.
There seem to be quite a few hints about what they are most likely doing in some of Andrej Karpathy's Q&As and presentations. George Hotz's perspective (Comma.ai founder and first iPhone jailbreaker) is also interesting, since he's building a competing autopilot system using similar principles. (Interesting note: he almost got the contract to write AP2.0.)
George Hotz believes Tesla's lane change is always the same lane change and quite basic. He believes they (Karpathy) can/will do much better after the May 2019 autopilot team restructuring. He has some interesting insights on the level of effort full self-driving requires.

Although I also believe Tesla is primarily using the NN for perception, their engineers (Karpathy, and Stuart, now an ex-employee) have also mentioned that they use the NN to fine-tune parameters for their control algorithms.
I would assume that at a high level the "stack 1.0" control algorithm must be something similar to a classic control loop:
you have a target command and you convert it into an output command in order to achieve that target over time with a certain behavior, adapting based on the system's feedback. I could imagine, at a basic level, using something like a PID to control a lateral/longitudinal velocity vector, and maybe another control loop that outputs the target vector in order to trace your path.
Factors from the perception NN would influence this target vector. What I could imagine is perhaps fine-tuning the control algorithms over time using the NN.

To hear it from the horse's mouth, I think Elon's response at this timestamp explains the current NN utilisation.

I think I forgot the timestamp, it's 3:34:50.
 
I've been wondering about this ever since I first saw Tesla's Investor Autonomy Day presentation.
There seem to be quite a few hints about what they are most likely doing in some of Andrej Karpathy's Q&As and presentations. George Hotz's perspective (Comma.ai founder and first iPhone jailbreaker) is also interesting, since he's building a competing autopilot system using similar principles. (Interesting note: he almost got the contract to write AP2.0.)
George Hotz believes Tesla's lane change is always the same lane change and quite basic. He believes they (Karpathy) can/will do much better after the May 2019 autopilot team restructuring. He has some interesting insights on the level of effort full self-driving requires.

Although I also believe Tesla is primarily using the NN for perception, their engineers (Karpathy, and Stuart, now an ex-employee) have also mentioned that they use the NN to fine-tune parameters for their control algorithms.
I would assume that at a high level the "stack 1.0" control algorithm must be something similar to a classic control loop:
you have a target command and you convert it into an output command in order to achieve that target over time with a certain behavior, adapting based on the system's feedback. I could imagine, at a basic level, using something like a PID to control a lateral/longitudinal velocity vector, and maybe another control loop that outputs the target vector in order to trace your path.
Factors from the perception NN would influence this target vector. What I could imagine is perhaps fine-tuning the control algorithms over time using the NN.

To hear it from the horse's mouth, I think Elon's response at this timestamp [3:34:50] explains the current NN utilisation.

Thank you for posting that. I think what he says supports my assertion. It's important to distinguish what they are doing now from the more forward-looking stuff he mentions, and even from what's in the code branch they are using internally now. I suspect there is going to be a very noticeable change in AP behavior when the newer stuff gets released.

My thoughts on this are very similar to yours. A PID loop does make sense, and the way the car behaves around curves reminds me of a slightly underdamped PID loop. Perhaps we could measure the PID constants by subjecting the car to a step-function input :) It is fun to speculate.
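Taking the step-input joke semi-seriously: if the controller behaved like an underdamped second-order system, the overshoot of its step response would pin down the damping ratio. The natural frequency, damping ratio, and integration scheme below are invented for the demo; it just shows the relationship between overshoot and damping.

```python
import math

zeta, wn, dt = 0.4, 1.0, 0.01   # damping ratio, natural freq (rad/s), step size
y, ydot = 0.0, 0.0              # response and its rate, starting at rest
peak = 0.0
for _ in range(3000):           # 30 s of simulated time, unit step input r = 1
    yddot = wn**2 * (1.0 - y) - 2 * zeta * wn * ydot
    ydot += yddot * dt
    y += ydot * dt
    peak = max(peak, y)

overshoot = peak - 1.0
# Standard second-order result: overshoot = exp(-pi*zeta/sqrt(1-zeta^2)),
# which is ~25% for zeta = 0.4 -- "slightly underdamped".
predicted = math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))
```

So in principle, one clean speed-step maneuver and a log of the response would be enough to estimate how underdamped the tuning really is.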

It is all very exciting. Yes, it's frustrating to have to wait, but on the other hand the car is already great, and what other car out there provides such anticipatory delight?
 
This is promising. Lots of impressive stuff here and they definitely could address a lot of the things people have wanted. We’ll see!

It kind of sounds like it will be...almost great! ;)

Let's assume I don't have early access, but I watched all of the publicly available videos, and those who said the dancing cars would never go away are going to have to eat their hats. There is all kinds of under-the-hood stuff going on that makes it clear where this is going, even if these changes in and of themselves don't constitute the big jump that must be coming for navigation in the city.

Let's review:
  • Early AP2: cars didn't dance, but only cars going your direction were shown.
  • Then cars suddenly started showing up going in different directions, but they started to dance. They showed as either going the same direction as you or perpendicular to you (or spinning like a '60s disco).
  • Now cars don't dance, and it's detecting and showing cars going around corners.
Any guesses as to the importance of the car knowing what a corner is? /s