Neural Networks

Yeah, that’s what I was getting at. Also, radar pings apparently give some relative motion information?

Radar pings will give you highly accurate velocity (around ~0.1 m/s) but only somewhat accurate distance (around 0.5 m), with limited if any directional information. When you can integrate that data with vision, hopefully you get better combined distance AND direction accuracy (and for Tesla there's only forward radar, so you need to solve vision for both velocity and distance anyway).

Radar is a mixed bag. The closer an object's speed is to stationary, the harder it is to pick it out of the stationary background (one of the main problems with hitting stopped vehicles in the past, since vision wasn't developed enough to see them either). For automotive use, it typically has practically zero vertical resolution and limited horizontal resolution (though you can improve on both a bit by integrating multiple hits over time as the vehicle moves, with some fancy math). On the other hand, radar can see through solid objects (to an extent, depending on what they're made of), around them (by bouncing under vehicles, etc.), and through fog and rain. So it is very useful when combined with other sensors, and it is one of the main avenues to "super human" perception (since humans can't see in the dark, through fog, or around vehicles), but the resolution is limited. Seeing around and through what is in front of you is really radar's main claim to fame now, as vision gets better at doing everything else.
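To make the fusion idea concrete, here's a minimal single-object sketch of how radar's precise range-rate and vision's range estimate might be combined in a Kalman filter. All noise figures are illustrative, loosely echoing the accuracies quoted above; this is not anyone's actual tracker:

```python
import numpy as np

# State x = [range (m), range-rate (m/s)] for one tracked object ahead.
dt = 1.0 / 30.0                        # camera frame period
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
Q = np.diag([0.05, 0.05])              # process noise (hand-tuned)

x = np.array([50.0, -5.0])             # initial guess: 50 m ahead, closing 5 m/s
P = np.diag([10.0, 10.0])              # initial uncertainty

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Predict one frame forward.
x, P = F @ x, F @ P @ F.T + Q

# Radar hit: coarse range (~0.5 m), precise range-rate (~0.1 m/s).
x, P = kf_update(x, P, np.array([49.7, -5.2]), np.eye(2),
                 np.diag([0.5**2, 0.1**2]))

# Vision hit: range only, noisier at distance (illustrative 2 m sigma).
x, P = kf_update(x, P, np.array([49.9]), np.array([[1.0, 0.0]]),
                 np.array([[2.0**2]]))

print(f"fused range {x[0]:.2f} m, range-rate {x[1]:.2f} m/s")
```

The radar pins down velocity, vision (plus geometry over time) pins down direction and distance; the filter just weights each measurement by its stated noise.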
 
But drive-on-nav isn't the L3 highway system you are looking for. Most of the perception Tesla needs to drive an L3 highway system isn't anywhere near ready yet, for example detecting road debris, barriers, etc.

Just to clarify: you know from other conversations that I'm looking for an L3 system. But I was trying to limit this convo to just L2 with drive-on-nav. I certainly agree that things needed for L3, like debris detection, are not there yet.

I don't really see drive-on-nav as being gimmicky at all, especially with a system that doesn't require user acknowledgement.

To me that system would bring:

The ability to easily pass people without putting effort into getting over and then getting back. It would make people with the system less likely to hang out in the left lane, assuming the car followed the driving handbook and didn't act like a typical American driver.

The ability to adjust the aggressiveness of lane changes. Right now V9 is a little too cautious with lane changes, wanting lots of room, so I don't tend to use auto lane changes in dense, fast traffic.

A fundamental shift in how maps are used (from V8.X versions).

A capability that's required for L3. If it can't do lane changes without user confirmation because of the lack of rear radar, then it can't do L3.

I'm going to hold off on judging drive-on-nav until Tesla officially releases drive-on-nav without requiring user confirmation. We can't judge it based on alpha builds that hackers have been trying out. Usually builds like that are going to have a lot of dumb mistakes. Plus the one that had the most videos wasn't even in the US where the system was going to be released. So how is that footage fair?

As an AP2.5 owner it brings a lot of enhancements I'm looking for.

Now, there is still a lot of stuff that's missing. But what I find encouraging is the feeling that Tesla has traction on incremental improvements, especially now that they've gotten rid of the baggage that was the FSD option. They haven't closed the book on it, but they did stop selling the option on the website. I expect closure within a month or two in terms of letting us know what's going on with it.

The other thing I find encouraging is that Elon Musk is no longer going to be the chairman. I don't believe this go-it-alone approach is really working out that well, at least not in terms of time. Having a fresh perspective will likely help Tesla, and hopefully they can partner up with someone.
 

It's a gimmick if it doesn't perform like it's intended to perform: if it's constantly missing exits, doing stupid things like changing lanes to the left a mile before an exit, lane-changing onto the emergency shoulder, lane-changing into a barrier, taking wrong exits, or not handling interchange merges.

That is literally the definition of EAP, and if it can't do that consistently then it is a gimmick. Just so you know, I consider poor lane-keeping and adaptive cruise control a gimmick as well. Anything that does not consistently accomplish what it's supposed to do IS a gimmick.

Your Tesla will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near

Lastly, the person I quoted was an Early Access participant who had .41, the version that is supposed to be released soon according to Elon.
We will see, but then again I've never been wrong so...
 

I absolutely agree that it's a gimmick unless it works consistently.

I do expect it to be a bit gimmicky at first.

It follows the natural progression of becoming more useful as it gets better. My expectation is that it will be solid once they unleash drive-on-nav without user confirmation.

I'm not an early access participant so I don't know if it has the drive-on-nav without user confirmation.

It's not really EAP without that.

It's just a progression towards it.
 
What Karpathy said on the call:

"This upgrade allows us to not just run the current neural networks faster, but more importantly, it will allow us to deploy much larger, computationally more expensive networks to the fleet. The reason this is important is that, it is a common finding in the industry and that we see this as well, is that as you make the networks bigger by adding more neurons, the accuracy of all their predictions increases with the added capacity.

So in other words, we are currently at a place where we trained large neural networks that work very well, but we are not able to deploy them to the fleet due to computational constraints.
"

It's a transcript so it may not be exactly accurate. Anyway @jimmy_d enjoy this lil' morsel you detective.

Interesting he said "neurons" and not "weights"... was he just trying to translate for a general audience?
 
OK, you've convinced me to make one more point which I had in the back of my head.

I think until the how-to-drive problem is properly specified, which it has not been yet, we won't know what the neural network needs to be able to identify, so we won't know how to start working on the NN training.

Example already brought up: it needs to know when it is approaching a blind curve. This is a pretty complex 3D modeling problem -- not just about modeling the things you can see, but the implications of the empty space you can't see. Example from the other end: identifying that there's someone else on the other side of the blind curve requires listening for their horn. (I don't think Tesla's system even has the microphone pickup yet.) Another example: identifying the type of road surface (dirt, gravel, grass) is actually quite important for certain driving tasks.

Until the specification of the how-to-drive problem is nailed down, we don't actually know what the neural network needs to be able to identify. I fully expect that when the how-to-drive problem is being seriously worked on, it will end up requiring a basically-from-scratch retraining of the neural network because it'll have to pick out data which they didn't previously realize was necessary.

This is the problem with not understanding what problem you're working on.

(For this reason, I think Tesla's existing data collection isn't worth much of anything. Having the largest fleet of cars on the road to collect data with will, however, be worth quite a lot sometime in the future when they actually have a self-driving problem specification -- they'll be able to do that training stage much quicker than anyone who doesn't have millions of instrumented cars on the road. They will, however, be starting from scratch at that point.)

Thus the advantage of SW 2.0: you let the NN define itself. Given the way NN training works, and Tesla's hiring of 3D graphics / Unreal Engine type developers, they may be running n instances of the NN in a virtual world built with image data from the real world plus noise and distortion, at max frame rate. Training pass/fail would be based on crashes, near misses, and travel speed, all automatic (with minimal supervision). The raw data is needed since that is the input to the system.
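As a rough illustration of that automatic pass/fail idea, here is a hypothetical fitness function for scoring one simulated run. Every weight, threshold, and field name below is invented for the sake of the sketch; nothing here is Tesla's actual training setup.

```python
# Hypothetical fitness function for one simulated driving run: crashes and
# near misses are penalized automatically, progress is rewarded. All numbers
# are made up for illustration.

from dataclasses import dataclass

@dataclass
class RunResult:
    crashed: bool
    near_misses: int     # e.g. times the time-to-collision dropped below 1 s
    distance_m: float    # distance covered in the virtual world
    duration_s: float    # length of the run

def fitness(r: RunResult) -> float:
    if r.crashed:
        return -1000.0                       # hard fail dominates everything
    avg_speed = r.distance_m / max(r.duration_s, 1e-6)
    return avg_speed - 50.0 * r.near_misses  # reward progress, punish close calls

# A candidate network's score could then be averaged over n randomized worlds,
# with no human labeling in the loop.
runs = [RunResult(False, 1, 900.0, 60.0), RunResult(False, 0, 850.0, 60.0)]
print(sum(fitness(r) for r in runs) / len(runs))
```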

Interesting case: a 4-way stop.
Rule-based: first to arrive goes first; on simultaneous arrival, the car on the right goes first.
Real world: 4 cars arrive at the same time. Or two cars arrive simultaneously, but the car on the right doesn't go.

Sometimes drivers need to ignore the written law or rule in order to drive at all, so a fixed-logic system is too fragile.
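A toy version of why the fixed rule is fragile, with a hypothetical timeout as the kind of escape hatch a rule-based system ends up needing (all names and numbers invented for illustration):

```python
# With the pure "yield to the car on your right" rule, four simultaneous
# arrivals deadlock forever: everyone is waiting on someone. Real drivers
# break the rule after a beat; the timeout below mimics that.

from typing import Optional

def may_proceed(my_arrival: float, right_arrival: Optional[float],
                waited_s: float, patience_s: float = 4.0) -> bool:
    if right_arrival is None:
        return True                  # nobody on my right
    if right_arrival > my_arrival:
        return True                  # I got here first
    # The written rule says keep yielding. If the right-hand car never moves,
    # a purely rule-based system sits here forever; humans creep out instead.
    return waited_s > patience_s

print(may_proceed(0.0, 0.0, waited_s=1.0))  # False: rule-bound deadlock
print(may_proceed(0.0, 0.0, waited_s=5.0))  # True: the timeout breaks it
```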
 
On the Q3 earnings call, Andrej Karpathy confirmed they are testing a much larger NN that performs with higher accuracy, but the AP 2.x hardware isn't powerful enough to run it. Seems like more evidence that this is what @jimmy_d found.

If everything jimmy said is right, that means it's an inefficient network, especially if it runs at 30 fps and near 100% utilization. It would mean stronger hardware is needed. I find that hard to believe.
 

I must be misreading: you say the need for more HW means the network is inefficient. That means the required network is smaller, and it also means you already know the network size needed for FSD. That indicates you have a completed, functional, tested, and proven FSD system. Yet you are not selling it publicly... o_O
(did you factor in 8 cameras at 30 fps each with the NN using two frames at a time?)
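For scale, a back-of-envelope version of that math; only the camera count, frame rate, and two-frame input come from the posts above, while the per-pass network cost is an assumed placeholder:

```python
# Rough compute requirement for 8 cameras at 30 fps, one network pass per
# camera per frame. The MAC count per pass is ASSUMED, purely to show scale.

cameras = 8
fps = 30
macs_per_pass = 5e9                 # assumed ~5 GMACs per (two-frame) pass

passes_per_s = cameras * fps
ops_per_s = passes_per_s * macs_per_pass * 2   # 1 MAC = 2 ops
print(f"{ops_per_s / 1e12:.1f} TOPS at 100% utilization")  # 2.4 with these numbers
```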
 
@mongo: Correct me if I am wrong, but I think Blader is comparing AP2 network efficiency to Mobileye's networks, which appear much more efficient in capability per TOPS; he is not making any assertions about what is ultimately required for FSD. The comparison is useful, but worth taking with a grain of salt, considering the Mobileye data basically comes from demos and EyeQ3 performance vs. Jimmy's black-box analysis of production networks and observation of current AP2 performance. I think the latter comparisons (what EyeQ3 can do with 0.25 TOPS vs. AP2's 10-something TOPS in production) are interesting. The comparison between Mobileye demos and Jimmy's analysis of the neural net is much more difficult to draw accurate conclusions from, but still worth speculation/discussion.
 
Interesting he said "neurons" and not "weights"... was he just trying to translate for a general audience?
unless it has become a low-level (and growing) sentient consciousness as an emergent phenomenon of the complexity of the connections of the neurons... (we tentatively welcome our robot overlords) (ask Alexa "are you sentient?" and notice the question is deflected)
 
I guess we'll find out at the end of Q1 2019, which, after adjusting for Tesla time, means end of Q3 2019 or early Q4 2019.

Great to hear time estimates from people other than Elon

Peter Bannon

Hi, this is Pete Bannon. The Hardware 3 design is continuing to move along. Over the last quarter, we've completed qualification of the silicon and qualification of the board. We started the manufacturing line and qualification of the manufacturing line. We've been validating the provisioning flows in the factory. We built test versions of Model S, X and 3 in the factory to validate the fit and finish of the parts and all the provisioning flows.

So we still have a lot of work to do. And the team is doing a great job, and we're still on track to have it ready to go by the end of Q1.

Elon Musk

Great. And that will be roughly a 1000% increase in processing capability compared to the current hardware. And so, it's obviously a giant improvement despite costing about the same. Cost, volume and power consumption are approximately the same as the current hardware, but it's a ten-fold improvement in frames per second.

Peter Bannon

That's right.

Elon Musk

Yeah, and improved redundancy as well. But very importantly - it's very important to emphasize that the only thing that needs to change between a car that's produced today and a car, let's say, produced in the second quarter of next year is swapping out the Autopilot computer. And this is a simple change that takes less than half an hour in service. And so, anyone will be able to upgrade their car to full self-driving capability with a simple service visit.

So we expect all cars with a Hardware 2 sensor suite, basically anything made in the last roughly two years will be upgradeable to full self-driving.
 

Yes, I did factor in the 8 cameras. I'm simply doing comparisons. I also just do the math versus the hyperbolic hype stuff that Elon generates.

"the world's most advanced computer specifically for autonomous operation."

Then you have ARK Invest simply regurgitating and saying Tesla is "three years ahead on autonomous hardware" because of the new chip.

So I pay close attention to statements like "Cost, volume and power consumption are approximately the same as the current hardware, but it's a ten-fold improvement in frames per second."

That puts the new chip at around 125 watts. Multiple Mobileye EyeQ5s, for example, give you 100 TFLOPs with only 40 watts of consumption. I could go on and on, but I digress. What's most likely the case is that the HW3 chip isn't 100 TFLOPs but probably ~25 TFLOPs (for which the EyeQ5 needs only 10 watts). If that's the case, then it has to be one of the worst-designed ASICs in terms of TDP. There is a reason why Tesla hasn't said anything about the TFLOPs.
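Taking those figures at face value, the efficiency gap being claimed is simple arithmetic (and, as pointed out later in the thread, these are almost certainly TOPS rather than TFLOPS, but the ratio is the same either way):

```python
# Perf-per-watt comparison using only the numbers claimed in the post above.

def tops_per_watt(tops: float, watts: float) -> float:
    return tops / watts

print(tops_per_watt(100, 40))   # four EyeQ5s, per the post:        2.5 TOPS/W
print(tops_per_watt(25, 125))   # guessed HW3 figure, per the post: 0.2 TOPS/W
```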
 

I'm not sure it's fair at this point to say one is better / not as good as the other as we aren't sure what process node Tesla's chip will be on (14 nm, 12 nm, or 7 nm) and how they architected it (high clock speed or very wide with lower clocks). We also don't know what Elon meant by redundancy (is it just one chip per board or two?)

Since Jim Keller was largely responsible for the design, it is safe to assume that it is an exceptionally well designed part that hit all of its design parameters and should be relatively easy to manufacture. However, given that Tesla will require a foundry to fab it, volume and die sizes will play large roles in the overall costs associated with this. I'm assuming that Tesla is going with TSMC. I'd expect Mobileye to go with Intel's own fabs (but maybe TSMC also).
 

We sure can. EyeQ4 is currently in production in multiple cars and has 2.5 TFLOPS while running on 3 watts. It processes a far wider variety of NNs than anything Tesla has, with far better accuracy, on 8+ cameras at 36 frames per second.
 

If I'm reading correctly, you are making an assumption about the chip's TFLOPs, carrying over the estimated power consumption, and then saying it's inefficient. Is that right?

If this chip is custom designed for Tesla's NN, then would TFLOPs even be a relevant metric? (Does it need floating point?)
A line shifter for coefficients paired with multiple MAC units would parallelize the crap out of a CNN, but be useless outside of DSP filters...
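For what that might look like, here's a toy model of such a datapath: a shift register streams pixels past fixed int8 coefficients while integer MAC units accumulate, with no floating point anywhere. Purely illustrative, not any real chip's design:

```python
import numpy as np

def conv1d_mac(pixels: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """int8 pixels * int8 coeffs, accumulated in int32 -- MACs, not FLOPs."""
    k = len(coeffs)
    window = np.zeros(k, dtype=np.int8)      # the "line shifter"
    out = []
    for p in pixels:
        window = np.roll(window, 1)
        window[0] = p                        # shift a new pixel in
        acc = np.int32(0)
        for w, c in zip(window, coeffs):     # k MACs, done in parallel in HW
            acc += np.int32(w) * np.int32(c)
        out.append(acc)
    return np.array(out[k - 1:])             # drop the fill-up phase

pixels = np.array([10, 20, 30, 40, 50], dtype=np.int8)
coeffs = np.array([1, 2, 1], dtype=np.int8)  # e.g. a small blur kernel
print(conv1d_mac(pixels, coeffs))            # [ 80 120 160]
```

Great for streaming a CNN layer, useless for general-purpose work, which is exactly the trade-off being described.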
 
TOPS are not TFLOPS ;)

Trillions of (usually 8-bit integer for NN) Operations Per Second vs Trillions of Floating-point Operations Per Second

I'm sure he meant TOPS not TFLOPS and just got hung up on the more common term. OR should have meant it ;)
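For anyone wondering why the distinction matters: inference chips count cheap 8-bit integer multiply-accumulates, so float weights get quantized down to int8 first. A generic sketch of that mapping (not any particular chip's scheme):

```python
import numpy as np

w = np.array([0.12, -0.53, 0.91], dtype=np.float32)  # float32 weights
scale = np.abs(w).max() / 127.0                      # map max magnitude to 127
w_q = np.round(w / scale).astype(np.int8)            # int8 weights for the MACs

print(w_q)                             # [ 17 -74 127]
print(w_q.astype(np.float32) * scale)  # dequantized: close to the originals
```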
 