
HW2.5 capabilities

Any thoughts as to why we don't get vehicle type identification on the screen yet (e.g. trucks, motorcycles, etc.)? My theory is that there was some patented IP with Mobileye that Tesla can't reproduce, but I haven't seen it spoken about elsewhere other than it "still isn't there".

I don't know why they aren't showing it. They have the images in the firmware; perhaps it's a confidence thing, perhaps it's a Mobileye thing. They clearly are seeing the objects, but I'm not sure why they aren't giving it to us. I suspect (pure speculation) that v9.0 will have a brand new interface and a more holistic view around the car with recognized objects, but I haven't seen anything in the code to justify my speculation yet.

I just can't see them doing EAP auto lane change if they don't show you that they know a car is rapidly approaching on your left on the display; nobody would ever trust it.

I think the first EAP feature we might see with the current implementation is freeway-to-freeway transitions, but take that with a large grain of salt, because even with .42 my car can't/won't take a true exit ramp without causing a near-death experience.
 
Another site had an article today about Elon saying HW updates this year and Level 5... From what I can tell it's a lot of extrapolation from a few tweets from 10/21/2017.

Elon Musk: Teslas will already know where we’re going

Tesla already knows where you're going if you keep your calendar updated. It also "knows" your patterns; for example, when I get in the car sometimes the nav will show me a traffic incident on my normal route home. It has clearly learned that around 4-5 pm each day I leave work for home. So they are definitely tracking your "common" to/from destinations.
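As a toy illustration of that kind of pattern learning (certainly not Tesla's actual algorithm), a naive frequency count over past trips is enough to capture the "4-5 pm means home" habit:

```python
# Toy sketch: predict a likely destination from past (hour, destination)
# pairs by simple frequency counting. Illustrative only.
from collections import Counter

trip_log = [(17, "home"), (8, "work"), (16, "home"), (17, "home"), (8, "work")]

def predict_destination(hour, log, window=1):
    """Return the most common past destination within +/- window hours."""
    nearby = [dest for h, dest in log if abs(h - hour) <= window]
    return Counter(nearby).most_common(1)[0][0] if nearby else None

# Leaving work around 4 pm? Pre-fetch traffic for the usual route home.
print(predict_destination(16, trip_log))  # -> "home"
```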
 

I agree with that - was more concerned with HW updates as it relates to this thread (and the fact that I ordered on Sunday....)
 
I think the first EAP feature we might see with the current implementation is freeway-to-freeway transitions, but take that with a large grain of salt...


Wouldn't freeway-to-freeway transitions require automatic lane changing?

Also, yeah, I imagine that for auto lane changing to work it will need to first activate the blinker, wait a few seconds for the human driver to double-check, and then proceed with the lane change.
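Something like this simple confirm-window flow is what I'd imagine (entirely hypothetical sketch - the CarInterface here is a made-up stand-in, not any real Tesla API):

```python
# Hypothetical sketch of "blinker first, give the driver a veto window".
import time

CONFIRM_WINDOW_S = 3.0  # the "few seconds" for the human to double-check

class CarInterface:
    """Made-up stand-in for the vehicle's control/state API."""
    def activate_blinker(self, direction): print(f"blinker {direction} on")
    def deactivate_blinker(self): print("blinker off")
    def driver_cancelled(self): return False           # stalk tap, brake, etc.
    def adjacent_lane_clear(self, direction): return True
    def execute_lane_change(self, direction): print(f"changing lanes {direction}")

def attempt_lane_change(car, direction):
    car.activate_blinker(direction)                    # announce intent first
    deadline = time.monotonic() + CONFIRM_WINDOW_S
    while time.monotonic() < deadline:
        # Abort if the driver objects or a car is closing fast alongside.
        if car.driver_cancelled() or not car.adjacent_lane_clear(direction):
            car.deactivate_blinker()
            return False
        time.sleep(0.05)
    car.execute_lane_change(direction)                 # no objection: proceed
    car.deactivate_blinker()
    return True

attempt_lane_change(CarInterface(), "left")
```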
 
HD mapping has to be mandatory for auto lane change and slip lane/highway exits. On my local freeway, for example, my AP always shows the emergency stopping lanes on either side of the freeway as extra lanes. I really can't have the car deciding for itself that those lanes are perfectly fine to drive in.
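Conceptually the map would act as a veto on what vision proposes - a made-up sketch of the idea:

```python
# Made-up sketch: vision proposes lanes, the HD map vetoes non-drivable ones.
DETECTED_LANES = ["shoulder_left", "lane_1", "lane_2", "shoulder_right"]
HD_MAP_DRIVABLE = {"lane_1", "lane_2"}  # hypothetical map annotation

def lane_change_targets(current_lane, detected, drivable):
    """Only offer adjacent lanes that the map says are real travel lanes."""
    i = detected.index(current_lane)
    neighbors = detected[max(i - 1, 0):i] + detected[i + 1:i + 2]
    return [lane for lane in neighbors if lane in drivable]

# The emergency stopping lane is detected but never offered as a target.
print(lane_change_targets("lane_1", DETECTED_LANES, HD_MAP_DRIVABLE))  # ['lane_2']
```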
 
1 - learned to take completely unmarked sharp left-hand rural road curves smoothly over the course of several releases. AP1 would have ended in the ditch - I know because I rented one for a month while waiting for my own car to arrive.
If roads have faded lines or are unmarked, I've seen/read in multiple places that NN techniques can still pick that up because of the "wear" patterns and colorization of the road.

Does this sound familiar / seem reasonable to others? For example, to help navigate an intersection or go around a corner better.

A quick example of patterns/colorization of road lanes -- i.e. ignore the paint and look at the grey shades.
[image: example of road-surface grey shades]
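For what it's worth, here's a toy illustration of the "ignore paint, look at grey shades" idea with plain OpenCV (assumes a local road.jpg frame; a real NN would learn these cues implicitly rather than rely on hand-written filters):

```python
# Toy illustration: surface "wear" cues in grayscale intensity, no paint needed.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")                   # hypothetical dashcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # ignore paint/color
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Polished wheel tracks are brighter/smoother than the surrounding asphalt,
# so intensity gradients still outline the lane even without markings.
grad_x = cv2.Sobel(blur, cv2.CV_32F, 1, 0, ksize=3)
grad_y = cv2.Sobel(blur, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(grad_x, grad_y)

# Keep only the strongest edges as candidate lane-wear boundaries.
edges = (magnitude > np.percentile(magnitude, 95)).astype(np.uint8) * 255
cv2.imwrite("wear_edges.jpg", edges)
```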
 
1 - learned to take completely unmarked sharp left-hand rural road curves smoothly over the course of several releases. AP1 would have ended in the ditch - I know because I rented one for a month while waiting for my own car to arrive.

Some of those improvements are happening on the AP1 side, too. I've been very impressed with how well it seems to guess at the road when it's telling me it doesn't have any lane lines or cars to follow. It's not perfect yet, but it is much, much better than it was at the beginning of this year.
 
Any thoughts as to why we don't get vehicle type identification on the screen yet (e.g. trucks, motorcycles, etc.)? My theory is that there was some patented IP with Mobileye that Tesla can't reproduce, but I haven't seen it spoken about elsewhere other than it "still isn't there".
My thoughts™:

@verygreen has already proven that the graphical representations of the truck, the bike, the red man etc. are all there in CID/IC (software) on AP2. So no problem there.

[attachment: IC-car.png]


Secondly, AP2 seems to react appropriately to whatever vehicle you encounter - be it a Kenworth, a Mini or a Harley. Or a humanoid, for that matter. So the NN must've been trained to the point that it can classify each of those types, and give the longitudinal output necessary. (I mean if it wasn't, I guess we'd know by now...) So check that.

Third - and I'm on thin ice now - Mobileye was AFAIK only responsible for the classifying stuff and passing that data on to Autopilot. I.e. to the magic that lets your power steering ECU, your iBooster and your inverter do their thing. If that's true, then AP2NN has merely replaced that manual labeling effort that Mobileye's minions have been sweating over for 10 years.

In that case, nothing, zero, zilch should prevent Tesla from displaying pretty little pictures of what's in front of you. (In case you have to double check.)
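If the class already comes out of the NN, showing it really is just a lookup. A toy sketch (every icon filename except IC-car.png is invented here):

```python
# Toy sketch: map an NN detection class straight to an IC sprite.
CLASS_TO_ICON = {
    "car": "IC-car.png",             # the one filename we've actually seen
    "truck": "IC-truck.png",         # invented for illustration
    "motorcycle": "IC-bike.png",     # invented for illustration
    "pedestrian": "IC-red-man.png",  # invented for illustration
}

def icon_for(detection_class):
    """Fall back to the generic car icon for unknown classes."""
    return CLASS_TO_ICON.get(detection_class, "IC-car.png")

print(icon_for("truck"))  # -> "IC-truck.png"
```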

In conclusion, I attribute the lack of this very important capability to one of:

(A) [attachment: lazy.jpg]

(B) [attachment: busy.jpg]

or

(C) [attachment: secret.jpg]
 
HD mapping has to be mandatory for auto lane change and slip lane/highway exits. On my local freeway, for example, my AP always shows the emergency stopping lanes on either side of the freeway as extra lanes. I really can't have the car deciding for itself that those lanes are perfectly fine to drive in.

Have you ever actually tried (where it is safe to do so)? I'm saying this because even when my AP1 car "sees" another lane, I absolutely cannot get it to execute a lane change across a solid white line. In my jurisdiction the solid white line indicates that changing lanes is discouraged in that particular stretch, in addition to being used as the roadway edge.
 
Third - and I'm on thin ice now - Mobileye was AFAIK only responsible for the classifying stuff and passing that data on to Autopilot. If that's true, then AP2NN has merely replaced that manual labeling effort that Mobileye's minions have been sweating over for 10 years.

In that case, nothing, zero, zilch should prevent Tesla from displaying pretty little pictures of what's in front of you.


But if Mobileye owns a patent on the classification of vehicle types for autopilot functionality, then it's a no-go. My guess is this is patent-related.
 
If Mobileye somehow managed to patent the idea of computer classification of vehicles, I quit.

I quit.
One filed just 60 days ago...

Patent Images

I found that while eating at Chick-fil-A on my lunch hour. I bet there are thousands of patents out there that are vaguely similar, and Mobileye owns many. I'm guessing they threatened legal protection of their patents and Tesla is limited in what they can do - hence EAP as it exists today.
 
Regarding the mapping speculation and past examples of AP1 improving and doing something or other in CA - I don't live in CA so I don't really know about that.
Also, AP1 controls are different from AP2's, and I only have AP2.
On AP2 the mapping processes are there, and they might even be fetching some data from the mothership without storing anything on disk. I have no idea, but I could probably gather some data usage statistics while driving somewhere (since there's nothing stored locally on the ape, it's bound to get refetched even if I stay in the same general area, right?).

Additionally, it's possible the CID fetches some sort of maps (it's bound to do it for AP1 if that was the case), so this might be tied into the whole autopilot too? This is something I really did not look into at all. Possibly part of that whole speed limits database (which I also have not looked into).
I guess I should.

I tried driving with just the ape disconnected from the internet (easy to do) and I can't say I saw much change in behavior. Rain certainly makes it feel less stable, with more bouncing within the lane, so visual input certainly seems to be at least somewhat important.
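For the data-usage idea, something along these lines should work on a Linux box like the ape (the interface name is a guess on my part):

```python
# Sketch: measure bytes in/out over a drive by sampling /proc/net/dev.
import time

def rx_tx_bytes(iface="eth0"):  # "eth0" is an assumed interface name
    """Read cumulative RX/TX byte counters for one interface."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

rx0, tx0 = rx_tx_bytes()
time.sleep(60)  # drive around for a minute
rx1, tx1 = rx_tx_bytes()
print(f"received {rx1 - rx0} B, sent {tx1 - tx0} B")
```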

@verygreen has already proven that the graphical representations of the truck, the bike, the red man etc. are all there in CID/IC (software) on AP2. So no problem there.
On the CID the firmware is the same no matter if it's AP0, AP1, or AP2, so they must have those icons; it does not mean anything about AP2 at all, I suspect.
 
HD mapping has to be mandatory for auto lane change and slip lane/highway exits. On my local freeway, for example, my AP always shows the emergency stopping lanes on either side of the freeway as extra lanes. I really can't have the car deciding for itself that those lanes are perfectly fine to drive in.

Agreed

If roads have faded lines or are unmarked, I've seen/read in multiple places that NN techniques can still pick that up because of the "wear" patterns and colorization of the road.

Does this sound familiar / seem reasonable to others? For example, to help navigate an intersection or go around a corner better.

A quick example of patterns/colorization of road lanes -- i.e. ignore the paint and look at the grey shades.

This is correct - a CNN can pick up all kinds of subtle things.

Secondly, AP2 seems to react appropriately to whatever vehicle you encounter - be it a Kenworth, a Mini or a Harley. Or a humanoid, for that matter.

In that case, nothing, zero, zilch should prevent Tesla from displaying pretty little pictures of what's in front of you.

Tesla could just be lazy and not have removed the images.

"AP2 seems to react appropriately to whatever vehicle you encounter" Really? How does it act differently? Honest question.
 
Language barrier. What I meant was the NN obviously understands it's a truck and not a plastic bag, right? And so it must w.r.t. a car, a bike or a person. Am I far off here?


Ah, no, I think the AP2 NN is simply trained to think: is this a vehicle I shouldn't hit, or is this not a vehicle I shouldn't hit?

Or even just: is this an object I shouldn't hit, or is this not an object I shouldn't hit?


So basically in the training set, all motorcycles, trucks, cars, etc. are marked as the same class. This makes it easier to train and gives higher performance, but the downside is that it cannot distinguish different types.
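A toy sketch of that labeling choice (illustrative only, not Tesla's training pipeline):

```python
# Toy sketch: collapse all vehicle types into one class vs. keep them distinct.
MULTI_CLASS = {"car": 0, "truck": 1, "motorcycle": 2, "pedestrian": 3}
SINGLE_CLASS = {"car": 0, "truck": 0, "motorcycle": 0, "pedestrian": 0}

def relabel(annotations, mapping):
    """Map per-object type strings to integer class ids for training."""
    return [(box, mapping[obj_type]) for box, obj_type in annotations]

# One class = more examples per class and an easier "hit / don't hit"
# boundary, but the display can no longer tell a truck from a bike.
frame = [((120, 80, 40, 30), "truck"), ((300, 90, 10, 20), "motorcycle")]
print(relabel(frame, SINGLE_CLASS))  # both become class 0
print(relabel(frame, MULTI_CLASS))   # classes 1 and 2
```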


Update: Then again... from jimmy_d's analysis of the NN, it actually seems the network is just outputting 16 image filters or something like that. In that case, I imagine they somehow extract objects from these filters using non-AI methods, and cannot determine what kind of object they are.
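If that reading is right, the extraction step could look conceptually like this (pure speculation, using classic non-AI connected-component labeling on a stand-in activation map):

```python
# Speculative sketch: threshold one NN output map, pull out blobs with
# classic connected-component labeling -- no idea what kind of object each is.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
feature_map = rng.random((96, 160))      # stand-in for one of the ~16 outputs

mask = feature_map > 0.98                # keep only strong activations
labels, n_objects = ndimage.label(mask)  # group neighboring pixels into blobs
boxes = ndimage.find_objects(labels)     # bounding slices for each blob

# Each blob becomes "an object to not hit", with no class attached.
for i, box in enumerate(boxes, start=1):
    ys, xs = box
    print(f"object {i}: rows {ys.start}-{ys.stop}, cols {xs.start}-{xs.stop}")
```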