Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

HW2.5 capabilities

This really is a good question. @verygreen recently pointed to a database where, if I understood him correctly, he thought no actual whitelisting of ghost-braking targets was going on, but a database to that effect did exist? Even though this concept was already introduced in the summer of 2016, after the Brown incident, in the Tesla blog... Same with HD mapping for lane keeping, discussed many times by Tesla. However, the current consensus seems to be there may not be any of that going on yet... right? Are these just Tesla's medium-term goals, as someone put it, and not really things the code is doing at this time?

Musk straight up said on tape and on record way back in 2015 that HD mapping was the only way they got the 405 freeway in LA to work with AP1.

Look - I have lived in Europe and traveled there many times. If you have not been to Los Angeles you can't imagine how crappy the freeways are vs Europe.

It is no anecdote that AP1 was UNUSABLE driving west into the setting sun on the 10 freeway in many areas where the glare and the worn out concrete and lane markings made lanes invisible. I had to shut it off because it would wander wildly.

But it got better - and better - and better. Now it is rock solid, like on rails, on the same roads.

Tesla did not develop the object recognition and lane marking recognition - that came from Mobileye.

Somehow they got AP1, over time, to track those lanes. You tell me how they did it if it was not the mapping they explicitly told the world they were using.

This, btw, is why I think MBZ's claims that they made early versions of Drive Pilot vague and unstable ON PURPOSE are a load of malarkey. I think MBZ, Audi, and Volvo simply didn't have HD mapping and didn't have any fleet learning the last few years. They were stuck with whatever canned lane recognition they had purchased or built.

Volvo's CEO had an incident in LA a couple years back where he lost his temper in front of a reporter doing a demo drive of Pilot Assist. It kept losing the lanes and embarrassing the executive (Tesla was doing great at that time). He shouted out "Why don't you paint your bloody roads?"
 
And it's not my imagination that way out in the middle of nowhere, where I am now, my AP2 car has:

1 - learned to take completely unmarked sharp left hand rural road curves smoothly over the course of several releases. AP1 would have ended in the ditch - I know because I rented one for a month while waiting for my own car to arrive.

2 - Learned, more and more reliably over the last few builds, not to dive into oncoming traffic at a particular break in a divided country road. Three builds ago it ALWAYS dove for oncoming traffic. Now it never does - ever.

I am the only Tesla for miles and miles. If maps were built my car did it. If reinforcement learning happened my car's vids helped it.

3 - All of a sudden .40 made the freeway here - also no Teslas but mine for miles - like a train on rails during sections as bad as anything in Los Angeles.

Number 3 points to something other than HD maps perhaps - unless my car alone, traveling this freeway over the months, has built lane maps of this stretch of I-39.
 

OK, good data point.

If this applies to AP2 too, it would certainly explain why it behaves differently depending on the road. It could explain vast differences based on geographical location... The question remains: where does it do this, and where do we just imagine it? :) That would be interesting to find out...

@verygreen any pointers in the code where/if AP2 uses mapping to help auto-steering today?
 

I've been very impressed with the ability of AP1 in recent firmware updates to handle poor lane markings and intersections on dry pavement.

If they've been using HD maps to do that, though, why was it so much worse in the pouring rain on Sunday?

I was on a very well traveled interstate connector (I-287), and with the haze cars were kicking up blurring the lane markings, it was potentially dangerous - jerking the wheel sharply at longer intervals - to the point that I decided that, with the limited traction, I was better off without it.

That makes perfect sense if it was having even more trouble seeing the lines than I was, but little sense if it had HD map tiles to guide it.
 
I'd drive through that. Looks like a fun tunnel.

As a corollary, does AP1 get fooled by fake signs? Like if I painted a 35 mph speed limit sign? Would HW2 be easily fooled by a painted stop sign? How do you get a NN to separate a realistic fake from the real thing?

This brings up an interesting problem for all autonomous systems from all manufacturers. As autonomous cars start to hit the road in greater numbers, pranksters will start messing with road signs to confuse the AIs. Initially it will be kids pulling pranks, but consider computer viruses: in the beginning they were mostly nuisance things, and today they have become a major underground industry.

Some hackers have shown they could override critical driving systems on some cars remotely (mostly Jeeps), but what if hackers start overriding autonomous cars? If cars have gotten to the point where a driver can't take over (no driver controls), then the hacker can do whatever they want with the car.

There was a show a couple of years back that took place in the near future where most cars were electric. They used some Teslas and BMW i3s. I think it was called Extant or something like that. I watched the first season; it was one of those shows that was barely holding together, and I quit after it jumped the shark. Anyway, a major character was killed off when the autonomous taxi he was riding in was hacked, stopped on train tracks, and then locked its doors.

I figure highway safety standards being what they are, cars today need to be escapable by mechanical means, and I doubt that would go away with autonomous cars, but there are lots of other ways to kill people with a car. Drive the car full speed on a highway and steer it into a bridge support for example.

All it takes is a hack to be found in one brand of car's autonomous systems to cause a lot of problems.

But pranksters can also cause havoc by putting a fake stop sign in the middle of a block in the middle of the night. And how will autonomous cars react to optical illusion art like that fake archway? There are some street artists who make quite realistic looking sidewalk chalk art. The optical system of the car sees the world very differently than we do, but there are some very technically savvy artists out there who could figure out what would mess with autonomous sensors and create some kind of art that wouldn't fool a human, but would tie an autonomous system into knots.

Eventually the only way around the hackers may be to map the drivable world down to the centimeter and all cars need to access the database. It would probably require governments or some kind of industry body to do the work and everyone would have access to the same data. I believe Elon has also talked about putting up a fleet of new GPS satellites that would be much more accurate than current commercial GPS systems and cars would know exactly where they are in the lane from GPS data.

The coming thing is 3D NAND SSD drives. Currently the largest available is 2 TB, but 8-10TB will be along soon. Cars might all need huge drives to hold all the data by the time they are done.

I've worked in real-time and embedded software development most of my career. Full autonomous driving is a major undertaking - quite possibly the most complex mass-market software project ever attempted. Getting to a system that works well 80% of the time is already giving Tesla fits, but it's the last 20% that's going to be the real challenge: dealing with fake signs, street art, street signs that are painted over, and other drivers doing unexpected things.

And to be truly autonomous, it needs redundancy, and needs to be right 100% of the time. In aerospace, critical decision-making systems have three different processors running three different programs, all written to the same specification (by different teams). When a decision needs to be made, all three processors crunch the problem; all three should come to the same conclusion, but the best two out of three wins. That's fantastically expensive to do, not just from a hardware point of view - you also need three teams of programmers and a ton of testing before going live. When I was at Boeing, I worked in a lab that did that kind of testing. My organization just made the test equipment and test software suites; other people did the actual testing. For the 777 program we went from a small part of a building in Renton to a massive building at Boeing Field, and we filled the new building completely. It was 1/4 mile from my desk to the lab, and I never left the building.
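The two-out-of-three voting scheme described above can be sketched in a few lines. This is a hypothetical illustration, not real avionics code; the channel functions stand in for three independently written implementations of the same specification.

```python
from collections import Counter

def vote(results):
    """Return the majority answer from three redundant channels.

    In a real system, a disagreement with no majority would trigger a
    failover to a safe state rather than raising an exception.
    """
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: channels disagree")
    return winner

# Stand-ins for three programs written by different teams to one spec.
def channel_a(x): return x * 2
def channel_b(x): return x + x
def channel_c(x): return x << 1  # integer-only variant

decision = vote([channel_a(21), channel_b(21), channel_c(21)])
print(decision)  # 42
```

The point of the three differing implementations is that a bug in any one team's code gets outvoted by the other two.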

Thinking about how to do autonomous cars makes my head spin. It is doable with current technology, but it isn't an easy task to tackle.
 
But pranksters can also cause havoc by putting a fake stop sign in the middle of a block in the middle of the night. And how will autonomous cars react to optical illusion art like that fake archway? There are some street artists who make quite realistic looking sidewalk chalk art. The optical system of the car sees the world very differently than we do, but there are some very technically savvy artists out there who could figure out what would mess with autonomous sensors and create some kind of art that wouldn't fool a human, but would tie an autonomous system into knots.

A slight counterpoint to your concern... It's not an inexpensive proposition to make a quality fake of a stop sign. A tree branch with a paper octagon isn't going to cut it, I don't think. A technically savvy artist (to borrow your description) has to put in a ton of work to make a convincing fake: https://gizmodo.com/how-one-fed-up-dude-fixed-an-awful-highway-sign-himself-1686373438

I'm not sure it is going to be any more of a problem than the age old prank of dragging a bunch of trash cans into the street as a barricade.
 
Where is @verygreen, let’s call him, we need help!
 
@verygreen or anyone else with some mad code knowledge. I would be very grateful if you can comment on two observations:

1) Is there any reason to suggest why .42 is having more trouble with hills and crests than previous versions?

2) Also, in areas with no cell signal (and I mean no cell phone coverage at all), I noticed a lot more difficulty with regular lane keeping where other versions seemed to have no problem - as if the maps data there is not detailed, or as if there is no GPS data from previous Teslas on some of these roads whatsoever. These roads are very remote but have excellent lane markings.

I put 1000-odd miles on 42 over the last 5 days and spent most of the time looking for differences between it and .40 (and 38, which I had just before 40). I drove cities, urban freeway, lots of rural limited access highway. And I did a lap through Sonoma/Napa wine country which has a pretty good mix of stuff, including twisty mountain roads. My general sense is that it's an incremental improvement over 40, but that the really ugly failures haven't improved at all.

Looking at the NN they added 3 channels of full frame output that are collectively labeled lane1.5. From the name I'm guessing it's an additional lane mapping feature. It could be running in shadow. If they are making lane decisions based on some new vision interpretation feature then it could lead to new corner case problems with lanes.

I've noticed the hill cresting lane issues before (in 42, 40, and 38), but I also know them from AP1.
 

So it is possible our experiences - good and bad - are anecdotal only, and have nothing to do with local learning or whitelisting or mapping? As you know, my roadtrip earlier this month on .36 was a surprisingly good one. It really was. I chalked it up to it being on a road with a Supercharger - the logic being that a lot of Teslas drive there - but frankly I am not sure that had anything to do with it. Maybe the conditions that day were just better for some reason (I was driving more in the dark, which may have helped it see the white, lit lane markings), or maybe it was all the NN doing its job well that day. There is another stretch of motorway I drive more often that has been problematic on .36, e.g. zig-zagging and ghost-braking (I did one stint on it on .40 as reported and that was fine, but then .36 was fine on some days too...).

We also have many very positive reports e.g. from California on TMC and then others who live outside of it having more bad reports. That could suggest some form of learning (lots of Teslas in California), but @verygreen do we actually have any evidence of HD mapping or whitelisting or any kind of learning going on in the AP2 code? Or are we just imagining things to explain differences in our experiences, which clearly do differ from time to time, from road to road?


I always assumed that our experiences are only just anecdotal, and that there is no local learning or whitelisting or mapping.

I have seen no evidence of local learning / mapping etc.

Except for this recent schema verygreen has mentioned.
 
I remember reading about local map tiles being downloaded a long time ago (AP1?). Are those nonexistent? I should also note that Tesla only showed an example of LA having them, so other countries and areas might not have them.
 


Two distinctions worth keeping in mind here: basic 2D mapping vs. HD 3D mapping, and camera-based localization vs. GPS localization.


1) Unmarked as in no lane lines at all?? That's impressive.

Anyway, that could be the result of basic 2D maps with GPS, or just a result of the control software getting better.

2) The result of better software in each update. I see no evidence that your car's video made this better.

3) I am not sure what you are saying here.

I still see no evidence of HD mapping, or of anything along the lines of local learning - as in "your Tesla will get better over time on the roads you drive it on."

I have no doubt that AP1 and AP2 are getting better. With each new firmware I imagine the perception/sensing and the control software are getting better. And Tesla has said they are leveraging GPS to build maps that enhance AP performance.
 

There is absolutely no, zero, zilch local learning going on. It's just not happening, folks. ALL learning happens at the mothership, and the driving today is completely algorithm-based - the NN is not driving; it is simply doing object recognition at this time. The better performance with AP is due to better driving algorithms and better object recognition by the NN.
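If that split is right - NN for object recognition, conventional algorithms for the driving - the architecture would look roughly like this sketch. Every name here is hypothetical; the perception function is a stub standing in for the neural network.

```python
from dataclasses import dataclass

@dataclass
class LaneEstimate:
    center_offset_m: float  # lateral offset of the car from lane center, meters
    curvature: float        # estimated lane curvature

def perceive(camera_frame) -> LaneEstimate:
    """Stand-in for the NN: learned lane/object recognition in the real
    system, a fixed estimate here."""
    return LaneEstimate(center_offset_m=0.3, curvature=0.0)

def steering_command(est: LaneEstimate, gain: float = 0.5) -> float:
    """Hand-written control law (not learned): proportional steering
    back toward the lane center."""
    return -gain * est.center_offset_m

est = perceive(camera_frame=None)
print(steering_command(est))  # -0.15
```

Under this split, firmware updates can improve driving either through a better NN (sharper perception) or through better hand-written control logic, with no on-car learning involved.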
 
Any thoughts as to why we don't get vehicle type identification on the screen yet (i.e. trucks, motorcycles, etc.)? My theory is that there was some patented Mobileye IP that Tesla can't reproduce, but I haven't seen it spoken about elsewhere, other than that it "still isn't there".
 

Or maybe it just isn't a priority for them: as long as it detects a vehicle and displays it as a car, that's good enough, and they can work on other things that have a higher priority. (Like speed sign recognition?)
 

Agreed.


I agree it is not a priority for them.

They probably trained their network with just one vehicle class, rather than multiple classes (motorcycle, truck, car).

I doubt they will have any need to work on this anytime soon. Or ever need to do it.


However, I do believe they will probably do it someday just to appease complaining customers.
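The single-class training idea above can be sketched as a label-mapping choice made when the training set is built. The label names and mappings below are made up for illustration.

```python
# Hypothetical label maps: fine-grained subtypes vs. one collapsed class.
FINE_GRAINED = {"car": 0, "truck": 1, "motorcycle": 2}
SINGLE_CLASS = {"car": 0, "truck": 0, "motorcycle": 0}  # everything is "vehicle"

def relabel(annotations, mapping):
    """Map dataset label strings to training class ids."""
    return [mapping[a] for a in annotations]

sample = ["car", "truck", "motorcycle"]
print(relabel(sample, FINE_GRAINED))  # [0, 1, 2]
print(relabel(sample, SINGLE_CLASS))  # [0, 0, 0]
```

A detector trained on the collapsed map can only ever report "vehicle", which would explain a generic car icon on screen regardless of what is actually ahead.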