
HW2.5 capabilities

[Attached image: trompe-l'oeil painted tunnel]
Corner case
 
I'd drive through that. Looks like a fun tunnel.

As a corollary, does AP1 get fooled by fake signs? Say I painted a 35 mph speed limit sign somewhere. Would HW2 be easily fooled by a painted stop sign? How do you get a NN to separate realistic-looking from real?

AP1 reads speed limits off the backs of trucks. It's very annoying.
 

Attachments

  • three-speed-limit-signs-picture-km-back-truck-39878201.jpg (speed limit signs on the back of a truck)
Would HW2 be easily fooled by a painted stop sign? How do you get a NN to separate realistic-looking from real?

There’s a recent paper on this: Standard detectors aren't (currently) fooled by physical adversarial stop signs

Abstract:

An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. If adversarial examples existed that could fool a detector, they could be used to (for example) wreak havoc on roads populated with smart vehicles. Recently, we described our difficulties creating physical adversarial stop signs that fool a detector. More recently, Evtimov et al. produced a physical adversarial stop sign that fools a proxy model of a detector. In this paper, we show that these physical adversarial stop signs do not fool two standard detectors (YOLO and Faster RCNN) in standard configuration. Evtimov et al.'s construction relies on a crop of the image to the stop sign; this crop is then resized and presented to a classifier. We argue that the cropping and resizing procedure largely eliminates the effects of rescaling and of view angle. Whether an adversarial attack is robust under rescaling and change of view direction remains moot. We argue that attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - likely makes it difficult to make adversarial patterns. Finally, an adversarial pattern on a physical object that could fool a detector would have to be adversarial in the face of a wide family of parametric distortions (scale; view angle; box shift inside the detector; illumination; and so on). Such a pattern would be of great theoretical and practical interest. There is currently no evidence that such patterns exist.
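For anyone unfamiliar with the jargon, here is a minimal sketch of what a digital adversarial attack on a classifier looks like, using the standard fast gradient sign method (FGSM). This is a textbook technique, not the construction from the paper, and the model, input and label below are placeholder assumptions just to keep the sketch self-contained:

```python
# FGSM sketch: nudge each pixel in the direction that increases the classifier's
# loss on the true label. Placeholder model/input; not from the paper or Autopilot.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a perturbed copy of `image` that raises the loss on `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = models.resnet18().eval()      # untrained here; use pretrained weights in practice
    clean = torch.rand(1, 3, 224, 224)    # stand-in for a photo of a sign
    label = torch.tensor([919])           # e.g. ImageNet's "street sign" class
    adv = fgsm_attack(model, clean, label)
    print(model(clean).argmax().item(), model(adv).argmax().item())
```

The paper's argument is that fooling a detector is a much taller order: the pattern would have to stay adversarial across rescaling, view angle, illumination and the detector's own bounding-box search, and there is currently no evidence such patterns exist.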

Here are two videos posted on Facebook by one of the authors, although I honestly can't tell what they're supposed to show. :p
 
@verygreen, or anyone else with some mad code knowledge: I would be very grateful if you could comment on two observations:

1) Is there any reason why .42 would have more trouble with hills and crests than previous versions?

2) Also, in areas with no cell signal (and I mean no cell phone coverage at all), I noticed a lot more difficulty with regular lane keeping where previous versions seemed to have no problem, as if the map data is not detailed for some of these roads, or there is no GPS data from previous Teslas on them whatsoever. These roads are very remote but have excellent lane markings.
 
You would occasionally see poor lane keeping with AP1 on roads with no cell service. The car was supposedly downloading a form of HD map, when it could, that helped with lane keeping. That said, I don't know if it ever actually did so... I haven't been anywhere with bad cellular in my AP2 car yet.

I removed the SIM card from a Tesla with AP 2.0 (effectively breaking the LTE connection), and this did not seem to have any effect on Autopilot performance as far as I could tell. Of course, it's hard to say for sure.
 
2) Also, in areas with no cell signal (and I mean no cell phone coverage at all), I noticed a lot more difficulty with regular lane keeping where previous versions seemed to have no problem, as if the map data is not detailed for some of these roads, or there is no GPS data from previous Teslas on them whatsoever. These roads are very remote but have excellent lane markings.
You would occasionally see poor lane keeping with AP1 on roads with no cell service. The car was supposedly downloading a form of HD map, when it could, that helped with lane keeping. That said, I don't know if it ever actually did so... I haven't been anywhere with bad cellular in my AP2 car yet.

This really is a good question. @verygreen recently pointed to a database where, if I understood him correctly, he thought no actual whitelisting of ghost-braking targets was going on, even though a database to that effect did exist? This despite the concept already being introduced in the Tesla blog in the summer of 2016, after the Brown incident... Same with HD mapping for lane keeping, which Tesla has discussed many times. The current consensus, however, seems to be that there may not be any of that going on yet... right? Are these just Tesla's medium-term goals, as someone put it, and not really things the code is doing at this time?

So it is possible our experiences, good and bad, are anecdotal only and have nothing to do with local learning or whitelisting or mapping? As you know, my road trip earlier this month on .36 was a surprisingly good one. It really was. I chalked it up to it being a road with a Supercharger, the logic being that a lot of Teslas drive there, but frankly I am not sure about that at all. I wonder if it had anything to do with it. Maybe the conditions that day were just better for some reason (I was driving more in the dark, which may have helped it see the white, lit lane markings). Or maybe it was simply the NN doing its job well that day. There is another stretch of motorway I drive more often that has been problematic on .36, e.g. zig-zagging and ghost braking (one stint on it on .40 was fine, as reported, but then .36 was fine on some days too...).

We also have many very positive reports on TMC, e.g. from California, and then more negative reports from others who live outside it. That could suggest some form of learning (lots of Teslas in California), but @verygreen, do we actually have any evidence of HD mapping or whitelisting or any kind of learning going on in the AP2 code? Or are we just imagining things to explain differences in our experiences, which clearly do differ from time to time and from road to road?
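To make the whitelisting idea concrete, here is a purely hypothetical sketch of what a geofenced lookup against such a database could look like. Every name, number and data layout below is my own invention for illustration; as discussed above, we have no evidence the AP2 code actually does anything like this today:

```python
# Toy geofenced "radar whitelist": suppress braking for a stationary radar
# return only if it falls inside a zone previously confirmed harmless.
# Entirely illustrative; not taken from any Tesla firmware.
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical zones where an overhead bridge or gantry sign produces a
# known-harmless stationary radar return (made-up coordinates).
WHITELIST = [
    {"lat": 37.4220, "lon": -122.0841, "radius_m": 60.0},
]

def is_whitelisted(lat, lon):
    return any(haversine_m(lat, lon, z["lat"], z["lon"]) <= z["radius_m"] for z in WHITELIST)

def should_brake_for(lat, lon, target_is_stationary):
    # Moving targets are always acted on in this toy model; only stationary
    # returns inside a whitelisted zone are ignored.
    return not (target_is_stationary and is_whitelisted(lat, lon))
```

Whether anything like this runs in the car today, or is only a medium-term goal, is exactly the open question above.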
 
AP1 reads speed limits off the backs of trucks. It's very annoying.

Wow! This has never happened to me. I'm so happy with my AP1. In Norway it reads traffic signs very well, both regular signs and temporary ones (construction etc., which are a different colour).

I took a trip to Switzerland this summer and drove through Sweden, Denmark, Germany and Switzerland. No problem with signs, and in Germany it could even adjust the speed to the time restriction on the signs (a different speed at different times of the day).
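As an aside, the time-restricted limits are simple to apply once the sign and its hours are read correctly. Here is a toy sketch of the logic; the limits and hours below are made-up examples, not anything pulled from Tesla's actual sign handling:

```python
# Apply a time-restricted speed limit, e.g. a lower limit only from 22:00 to 06:00.
from datetime import time

def effective_limit(base_kph, restricted_kph, start, end, now):
    """Return the limit in force at `now`; handles windows that cross midnight."""
    in_window = (start <= now < end) if start <= end else (now >= start or now < end)
    return restricted_kph if in_window else base_kph

# Example: 120 km/h normally, 80 km/h between 22:00 and 06:00.
print(effective_limit(120, 80, time(22, 0), time(6, 0), time(23, 30)))  # 80
print(effective_limit(120, 80, time(22, 0), time(6, 0), time(14, 0)))   # 120
```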
 
Wow! This has never happened to me. I'm so happy with my AP1. In Norway it reads traffic signs very well, both regular signs and temporary ones (construction etc., which are a different colour).

I took a trip to Switzerland this summer and drove through Sweden, Denmark, Germany and Switzerland. No problem with signs, and in Germany it could even adjust the speed to the time restriction on the signs (a different speed at different times of the day).

I have had AP2 since May this year. Until May, my AP1 read speed limits from almost anywhere: the backs of trucks, parallel streets, car park entrances, etc. It may have gotten better after May, but AP2 solves this problem in its own way: it doesn't read speed limit signs at all. :)
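For what it's worth, the truck-sticker failure mode looks filterable in principle: if a speed limit detection sits inside a detected vehicle's bounding box, ignore it. The sketch below is entirely my own toy logic, not a description of how AP1 or AP2 actually handle this:

```python
# Reject speed-limit detections whose box lies mostly inside a vehicle's box.
# Illustrative only; names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

def intersection_area(a: Box, b: Box) -> float:
    return Box(max(a.x1, b.x1), max(a.y1, b.y1), min(a.x2, b.x2), min(a.y2, b.y2)).area()

def sign_is_on_vehicle(sign: Box, vehicles: list[Box], threshold: float = 0.8) -> bool:
    """True if most of the sign detection overlaps some vehicle detection."""
    if sign.area() == 0.0:
        return False
    return any(intersection_area(sign, v) / sign.area() >= threshold for v in vehicles)

# A "60" sticker inside a truck's box gets ignored; a roadside sign does not.
truck = Box(100, 50, 400, 300)
sticker = Box(220, 120, 260, 160)
roadside = Box(600, 80, 650, 140)
print(sign_is_on_vehicle(sticker, [truck]))   # True  -> ignore this limit
print(sign_is_on_vehicle(roadside, [truck]))  # False -> accept this limit
```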