Yeah, but it's snapshotting randomly with all cams, even when parked, as I understand it.
Yes, but at this time it's not used for the AP2 implementation, just for learning and training data. I think.
Yeah, but it's snapshotting randomly with all cams, even when parked, as I understand it.
But does AP1 even need the street sign images? I don't think it does anything with them or ever displays them (pedestrian detection is the only thing I've seen it display, in some test videos by Kman). It seems like something they thought they'd do but jettisoned when they couldn't.
I think the real question is why the stop sign image was showing in the shadow-mode release of AP2 but hasn't appeared since. Was that Nvidia code that they somehow failed to hide through commenting? It is just intriguing that the stop sign image was so clearly part of that initial AP2 build, but it's been 7 months and nothing since. Throwing up a stop sign in the IC, if TeslaVision can do that, would be a great safety add-on for everyone regardless of buying EAP/FSD. FSD, obviously, would act on that information, but they show the speed limits for everyone - why not other signs?
Personally, I'm very curious to see what shows up tomorrow at the Model 3 unveil. There's been so much speculation about the 'second part' of the unveil.
6) Tesla's timeline for FSD feature advertising has been weird - I can't imagine they'd write things on their website that they have no intention of delivering, and relatively soon. I know this might seem strange, given their track record, but what's their incentive? They already have the best EV on the market, so they don't need to make wild claims to sell cars. They must have this technology working somewhere in order to be confident enough to market it upfront.
Additionally, once maps become part of the equation, everything changes and our AP2 cars suddenly become very smart. I can't believe the map generation is done yet, but I can believe that the most complete maps probably exist for the US. We know that human-readable Mapbox maps exist, but only for the US, and that the machine-readable drivable-rails maps exist, but only for Fremont at this time.
So, I think we'll find out a lot about FSD tomorrow.
The only thing we will see is another elaborate AP demo in order to drive sales.
I expect some kind of wow magic from Elon. It might be hype, but the guy is a master hypester. I hope it's AP2-related, because at least I'd benefit from that eventually.
I'd like a little more concrete info and a little less hype, IMHO. But I suspect the WOW feature of the Model 3 will be the range of the car. Hopefully I'm wrong; I really hope I'm wrong.
I know this might seem strange, given their track record, but what's their incentive? They already have the best EV on the market, so they don't need to make wild claims to sell cars. They must have this technology working somewhere in order to be confident enough to market it upfront.
Beyond that, one other thing: HD maps. AP2 needs HD maps to raise the confidence of stop-sign handling significantly. There are a number of stop signs in my neighborhood that are hidden behind trees/bushes until the very last moment (too late to slow down and stop if you're going too fast). Humans know the stop sign exists because they intuitively sense the intersection and can vaguely make out the distorted "STOP" painted on the road. Vision could theoretically pick this up from the road markings too, but it's not reliable.
Therefore, AP2 needs HD maps so it knows that there is an intersection coming up that has a stop sign, whether you see it or not. This just adds to the safety. @verygreen has noted that each recent release (including the most recent 17.28) has been adding more and more "map downloading" code to the APE. I believe this is the framework for HD map downloading.
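To illustrate the idea (a minimal sketch - all names, numbers, and thresholds here are mine, not anything from Tesla's code), a map prior backing up vision for an occluded stop sign could look roughly like this:

```python
# Minimal sketch: trust vision when it fires; otherwise fall back to an
# HD-map prior that says a stop sign is coming up. Hypothetical structures.
from dataclasses import dataclass

@dataclass
class MapFeature:
    kind: str          # e.g. "stop_sign"
    distance_m: float  # distance ahead along the current route

def should_prepare_to_stop(map_features, vision_sees_stop_sign, speed_mps,
                           max_decel_mps2=3.0, margin_m=10.0):
    if vision_sees_stop_sign:
        return True
    # Comfortable stopping distance: v^2 / (2a), plus a safety margin.
    stopping_dist_m = speed_mps ** 2 / (2 * max_decel_mps2) + margin_m
    return any(f.kind == "stop_sign" and f.distance_m <= stopping_dist_m
               for f in map_features)

# A stop sign hidden behind bushes 50 m ahead, car doing 16 m/s (~36 mph):
features = [MapFeature("stop_sign", 50.0)]
print(should_prepare_to_stop(features, False, 16.0))  # -> True
```

The point is simply that the map lets the car start slowing before the cameras could possibly confirm the sign.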
From what I see, once HW2.5 references appeared, references to a dual-node (not just GPU) setup appeared at the same time, which does not sound like pure coincidence to me.
sensors are external to APE, but the code that inits them is mostly the same (but not exactly).
Model 3 obviously has a differently connected backup camera, but even on non-Model 3 HW2.5 the backup camera is connected differently.
Good thoughts. One thing on the data-size problem of HD maps: Mobileye claims to have solved it - there are several publicly available talks given by their CEO in the last year describing their approach to low-data, crowd-sourced HD maps. The basic approach is to use visual camera confirmation of known physical objects at the side of the road to lock on a position. A billboard is one example - all the cars know that a billboard is located at some particular place. You need only that one object's location and description in your HD map for it to be effective. The computer is "looking for" that billboard, and when the cameras spot it, they compute your location from the frame-by-frame changing size and angle of the billboard in the camera's view. This is super low-bandwidth and solves the problem (at least when it isn't super foggy out).
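As a toy illustration of that idea (my own sketch, not Mobileye's code; the focal length and billboard numbers are made up), a single known landmark plus the pinhole camera model is enough for a range-and-bearing position fix:

```python
# Toy landmark localization: recover range from apparent size via the
# pinhole model (distance = f_px * W_real / w_px), then place the car
# relative to the landmark's known map position.
import math

FOCAL_PX = 1200.0  # assumed camera focal length in pixels (made up)

def distance_from_landmark(real_width_m, pixel_width):
    return FOCAL_PX * real_width_m / pixel_width

def car_position(landmark_xy, real_width_m, pixel_width, bearing_rad):
    # bearing_rad: direction from car to landmark in the map frame
    # (derivable from the landmark's horizontal pixel offset plus heading).
    d = distance_from_landmark(real_width_m, pixel_width)
    lx, ly = landmark_xy
    return (lx - d * math.cos(bearing_rad), ly - d * math.sin(bearing_rad))

# A 6 m wide billboard at map position (500, 120), appearing 90 px wide,
# seen at a 30-degree bearing: the car is ~80 m away.
print(car_position((500.0, 120.0), 6.0, 90.0, math.radians(30)))
```

Successive frames tighten the fix as the apparent size and angle change, which is why one sparse landmark per stretch of road can be enough.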
I'd like to offer my (completely theoretical) disagreement with the necessity of HD maps.

First off, AP would have to rely on a live, OTA process loading map tiles ahead, since downloading an HD map of the entire US, let alone the world, to current in-car data drives *seems* impossible. In the OTA scenario, however... well, even where I live in NJ, I wouldn't trust AT&T's service with my life while hurtling down the highway at 80 mph.
More importantly, I think there's a more feasible and data-efficient alternative. As everyone knows, we drive our AP2 Teslas in shadow mode. This means they're recording large amounts of data during events where AP would have done something significantly different from what the driver actually did. It seems fairly simple to process such an event through the NN and then tag the location with an updated response for AP to follow.
To use your example: if a Tesla driver stops before the stop sign comes into view, and shadow-mode AP predicted it wouldn't have stopped, the driving behavior and AP sensor data would be recorded and sent back for processing. Of course, AP would eventually still recognize the sign when close enough, and that data might be sent as part of the event too. Eventually, I'd assume, Tesla will have upgraded a simple "google"-style map of the US (or world) to include data telling AP how to behave at specific, previously tagged locations. In all other situations, shadow mode would have repeatedly verified that AP reacts correctly, so no additional data from a beefed-up map is necessary.
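In code, the mechanism I'm imagining might look roughly like this (entirely hypothetical - the event fields, report threshold, and grid size are my guesses, not anything from Tesla):

```python
# Sketch: log a "disagreement event" whenever shadow-mode AP's predicted
# action differs from what the driver actually did, keyed by location, and
# promote a location to a map override once several drivers agree.
from dataclasses import dataclass, field

@dataclass
class DisagreementEvent:
    lat: float
    lon: float
    ap_predicted: str    # e.g. "continue"
    driver_did: str      # e.g. "stop"
    sensor_clip_id: str  # reference to the uploaded camera/sensor snippet

@dataclass
class BehaviorMap:
    overrides: dict = field(default_factory=dict)

    def record(self, ev: DisagreementEvent, min_reports=3):
        key = (round(ev.lat, 4), round(ev.lon, 4))  # ~11 m grid cell
        reports = self.overrides.setdefault(key, [])
        reports.append(ev.driver_did)
        # Only promote to a map override once several drivers agree.
        if reports.count(ev.driver_did) >= min_reports:
            return key, ev.driver_did
        return None

m = BehaviorMap()
for clip in ("a1", "a2", "a3"):
    hit = m.record(DisagreementEvent(40.7128, -74.0060, "continue", "stop", clip))
print(hit)  # -> ((40.7128, -74.006), 'stop') after the third matching report
```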
Sorry for spilling my brain into a blog post, and don’t take my ramblings to heart.
Bosch and TomTom have come together to create high-resolution road maps based on radar signals. The product of the two companies' collaboration, a system called "radar road signature," is a move towards automated driving.
This is probably what Tesla will do/are doing.
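For what it's worth, the core localization trick behind a radar road signature can be sketched in a few lines - slide the strip of radar returns the car sees now along a stored roadside signature and pick the offset with the smallest mismatch (toy data, obviously; the real Bosch/TomTom system is far more sophisticated):

```python
# Toy radar-signature matching: find where the live strip of returns
# best fits into the stored signature (sum of squared differences).
def best_offset(stored_signature, live_strip):
    best_i, best_err = 0, float("inf")
    for i in range(len(stored_signature) - len(live_strip) + 1):
        err = sum((stored_signature[i + j] - live_strip[j]) ** 2
                  for j in range(len(live_strip)))
        if err < best_err:
            best_i, best_err = i, err
    return best_i

# Stored radar reflectivity along 10 road segments vs. what the car sees now:
stored = [0.1, 0.9, 0.2, 0.1, 0.8, 0.8, 0.1, 0.3, 0.9, 0.1]
live = [0.8, 0.8, 0.1]
print(best_offset(stored, live))  # -> 4: the car is at segment 4 of the map
```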
No, because that requires using lidar for ground truth.
No, it doesn't.