I actually think you're wrong here neroden.
Well, you can *think* that, but in fact I'm right and you're wrong.
The way most autonomous driving systems work is via machine learning: the system learns what the desired output (vehicle control actions) is for a given set of inputs (radar, cameras, etc.) over time, with guidance from humans teaching it. A big part of what the cars are doing when they're in shadow mode, or otherwise being piloted by their human, is recording how the human driver reacted to that set of inputs.
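For concreteness, the setup described above is essentially "behavioral cloning": fit a model so that its output on recorded sensor inputs matches the recorded human action. Here's a minimal sketch of that idea, assuming a toy linear policy and made-up feature names -- this is an illustration of the training scheme, not Tesla's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row is a snapshot of sensor inputs, e.g. [lane_offset_m, curvature, speed_mps].
# These features and the data are invented for illustration.
X = rng.normal(size=(500, 3))

# The "label" is what the human driver actually did: a steering command.
# Here we pretend the human policy is roughly linear in the features, plus noise.
true_policy = np.array([-0.8, 2.0, 0.05])
y = X @ true_policy + rng.normal(scale=0.01, size=500)

# Fit by least squares: learn to imitate the recorded human behavior.
learned_policy, *_ = np.linalg.lstsq(X, y, rcond=None)

def steer(snapshot):
    """Map a new sensor snapshot to a steering command, imitating the humans."""
    return float(snapshot @ learned_policy)
```

The important property, for the argument that follows, is that the model can only be as good as whatever the recorded humans did with those inputs.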
It's a statistical correlation system, yes. Based on common data patterns. It does badly if you haven't fed it the right data on the weird situations.
It will NEVER have the level of context-sensitivity that a human acquires from years of experience. It will always be an idiot savant.
I can spot signs that something is wrong up ahead which are based on my *general knowledge*, not my driving knowledge. The autopilot will never *have* that general knowledge, because it will never acquire that data. We are a very long way from true AI.
Since the autopilot system has superhuman sensory perception of the world around it - radar can see through snow better than you can, you don't have a GPS in your head accurate to a metre or so, and you don't have eyes in the back and sides of your head - it necessarily follows that it is basing its decisions on a more accurate picture of the world around it.
No, actually, it doesn't. It's missing ludicrous amounts of context which humans get from "general knowledge".
To use your example - nobody has to program it, or teach it, to drive in those conditions by using mailboxes as a visual cue for where the road is. It will simply learn to do that as it watches what humans do in those situations and sees mailboxes along the side of the path it's following in a field of white.
This could work if it were being trained on the right data. It *could*.
Unfortunately -- and here's the killer point -- the majority of humans are bad drivers and will simply go off the road in these conditions. (And in other conditions, humans won't follow the mailboxes.) The autopilot is being trained by looking at the behavior of typical drivers, which means BAD drivers. Because it has better sensors it will probably do somewhat better than bad drivers.
You can already see this in the rather stupid lane-finding schemes: they've got one based on painted lane lines, which fails when the lines aren't there. And they've got one based on where people actually drive -- but if the majority of people are weaving out of their lane (*which I expect they are*), then it's just going to copy the bad drivers!
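The "copies the bad drivers" point is easy to see with a toy calculation. Suppose a lane-keeping scheme learns its target lateral position from where recorded drivers actually sit in the lane, and most of those drivers drift to one side. The numbers here are invented; the point is that the learned target inherits the majority's bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lateral position relative to lane center (metres); positive = drifting right.
# Assume 90% of recorded drivers drift right on average, 10% drive well-centered.
bad = rng.normal(loc=0.4, scale=0.3, size=900)    # typical (biased) drivers
good = rng.normal(loc=0.0, scale=0.05, size=100)  # careful drivers
observed = np.concatenate([bad, good])

# A naive "drive where people drive" scheme learns the average observed position.
learned_target = observed.mean()

# learned_target lands near the bad drivers' bias, not at the lane center.
print(round(learned_target, 2))
```

A learned target around 0.36 m off-center is the statistical system faithfully reproducing what the majority of its training data did -- which is exactly the problem when the majority are bad drivers.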
I already said it would do better than bad drivers, and that bad drivers are typical. Will it be a truly good driver? Not if you train it this way.