1) We don't have cameras with the same quality as the human eye, definitely not the ones on Tesla cars. 2) We don't have an equivalent to the human brain.
1) Irrelevant. You don't need HD resolution for purposes of driving.
2) The AI that will power the self-driving cars of the future is still being developed (quite rapidly, I might add). At some point in its development it will be better than the human brain for driving purposes, because driving is a relatively simple task and computers pay attention 100% of the time, unlike humans, who have "issues" staying consistent.
This is incorrect. Cameras are the ones with serious limitations in various lighting conditions (limited and poor visibility). Lidar can see in pitch darkness while cameras struggle in the dark, and lidar copes with bright lighting conditions where cameras struggle severely.
I didn't say "low light conditions", I said "poor visibility", referring to snow, mist, rain and fog. The laser needs to make a two-way trip from the car to a distant object and back to the sensor for it to work. Cameras have a much easier time because the light only needs to travel from the object to the car, a one-way trip.
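To make the round-trip point concrete, here's a minimal sketch using the Beer-Lambert attenuation law, exp(-alpha * distance). The extinction coefficient and distance are assumed values picked for illustration, not measurements of any real sensor or fog.

```python
import math

# Assumed numbers, for illustration only.
alpha = 0.02      # extinction coefficient per metre (moderate fog, assumed)
distance = 100.0  # metres from car to object

# Beer-Lambert law: intensity decays as exp(-alpha * path length).
camera_signal = math.exp(-alpha * distance)      # one-way trip to the car
lidar_signal = math.exp(-alpha * 2 * distance)   # pulse goes out AND comes back

print(f"camera retains {camera_signal:.1%} of the light")
print(f"lidar retains {lidar_signal:.1%} of its pulse")
```

Because the lidar pulse traverses the fog twice, its surviving fraction is the square of the camera's one-way fraction, so doubling the path hurts it disproportionately.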
Here's a Tesla on AP hitting a deer at night.
Here's a Tesla on AP failing to see a pedestrian wearing a black coat in the middle of the road at night (the driver had to take over).
You are using examples of current technology to prove what future technology can't do? That makes no sense! Obviously, full self-driving isn't ready yet! Duh!
This is wrong AGAIN. Cameras struggle just as much as lidar in heavy rain and snow.
You don't notice that because you, as a human, can look at a picture taken in heavy rain/snow and still understand it and make out its content. But a computer sees three numbers from 0-255 for every pixel.
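Here's a tiny sketch of what "three numbers from 0-255 per pixel" means in practice, using numpy; the 2x2 image and its pixel values are made up for illustration.

```python
import numpy as np

# A 2x2 RGB image: just an array of integers, three channels per pixel, 0-255.
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # red pixel, green pixel
    [[  0,   0, 255], [128, 128, 128]],   # blue pixel, grey pixel
], dtype=np.uint8)

print(image.shape)  # (2, 2, 3): height, width, RGB channels
print(image[1, 1])  # the grey pixel: [128 128 128]
```

A real camera frame is the same thing at millions of pixels; the raw numbers carry no meaning until something interprets them.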
This may surprise you, but the human eye doesn't "see" in any more real a sense than the sensor on a camera does. The rods and cones transmit electrical pulses that are a lot vaguer and less precise than the zeros and ones a camera sensor transmits. The magic is in the AI processing in machine vision, and in the brain in human vision. Even an insect with a tiny brain can see well enough to navigate among branches and avoid predators.

It may be difficult for you to understand how AI can make sense of the camera sensor info if you have a "binary" thought process or lack imagination. AI is a breakthrough technology, and the mechanisms by which machine vision works are little understood, or even imagined, by many who are unfamiliar with how it learns and "remembers". It is only a viable technology due to vast amounts of cheap processing power. It doesn't involve "if, then" statements; it really is machine vision in the truest sense of the word.

How else do you think a dragonfly, with its minimal processing power, can function in a varied and complex world? It doesn't have anything remotely approaching HD-quality vision. In fact, its vision has less resolution than an old-school TV!
The fact that you don't understand how these key processes work in humans and in machine vision is not a good reason why self-driving needs LIDAR to be complete.
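A toy sketch of the "no if-then statements" point: a single perceptron (about the simplest learnable model, standing in very loosely for real machine-vision networks) picks up a brightness rule purely from labelled examples. The task, data, learning rate and epoch count are all invented for illustration; no rule about brightness appears anywhere in the code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: label a 4-pixel patch "bright" if its mean intensity > 0.5.
# The labels come from examples; the model never sees this rule as code.
X = rng.integers(0, 256, size=(200, 4)) / 255.0   # 200 random 4-pixel patches
y = (X.mean(axis=1) > 0.5).astype(float)          # example labels

# Plain perceptron: start from zero weights, learn from mistakes.
w = np.zeros(4)
b = 0.0
for _ in range(500):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

acc = np.mean((X @ w + b > 0) == y)
print(f"training accuracy: {acc:.0%}")
```

The learned weights end up encoding "add up the pixels", but nobody wrote that down; scale the same idea up by many layers and vastly more parameters and you get modern machine vision.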
With a camera-only or lidar-only system, you have a perception system that can cause an accident every 10k miles.
But combining the two systems, which have contrasting pros/cons, improves your perception system to a reliable one failure per 10^8 miles (100 million miles), making you about 200x better than a human driver.
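The arithmetic behind that claim can be sketched in a few lines. The key (and, as the reply below argues, contested) assumptions are that each sensor alone fails once per 10k miles, that the two fail independently, and that humans average roughly one accident per 500k miles.

```python
# Assumed per-sensor failure rates (one perception failure per 10k miles).
p_camera = 1 / 10_000
p_lidar = 1 / 10_000

# Independence assumption: both must fail on the same mile.
p_both = p_camera * p_lidar
miles_between_failures = 1 / p_both   # ~10^8 miles, the figure above

# Assumed human benchmark: one accident per 500k miles.
human_miles = 500_000
improvement = miles_between_failures / human_miles   # ~200x

print(f"combined system: one failure per {miles_between_failures:,.0f} miles")
print(f"~{improvement:.0f}x better than the assumed human rate")
```

Note that if camera and lidar tend to fail in the same conditions (heavy rain, snow), the independence assumption breaks and the multiplied-out figure is far too optimistic.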
I'm saying your premise here is wrong. Specifically, I don't believe the best safety a camera-based self-driving system can achieve is a crash every 10k miles. You just pulled that out of thin air.

When developed, vision-only self-driving systems will provide much better safety than typical human drivers (which are the best we have right now). The improvement will be so dramatic that insurance rates will go down and, along with that, medical costs will go down.

You have no basis to claim a camera-only system will crash every 10k miles, or that a hybrid camera/LIDAR system will reach a higher safety level before a vision-only system achieves better-than-human safety. Sometimes simpler is better/faster. It's a race to the finish line, and I believe the extra complexity of integrating LIDAR will slow down the efforts using it to the point that vision-only systems will be the clear winner.
Time will tell.