CarlK
Active Member
You may want to investigate how human depth perception works.
Can you explain what a human can do that a camera vision system can't?
You may want to investigate how human depth perception works.
Can you explain what a human can do that a camera vision system can't?
Drive a car.
In our "Facebook" world, I can see someone latching on to a Musk soundbite and then being disappointed when it doesn't turn out to be true.
Can you explain what a human can do that a camera vision system can't?
Side-stepping the question, I just want to make sure you know there is nothing human eyes can do that a camera vision system can't.
Can you explain what a human can do that a camera vision system can't?
Plus, our ability to infer information as we drive (eye contact, seeing that a person is unfocused, etc.) is far above what most AI engines can likely extract from cameras today, so "anything" that would give them an unfair advantage over a human (adding LIDAR, for example) should be leveraged, IMO.

- High framerate (a normal camera needs a LOT of exposure/light to do the same)
- Cleaning itself (Tesla's cameras cannot do this)

Also:
Both have pros and cons, but they are not the same at all.
- Film in a camera is uniformly sensitive to light; the human retina is not. So, in terms of image quality and capturing power, our eyes have greater sensitivity in dark conditions than a typical camera.
- The eye has about 130 million "pixels" (photoreceptors).
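The framerate point above has a simple photon-budget interpretation: at a given scene brightness, the light a sensor gathers per frame scales with exposure time, and exposure time is capped at 1/fps. A minimal sketch with illustrative numbers (not any real camera's specs):

```python
# Illustrative only: relative light gathered per frame at different
# framerates, assuming the shutter stays open for the whole frame
# period (exposure = 1 / fps) and light is proportional to exposure.

def max_exposure_s(fps: float) -> float:
    """Longest possible exposure per frame at a given framerate."""
    return 1.0 / fps

def relative_light(fps: float, baseline_fps: float = 30.0) -> float:
    """Light gathered per frame relative to a baseline framerate."""
    return max_exposure_s(fps) / max_exposure_s(baseline_fps)

for fps in (30, 120, 960):
    print(f"{fps:>4} fps: {max_exposure_s(fps) * 1000:6.2f} ms max exposure, "
          f"{relative_light(fps):.3f}x light vs 30 fps")
```

At 960 fps each frame gets roughly 1/32 of the light a 30 fps frame gets, which is why high-framerate capture demands so much illumination or sensor sensitivity.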
Could you drive a car by watching a monitor fed by a camera?
Corollary: Are racing video games possible?
What about the opposite: can AP function in a simulation?
If someone set a Tesla AP (v8) system up in front of a console gaming rig running Forza Horizon 4 on a giant screen, disabled the radar and hooked the steering/power/braking from AP into the console, would it be able to "drive"?
Part of the NN training is (probably) done in a simulation, so it should do quite well.... right?
This would be a great hack to see!
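The hack above is essentially a closed perceive-act loop: grab a rendered frame, run it through the vision stack, inject the resulting controls back into the game. A rough sketch of that loop, with entirely hypothetical stand-ins (capture_frame, perceive, send_controls are invented names, and the "perception" here is just a dummy that steers toward the brighter half of the image):

```python
# Sketch of driving a game via a camera-vision loop. All functions are
# hypothetical stand-ins; a real rig would grab HDMI frames and inject
# steering/throttle into the console.
import random

def capture_frame():
    """Stand-in for grabbing a rendered frame from the game's video out."""
    return [[random.random() for _ in range(64)] for _ in range(64)]

def perceive(frame):
    """Dummy vision stack: frame -> (steering, throttle).
    Steers toward the brighter half of the image."""
    mid = len(frame[0]) // 2
    left = sum(sum(row[:mid]) for row in frame)
    right = sum(sum(row[mid:]) for row in frame)
    steering = max(-1.0, min(1.0, (right - left) / (left + right)))
    return steering, 0.3  # constant gentle throttle

def send_controls(steering, throttle):
    """Stand-in for injecting inputs into the console/game."""
    pass

for _ in range(10):  # ten iterations of the perceive-act loop
    frame = capture_frame()
    steering, throttle = perceive(frame)
    send_controls(steering, throttle)
```

The interesting question is whether a network trained on real road footage would generalize to a game's renderer at all, which is exactly why the experiment would be fun to watch.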
Could you drive a car by watching a monitor fed by a camera?
Corollary: Are racing video games possible?
To a very limited degree.
Yes, because a) the scene is rendered perfectly, and b) you don't have to worry about yielding, pedestrians, etc.
Define "camera vision system" as you are using it, since the conversation you joined had a specific definition (single camera) and one identified limitation (depth perception).
Could you drive a car by watching a monitor fed by a camera?
My point being: if a person can drive a car based on the feed from a camera (or cameras), then the difference between eyes and cameras is not a limiting factor. Same deal with night-vision goggles, UAVs, and robots.
Read the article below to understand how camera sensor capability compares to human eyes. The article is more than a decade old, and camera systems had already matched or exceeded human visual capability in pretty much every measure; sensor technology is far more advanced today than when it was published. It is safe to say the "sensor" is better than our eyes. A lot of people get hung up on the "eye," but they don't realize it's the "brain" that does most of the job. That is why Tesla has concentrated on neural-net machine learning and improving its AI chip to solve the FSD challenge.
Can you explain what a human can do that a camera vision system can't?
As for how a camera determines distance, there are many ways to do it; you can search for them if you're interested. Modern consumer cameras, for example, can easily determine subject distance with a single sensor, performing autofocus faster than we could.
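One of the many distance-measurement methods mentioned above is stereo triangulation: two cameras a known distance apart see the same point at slightly different pixel positions, and the depth follows from Z = f · B / d (focal length in pixels, baseline, disparity). A minimal sketch with made-up numbers, not any particular car's specs:

```python
# Stereo triangulation: depth from pixel disparity between two cameras.
# Z = f * B / d, with f the focal length in pixels, B the baseline in
# metres, and d the disparity in pixels. Numbers are illustrative only.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance (m) to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity?)")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 12 cm baseline, 4 px disparity
print(depth_from_disparity(1000, 0.12, 4))  # 30.0 metres
```

Note how depth resolution degrades with distance: at long range the disparity shrinks toward zero, so a one-pixel measurement error translates into a large depth error, which is one reason depth from cameras gets harder far away.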