Self-driving cars UK Discussion

Our 'vision' is not defined by the eye; the visual cortex of our brain is what does the magic. The 'raw' image from our retina is upside down, interrupted by blood vessels, and actually has only a tiny FOV. The 'image' we all see right now doesn't exist in any optical sense; it's a construct produced by our brain.
Ah yes, now you are talking my language. The eye and brain are wonderful things :)

It reminds me of a time when I was debating the merits of 4K TVs: what size they need to be, how far away you need to be seated, and all that jazz. Doing some testing on a 50-inch TV from almost 20 ft away, I could quite easily see the benefits. However, when I used a test pattern, which is basically a checkerboard of black and white pixels, all I could see was grey.

I had to get really close to the TV before I could make out the individual pixels. At the same time, I was still sure I could see the difference much further away, but couldn't really prove it. My theory was that my eye could definitely see the difference, but my brain was like, no, you can't. It just decided to throw away the information it was receiving and turn it into something different, which was just a grey screen. Possibly trying to save me from eye strain?

Anyway. It was later one morning, when I was sitting outside having a coffee, contemplating the meaning of life, that I had the eureka moment. The low sun was shining when suddenly something caught my attention out of the corner of my eye. It was the glistening of a single strand of spider web, which had a little bit of moisture on it from the morning dew. Less than the width of a hair, and I must have been 30 ft or so away, and there it was, clear as day. If I tilted my head slightly, the sunlight wasn't lighting it up and it became invisible again. It never changed in size, but depending on which way the sun hit that single strand, it went from being invisible to visible. It was the high contrast that made my brain take notice. Once that high contrast was taken away (by tilting my head a little), my brain decided not to see it again.

That's when I realised that those 4K charts that tell you the screen-size-to-viewing-distance ratio are a pile of rubbish: if you sit at x distance, you need a TV of y size.

The brain does a wonderful job of bringing important things to your attention and throwing away stuff that doesn't really matter. Tesla trying a vision-only FSD approach, based on the concept of having several sets of eyes, is an interesting one, but one I can't get behind. Why try to replicate a human eye (and brain), which has inherent flaws, when you could replicate a hawk's? They can see far further than we can. Plus, how our brain creates a stereo image is something Tesla can't really replicate as effectively.

Well that's a weird post when I read it back to myself, but may as well hit the reply button anyway :)
 
Doing some testing on a 50-inch TV from almost 20 ft away, I could quite easily see the benefits. However, when I used a test pattern, which is basically a checkerboard of black and white pixels, all I could see was grey.

I had to get really close to the TV before I could make out the individual pixels. At the same time, I was still sure I could see the difference much further away, but couldn't really prove it. My theory was that my eye could definitely see the difference, but my brain was like, no, you can't. It just decided to throw away the information it was receiving and turn it into something different, which was just a grey screen. Possibly trying to save me from eye strain?
Right result, wrong conclusion. What was your control source? You cannot discern pixels at that distance. You probably preferred the 4K TV because of HDR, an improved colour gamut, or some other setting.
That's when I realised that those 4K charts that tell you the screen-size-to-viewing-distance ratio are a pile of rubbish: if you sit at x distance, you need a TV of y size.
No, the charts are actually correct :D

The brain does a wonderful job of bringing important things to your attention
It also does a fantastic job of fooling you with optical illusions, magic eye patterns, terrible aural memories etc.
 
Plus, how our brain creates a stereo image is something Tesla can't really replicate as effectively.

The DVLA doesn't care about stereo vision at all :).

The 'need' for image quality is interesting: all of us here can, in literally a split second, understand the dangers and the road layout from this one single still image, which is about the same resolution as what a Tesla camera produces.

Now add in wide-angle and zoom for the front, plus the side and rear cameras, all with a live video feed... the problem is not resolution but how much CPU time it takes to combine and compute on all that data.
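To put a rough number on it, here's a back-of-the-envelope sketch in Python. The camera count, resolution, frame rate and bit depth are the figures commonly quoted for Tesla's HW3 suite, so treat them as assumptions rather than official specs:

```python
# Rough raw throughput of a multi-camera suite. All figures below are
# assumptions (commonly quoted for Tesla's HW3 cameras), not official specs.
CAMERAS = 8
WIDTH, HEIGHT = 1280, 960   # pixels per frame
FPS = 36                    # frames per second
BYTES_PER_PIXEL = 1.5       # ~12-bit raw sensor output

pixels_per_second = CAMERAS * WIDTH * HEIGHT * FPS
megabytes_per_second = pixels_per_second * BYTES_PER_PIXEL / 1e6

# ~354 million pixels (~531 MB) of raw data arriving every second,
# all of which has to be fused and processed in real time.
print(f"{pixels_per_second / 1e6:.0f} MP/s, ~{megabytes_per_second:.0f} MB/s")
```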

At the last 'FSD' day, the suggestion was that the current AP3.0 CPU isn't actually using the full resolution of the current camera suite due to bandwidth issues. In fact, it seems that without DOJO operational, Tesla hasn't even got the CPU capability to 'train' the NNs on the combined video inputs.

The barrier to FSD isn't sensory, it's CPU and code. The visual cortex takes up around 20% of the brain's total surface area, and that's without the 'decision-making' bit taken into account. Tesla/Google/Mobileye are trying to replicate literally millions of years of evolution in a tiny bit of silicon that can be mass-produced at an affordable level.

If we do manage to achieve the AI needed for FSD, we (humans) really do deserve a pat on the back, before the AI takes over without us realising :).

[Image: hazard-test.png]
 
Why try to replicate a human eye (and brain), which has inherent flaws, when you could replicate a hawk's?

Because we are the current apex species on this planet, and even hawks use vision. In fact, pretty much every animal (even most nocturnal ones) has evolved eyes and a visual cortex, totally independently of each other.

Evolution is the harshest test for any system, it would be very arrogant to think we can do a better job :).
 
Right result, wrong conclusion. What was your control source? You cannot discern pixels at that distance. You probably preferred the 4K TV because of HDR, an improved colour gamut, or some other setting.

No, the charts are actually correct :D


It also does a fantastic job of fooling you with optical illusions, magic eye patterns, terrible aural memories etc.
I see your points but respectfully disagree.

Those charts assume 20/20 vision for a start, but look how many people wear glasses. Good corrective lenses can improve your vision beyond that of someone who has 'perfect' 20/20 vision.

However, 20/20 vision isn't actually perfect and really, 20/20 isn't that great. Many people can double that naturally: at the optician, when you read the line of letters at 20/20, many can read the line of smaller letters below it.

Effectively, where those 4K charts might say you need to sit 10 ft from your TV of choice, for some people that's actually 20 ft.

So straight away, before we even get to the point of how our brain sometimes throws away information that the eye picks up, those 4K charts are already in the bin, simply because of the discrepancies between individuals' ability to see.
 
Exactly... 4K doesn't usually come along on its own... HDR is the magic extra.
My test subjects were my own photographs and recognised industry-standard test patterns.

The photos were of the same subject (flowers, for example), using artificial lighting at night time to ensure no change in lighting conditions, with the same ISO, aperture and shutter speed, so that everything was constant except the resolution.

I'm familiar with conducting experiments and trying to remove as many variables as possible.
 
I see your points but respectfully disagree.

Those charts assume 20/20 vision for a start, but look how many people wear glasses. Good corrective lenses can improve your vision beyond that of someone who has 'perfect' 20/20 vision.

However, 20/20 vision isn't actually perfect and really, 20/20 isn't that great. Many people can double that naturally: at the optician, when you read the line of letters at 20/20, many can read the line of smaller letters below it.

Effectively, where those 4K charts might say you need to sit 10 ft from your TV of choice, for some people that's actually 20 ft.

So straight away, before we even get to the point of how our brain sometimes throws away information that the eye picks up, those 4K charts are already in the bin, simply because of the discrepancies between individuals' ability to see.
The charts have ranges from around 5-12 ft. You are not seeing pixel differentiation at 20 ft unless you were bitten by a radioactive spider and fight crime in your spare time. The effects and improvement are all attributable to HDR and probably a few extra nits of brightness. You still haven't said what your control TV was, how you ran the tests side by side, etc.
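If you want to check the numbers yourself, here's a minimal sketch in Python. It assumes the standard 20/20 benchmark of roughly one arcminute of resolvable detail and a 16:9, 3840x2160 panel:

```python
import math

def pixel_arcminutes(diagonal_in, distance_ft, h_px=3840):
    """Angular size of one pixel, in arcminutes, for a 16:9 panel."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)  # panel width in inches
    pitch_in = width_in / h_px                       # pixel pitch in inches
    angle_rad = pitch_in / (distance_ft * 12)        # small-angle approximation
    return math.degrees(angle_rad) * 60

# A 50" 4K panel from 20 ft: ~0.16 arcmin per pixel, far below the
# ~1 arcmin a 20/20 eye resolves (and still below it even if your
# eyesight is twice as sharp) -- adjacent black and white pixels
# blur together into grey, exactly as the checkerboard test showed.
print(round(pixel_arcminutes(50, 20), 2))
```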
 
The DVLA doesn't care about stereo vision at all :).

The 'need' for image quality is interesting: all of us here can, in literally a split second, understand the dangers and the road layout from this one single still image, which is about the same resolution as what a Tesla camera produces.

Now add in wide-angle and zoom for the front, plus the side and rear cameras, all with a live video feed... the problem is not resolution but how much CPU time it takes to combine and compute on all that data.

At the last 'FSD' day, the suggestion was that the current AP3.0 CPU isn't actually using the full resolution of the current camera suite due to bandwidth issues. In fact, it seems that without DOJO operational, Tesla hasn't even got the CPU capability to 'train' the NNs on the combined video inputs.

The barrier to FSD isn't sensory, it's CPU and code. The visual cortex takes up around 20% of the brain's total surface area, and that's without the 'decision-making' bit taken into account. Tesla/Google/Mobileye are trying to replicate literally millions of years of evolution in a tiny bit of silicon that can be mass-produced at an affordable level.

If we do manage to achieve the AI needed for FSD, we (humans) really do deserve a pat on the back, before the AI takes over without us realising :).

[Image: hazard-test.png]
Many drivers will get frustrated driving slowly behind this bike. They would then attempt to overtake without offering much width, and then instantly cut back into the lane without even seeing the cyclist. They would pay bugger all attention to the school sign while overtaking.
AI must replace humans to save humanity.
 
Many drivers will get frustrated driving slowly behind this bike. They would then attempt to overtake without offering much width, and then instantly cut back into the lane without even seeing the cyclist. They would pay bugger all attention to the school sign while overtaking.
AI must replace humans to save humanity.
Hopefully AI chooses the greater good and just mows down the cyclist, who doesn't care about anyone else - then it's not our problem but Elon's, and he'll be living on Mars without an extradition treaty.
 
Hopefully AI chooses the greater good and just mows down the cyclist, who doesn't care about anyone else - then it's not our problem but Elon's, and he'll be living on Mars without an extradition treaty.
Very funny. Try learning instead about AI and how such systems are built, tested, validated, and improved. So many wonderful courses are available on many platforms.
 
  • They added traffic light recognition; people with FSD will stop for them (I think?) and go on green. We all apparently have the chime on green now (not experienced it yet).

Didn't see anyone mention this in my brief scan through of the thread: In the UK, FSD will stop for traffic lights - even if they are green. The driver must press the accelerator or pull down the gear stalk to confirm that it's OK to proceed. It's pretty useless at the moment.

On the general subject of self-driving in the UK: I'd love to see some progress on FSD beta over here, but I'd be happy to just have automatic lane changes etc. along with the newer visualisations.
 
Didn't see anyone mention this in my brief scan through of the thread: In the UK, FSD will stop for traffic lights - even if they are green. The driver must press the accelerator or pull down the gear stalk to confirm that it's OK to proceed. It's pretty useless at the moment.

On the general subject of self-driving in the UK: I'd love to see some progress on FSD beta over here, but I'd be happy to just have automatic lane changes etc. along with the newer visualisations.
Yes, FSD does stop at traffic lights unless you press the accelerator to ignore them.
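For what it's worth, the rule as described boils down to something like this toy sketch (hypothetical Python, not Tesla's actual code):

```python
from enum import Enum, auto

class Light(Enum):
    RED = auto()
    GREEN = auto()

def may_proceed(light: Light, driver_confirmed: bool) -> bool:
    """Toy model of the current UK behaviour: the car stops for every
    detected traffic light, even a green one, until the driver confirms
    via an accelerator tap or a pull on the gear stalk."""
    if light is Light.RED:
        return False            # always hold on red
    return driver_confirmed     # green still needs explicit confirmation

assert may_proceed(Light.GREEN, driver_confirmed=False) is False
assert may_proceed(Light.GREEN, driver_confirmed=True) is True
```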
 
Many drivers will get frustrated driving slowly behind this bike. They would then attempt to overtake without offering much width, and then instantly cut back into the lane without even seeing the cyclist. They would pay bugger all attention to the school sign while overtaking.
AI must replace humans to save humanity.
There's a perfectly good cycle lane on the left. Capital punishment for the guy cycling in the road is fine by me. If you can't teach them - cull them.
 
There's a perfectly good cycle lane on the left. Capital punishment for the guy cycling in the road is fine by me. If you can't teach them - cull them.
Cyclists and pedestrians can use and cross the roads at any time. The Highway Code has been updated to give them priority, so cars must wait and travel slowly, irrespective of how inconvenient this might be.