Tesla now using deep neural net for auto wipers (2019.40)

"these kinds of neural nets need tons of real world data" This is true.

I personally think Tesla's minimalist approach is quite elegant, but it should perhaps do a swipe when you start up, as my old Merc used to do. Or it should perhaps consult the local weather forecast.
 
I am reminded of the space race. The US allegedly spent millions developing a pen that would write in zero gravity. The Russians spent $1 on a pencil.

A $5 rain sensor works on every other car; why on earth spend money on an NN to figure out it is raining?

Not true: both started with pencils and switched to pens, and the pen wasn't funded by NASA.
Fact or Fiction?: NASA Spent Millions to Develop a Pen that Would Write in Space, whereas the Soviet Cosmonauts Used a Pencil

Again, rain sensors only detect in the area immediately in front of them; they do not provide coverage for the tri-camera module. That requires analyzing the actual data stream from the camera.
 
No, it has three forward-facing cameras, but none is focused on the windscreen. It is almost impossible to make out drops with these cameras at that focal length. So the rain NN must learn how things should look and what is "distorted", since it isn't actually seeing the drops.
The wide-angle lens is the one used for detecting rain. Its focal length is very short, like a fisheye lens, giving a huge depth of field, so yes, it can resolve the windscreen fine.
 
I'm encouraged by the suggestion that the wiper software will learn from my experiences. Most of my complaints about the automatic wiper function could be addressed by an option to adjust the sensitivity to my preferences.

Here's the relevant quote from the release notes posted by the OP:
"If automatic wipers is not performing to your preference, any manual adjustment to wiper speed will be captured to further train and improve the network in future software updates."

So my interpretation of what the release notes say is that, while the wiper software will learn from users' manual wiper adjustments, your individual car will not adjust to your individual preferences.

Instead, Tesla will attempt to use fleet-wide data to make fleet-wide changes to the software.
 
The wide-angle lens is the one used for detecting rain. Its focal length is very short, like a fisheye lens, giving a huge depth of field, so yes, it can resolve the windscreen fine.

Problem is that it sees only its local part of the windshield. It's very common for the lower part, where your field of vision is, to be sprayed first before the water creeps up to the camera area.

So it needs to get better at inferring how much of the glass below is covered from where the raindrops come from (i.e. which direction the droplet trails point: if they point upwards, the spray is coming from below and you'd have to assume roughly 10x more coverage).
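
A toy illustration of that idea (my own sketch, with purely made-up numbers and thresholds, nothing to do with how Tesla actually does it): scale the coverage estimate up when the detected droplet trails point upwards.

# Toy sketch of the heuristic above (illustrative only): if droplet trails
# point upwards, assume the spray comes from below the camera's patch of
# glass and scale up the estimated windshield coverage.
import numpy as np

def coverage_estimate(visible_fraction, trail_angles_deg):
    """trail_angles_deg: direction each droplet trail points; 90 = straight up."""
    angles = np.asarray(trail_angles_deg, dtype=float)
    upward = np.mean((angles > 45) & (angles < 135)) if angles.size else 0.0
    multiplier = 1.0 + 9.0 * upward        # up to the "10x more" guess above
    return min(1.0, visible_fraction * multiplier)

# Mostly upward-pointing trails -> assume much more water below the camera.
print(coverage_estimate(0.05, [85, 92, 100, 10]))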
 
Though I am very new to the Tesla Model 3, I do know about neural networks. The training is done on a big computer somewhere. The error-backpropagation algorithm for a deep NN is computationally intensive. But, once trained, the neural network can be deployed on low-processing-power devices, such as our Model 3 computer (a rough sketch of this split follows below).

This assumes the rain in Spain is the same as the rain everywhere else, I suppose.
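
To illustrate the split described above (a toy sketch in plain NumPy, not anything from Tesla's actual pipeline): the expensive part is backpropagation over lots of data, which happens offline; what ships to the car is just the trained weights, and inference is a couple of matrix multiplies per frame.

# Minimal sketch: train a tiny rain/no-rain classifier offline with
# backpropagation, then run cheap inference with the frozen weights,
# as you would on an in-car computer. Entirely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image features (e.g. per-region contrast statistics).
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic "rain" label

# One hidden layer; the weights are what would actually ship to the car.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))           # sigmoid output
    return h, p

# --- Offline training: the computationally heavy part (backprop) ---
lr = 0.1
for _ in range(500):
    h, p = forward(X, W1, W2)
    err = p - y[:, None]                          # dLoss/dLogit for cross-entropy
    grad_W2 = h.T @ err / len(X)
    grad_h = err @ W2.T * (1 - h**2)              # tanh derivative
    grad_W1 = X.T @ grad_h / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# --- On-device inference: just a couple of matrix multiplies per frame ---
_, p = forward(X[:5], W1, W2)
print("rain probability for 5 frames:", p.ravel().round(2))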

That's applying a sledgehammer to a nail. Rain can look different everywhere, but the effect of rain is the same everywhere: reduced focus (read quantitatively as reduced contrast), varying contrast across the screen, etc.

The detection logic is simple; a rough sketch of that kind of check follows below.

Somebody messed up somewhere. If an NN needs terabytes of data to learn what is and isn't rain, either the model selection is poor (the error being backpropagated is just a single rain/no-rain signal), or they used the wrong tool from the get-go.
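
For what it's worth, here is a rough sketch of the kind of simple contrast check being described, assuming a grayscale frame from the camera as a NumPy array. The tile size and thresholds are made-up illustrative numbers, not anything Tesla uses.

# Sketch of the simple contrast heuristic suggested above (not Tesla's
# method): rain on the glass blurs the image, so measure local sharpness
# per tile and flag the frame when too many tiles go soft.
import numpy as np

def tile_sharpness(gray, tile=64):
    """Variance of a Laplacian-like second difference, per tile."""
    g = gray.astype(float)
    lap = (np.abs(np.diff(g, n=2, axis=0))[:, :-2]
           + np.abs(np.diff(g, n=2, axis=1))[:-2, :])
    h, w = lap.shape
    scores = []
    for r in range(0, h - tile, tile):
        for c in range(0, w - tile, tile):
            scores.append(lap[r:r + tile, c:c + tile].var())
    return np.array(scores)

def looks_rainy(gray, soft_thresh=50.0, frac_thresh=0.4):
    # Rainy if a large fraction of tiles have low sharpness (illustrative thresholds).
    scores = tile_sharpness(gray)
    return np.mean(scores < soft_thresh) > frac_thresh

# Usage: frame = grayscale image from the wide-angle camera as a 2D array.
frame = np.random.randint(0, 255, size=(480, 640), dtype=np.uint8)
print(looks_rainy(frame))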
 
There seems to be some value judgement here but perhaps you have access to more information about this than is available on the forum? Could you share it please?

If you as a developer/engineer fall in love with one way of doing things, you tend to apply it to everything, even if other solutions are more efficient. Sometimes it's like hammering a cube through a circular hole.
 
This may well be correct some of the time. On the other hand, using the existing sensors and developing sophisticated algorithms to process their output may fit Tesla's minimalist approach better than adding another sensor would. It has been said that the algorithms are set to learn from user input, but that the improvements are done offline and included in future software updates.
 
It can be done via camera, but deep learning may not be the answer to everything. Maybe some classical computer vision would have solved it just as well, with Hough transforms and whatnot (a toy sketch of that route follows below). But it will be interesting to see how well it performs in 2019.40.

Also, if a rain sensor costs $5, you don't need to sell more than 100-200k cars to break even on the development cost if you solve it with vision. But there are several ways to solve it with vision.
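
Here is a hedged sketch of that classical-CV route using OpenCV's Hough circle transform. Every parameter is a guess that would need tuning on real footage, and the image file name is hypothetical.

# Sketch of a classical alternative (not Tesla's approach): look for roughly
# circular droplet shapes with a Hough circle transform and wipe when enough
# of them show up. Parameters are illustrative guesses.
import cv2

def count_droplets(gray):
    """Return the number of droplet-like circles found in a grayscale frame."""
    blurred = cv2.medianBlur(gray, 5)      # suppress sensor noise
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.5,
        minDist=10,        # droplets can sit close together
        param1=80,         # Canny edge threshold
        param2=25,         # accumulator threshold (lower = more detections)
        minRadius=2,
        maxRadius=15,
    )
    return 0 if circles is None else circles.shape[1]

# Usage: trigger a wipe when enough droplets are visible in a frame.
frame = cv2.imread("windscreen_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if frame is not None and count_droplets(frame) > 20:
    print("wipe")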
 
Deep learning may not be the answer to everything, but the suggestion is that this is what they have chosen for now. As I say, it makes use of the existing sensors, which keeps things simpler. That is their choice again. Perhaps you should consider the implementation to be somewhat of a beta, like some of the other aspects of the car.

It is your right to suggest it should be done another way.
 
With more training we might collectively help the neural network become better than any conventional control system in less ambitious cars. I personally would not rate it less capable than my old Merc or my new Golf. But we are all free to form our own opinions.
 
Or sleet, or volcanic ash etc.? Yes, this is the philosophy behind simple hardware and sophisticated software. We are the beta testers and trainers, though we have to wait for software updates to reap the fruits.

We have one set of eyes on our mammalian heads. They are not the best optical design. At the back is neuronal tissue which grew out along the optic nerve from the brain during embryonic development, and it does the first layers of processing of the signal. It's called the retina. Then the signal goes through other bits of the brain and ends up in the visual cortex, which learns by example, with training starting straight after birth. Simple hardware, sophisticated software. The system ends up, at least the Inuits' allegedly does, being able to distinguish between 12 kinds of snow.
 
We're not questioning how. We're questioning why.

Simple hardware and sophisticated software. But we already have simple hardware and simple software (controls/logic) that works.

I can send a sample of pyrite and a sample of gold to a lab to run X-ray diffraction to tell the two apart. Or I could, you know, jab them with a knife to see which is softer.