Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Why AP 2.0 Won't Be Here Soon, and It Won't Be What You Think It Is

Exactly.. so the inputs *are* different.

But again, the fleet learning is to learn slowly changing or static geographic attributes of each road... not the pickup truck that has things falling off the back, or the SUV driver checking their Snapchat who just swerved into my lane.




Then one or more of the inputs is random. Otherwise, computers wouldn't be good at what they are designed and built to do.

I don't see why those are difficult situations, assuming the car (a future car, not the current model which doesn't have a good 360 view) has good situational awareness. It will either brake or swerve. Look, video game AI can do this kind of stuff. Given good knowledge of the surroundings, the computer can evaluate 100 different escape routes, rank them, and start executing the best one before you notice the first barrel hit the pavement.
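The ranking idea above can be sketched in a few lines. This is a toy illustration, not Tesla's planner: the obstacle positions, candidate maneuvers, and risk metric (inverse closest-approach distance) are all invented for the example.

```python
import math

def risk(maneuver, obstacles):
    """Score a maneuver: lower is safer (inverse of closest-approach distance)."""
    closest = min(
        math.hypot(maneuver["x"] - ox, maneuver["y"] - oy)
        for ox, oy in obstacles
    )
    return 1.0 / (closest + 1e-6)

def best_escape(maneuvers, obstacles):
    """Evaluate every candidate escape route and return the lowest-risk one."""
    return min(maneuvers, key=lambda m: risk(m, obstacles))

obstacles = [(0.0, 10.0)]                        # barrel 10 m ahead in our lane
candidates = [
    {"name": "brake",        "x": 0.0,  "y": 8.5},   # slow, but still close to it
    {"name": "swerve_left",  "x": -3.5, "y": 10.0},  # one lane to the left
    {"name": "swerve_right", "x": 3.5,  "y": 10.0},  # one lane to the right
]
print(best_escape(candidates, obstacles)["name"])    # a swerve clears the barrel
```

A real planner would score predicted trajectories against vehicle dynamics and sensor uncertainty, but the evaluate-rank-execute loop is the same idea.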
 
Guessing of version numbers aside, it reads like the actual control won't happen until there's significant fleet learning, with geotagging of potential false-braking events. I'm speculating this might not happen automatically with 8.0, but only after Tesla verifies it's working correctly.

Where are you getting that from? What Elon posted just said that several AP-equipped Teslas have to drive a route. I wouldn't classify "several" as significant.

And remember they have lots of EAP drivers already using 8.0, so it has likely already completed the whitelisting for a lot of the heavily travelled routes. (So the enhanced AEB should start working in a lot of places "out of the box".)
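For what it's worth, here is one way such a geotagged whitelist *might* work, going only by Elon's public description. The grid resolution, the pass threshold, and every name below are assumptions, not anything from Tesla:

```python
from collections import defaultdict

GRID = 0.0001          # ~11 m of latitude per cell (assumed resolution)
PASSES_NEEDED = 3      # "several" safe passes before whitelisting (assumed)

passes = defaultdict(set)   # grid cell -> set of vehicles that passed safely

def cell(lat, lon):
    """Snap a GPS fix to a coarse grid cell so nearby passes aggregate."""
    return (round(lat / GRID), round(lon / GRID))

def record_safe_pass(vehicle_id, lat, lon):
    """A car drove past a radar target (e.g. an overhead sign) without incident."""
    passes[cell(lat, lon)].add(vehicle_id)

def is_whitelisted(lat, lon):
    """Suppress braking on this target once enough distinct cars passed safely."""
    return len(passes[cell(lat, lon)]) >= PASSES_NEEDED

# Three different cars drive under the same overhead sign without braking:
for vid in ("car_a", "car_b", "car_c"):
    record_safe_pass(vid, 37.423700, -122.088000)

print(is_whitelisted(37.423700, -122.088000))   # True
```

Counting distinct vehicles (rather than raw passes) is what makes a single prankster's repeated hard braking less effective at poisoning a cell.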
 

I'm going to screw all this whitelisting up by driving my Tesla through a bunch of signs every day.
 
But all of that is just so many dry leaves blowing in the breeze next to the real question: now that you have it, do you love it? Can you imagine a scenario in which waiting for something better/different would have been the right choice?
No, I can't, and I am very happy with my S. I purchased with the attitude that, no matter what comes next, I will not regret my decision, which was based on the info available at the time.
Reading these forums, though, I know many people are still waiting for 2.0 or the next big thing. That's fine, but if that is the only reason, they are missing out on a great car that turns heads everywhere I go.
 
I'm going to screw all this whitelisting up by driving my Tesla through a bunch of signs every day.
Once the M3 comes out (and punk kids can afford them), this suggests a new form of mass monkeywrenching: people just drive around randomly braking (but repeatedly in the same places), running stop signs and lights, and other behaviors to totally mis-train fleet learning. A Clockwork Orange meets Tesla ;)
 
So are you volunteering to prove that humans DON'T decide the same way given the same inputs (the same conditions as computers)?

I mean come on, really? I think it's trivial to prove that humans are fallible and can do completely different things given the same inputs. I'm not sure what your point is here.

It's also impossible to set up controls that limit humans to the same set of inputs the way you can with computers, which have a limited number of A/D converters for input rather than the effectively unlimited analog inputs humans have.

So the premise doesn't even work.
 
Where are you getting that from? What Elon posted just said the several AP equipped Teslas have to drive a route. I wouldn't classify several as significant.


I'm getting this from every Autopilot-capable route in the country having to be driven "several" times. That's a significant amount of fleet learning.

Sure, Southern California might get great data pretty quickly but there are a lot of roads to cover.
 
What if you braked hard before an overhead sign and then "steered around it"? What if three owners did that on purpose? What happens to the 4th? A random sudden braking?

Then, as a group, you might be able to mis-train the system. However, if the 4th person gets the AEB warning and just steps on the accelerator and goes through without an accident, the system will probably nullify the previous training and go back to learning mode on that road. I think that if Tesla catches people doing that, they should disable their AP.

Also, it isn't every overhead sign; it is an overhead sign that looks like it is in your path, one the car would collide with. Think of an overhead sign on a bridge just past a hill you are cresting.
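The nullify-and-relearn behavior described above is speculation about how Tesla might handle it, not a documented mechanism, but it could be sketched like this (the data model is invented for illustration):

```python
# location -> count of hard-braking reports learned from the fleet
braking_events = {}

def report_braking(loc):
    """A car braked hard at this location; record it as training data."""
    braking_events[loc] = braking_events.get(loc, 0) + 1

def driver_override(loc, collision_occurred):
    """Driver accelerated through an AEB warning at this location."""
    if not collision_occurred:
        # The safe pass-through contradicts the learned braking events,
        # so discard the suspect data and drop back into learning mode.
        braking_events.pop(loc, None)

sign = "overhead_sign_42"          # hypothetical location id
for _ in range(3):                 # three pranksters brake on purpose
    report_braking(sign)
driver_override(sign, collision_occurred=False)   # 4th driver goes through
print(braking_events.get(sign, 0))                # 0 -> back to learning mode
```

One safe override wiping the counter is the aggressive version; a real system would more plausibly weight evidence from both sides, but the effect on deliberate mis-training is the same.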
 
Once the M3 comes out (and punk kids can afford them), this suggests a new form of mass monkeywrenching: people just drive around randomly braking (but repeatedly in the same places), running stop signs and lights, and other behaviors to totally mis-train fleet learning. A Clockwork Orange meets Tesla ;)
Sweet. However, since Tesla is recording these, it will by definition likely lead to the 'bad boys' being banned, probably for an extended time.
 
I mean come on, really? I think it's trivial to prove that humans are fallible and can do completely different things given the same inputs.

EXCELLENT! I look forward to your trivial proof that humans make different decisions based on the same exact input.

Since this question has been unanswered for the 2500 years since it was first asked, I am not sanguine.

Thank you kindly.
 
Since this question has been unanswered for the 2500 years since it was first asked, I am not sanguine.

When was it first asked?

And you seem to have ignored my comment that the entire premise is flawed, since it's impossible to give humans the "exact same inputs". People change their minds based on the time of day, their mood, what they had for lunch, and whether they had a fight with their spouse.

Take gambling for instance. Limit the "exact same inputs" to the status of the cards, or the dice, or roulette wheel. People make different decisions ALL THE TIME with the "exact same inputs" because they might or might not "feel lucky" or they're "on a hot streak".
 
And you seemed to ignore my comment that the entire premise is flawed since it's impossible to give humans the "exact same inputs". People change their mind on this based on the time of day, their mood, what they had for lunch, and if they had a fight with their spouse.

I ignored it since it supported my case and defeated yours; I didn't think I needed to add anything. If humans are never in a same-input condition, then your claim that they make different decisions given the same inputs has zero empirical evidence, by your own admission.

Take gambling for instance. Limit the "exact same inputs" to the status of the cards, or the dice, or roulette wheel. People make different decisions ALL THE TIME with the "exact same inputs" because they might or might not "feel lucky" or they're "on a hot streak".

'On a hot streak' versus 'not on a hot streak' is exactly what I would consider NOT the same inputs. Having won the last 10 rolls is decidedly not the same input as having lost the last 10 rolls. Why would you think it was? Gambling programs take that into consideration; why wouldn't humans?

Thank you kindly.
 

'on a hot streak' versus 'not on a hot streak' is exactly what I would consider NOT the same inputs. Having won the last 10 rolls, is decidedly not the same input as having lost the last 10 rolls. Why would you think it was? Gambling programs take that into consideration why wouldn't humans?


I would hope gambling programs (and other predictive models based on independent event probabilities) wouldn't take that into consideration, because that is the gambler's fallacy (see Wikipedia's "Gambler's fallacy" article).
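A quick Monte Carlo simulation makes the point: for independent fair-coin events, the win probability after a streak is the same as the overall win probability. (Toy example; the 0.5 win chance and streak length of 3 are arbitrary choices.)

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(200_000)]   # True = win

# Collect only the outcomes that immediately follow three wins in a row,
# i.e. the "on a hot streak" situations.
after_streak = [
    flips[i]
    for i in range(3, len(flips))
    if flips[i - 3] and flips[i - 2] and flips[i - 1]
]

p_overall = sum(flips) / len(flips)
p_after_streak = sum(after_streak) / len(after_streak)
print(round(p_overall, 3), round(p_after_streak, 3))   # both ~0.5
```

The streak is real as a *description of the past*, but for independent events it carries zero information about the next outcome, which is why a model that conditions on it gains nothing.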