FSD Beta 10.69

Right you are, my bad! We have indeed discussed in the past the differences between a natural and an artificially applied torque. And I mostly agree with your thoughts, just exploring some differences...

We always did feel the same
We just saw it from a different point of view
Tangled up in blue........
Dead to Dylan.
The following is a quote from a movie; I can't remember the name of it, and it's actually a misquote. The movie was about a guy who died and all his friends got together in Michigan to bury him. Glenn Close, that tall guy from Jurassic Park, and others… my favorite character was the coke head.

I love you doc, but there’s been a lot of good music in the past 50 years. 😀

Edit: The Supertramp ref in good time. Hats tipped for Christine
 
@PACEMD years ago, I saw a sample of the log that service needed to track down an issue in my car. I think it was remote service, and the guy let me take a closer look. In that log, I saw torque-related entries, something like "insufficient torque applied to the wheel" or words to that effect. And I noticed that there were a lot of those, despite me generally not getting many nags while APing on the highway. It seems like a measurement is taken every second. So let's say the nag is triggered if "insufficient torque" is applied to the wheel for 30 seconds straight. Then as long as at least one of those once-per-second measurements within the 30-second window registers sufficient torque, I avoid the nag.
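(Just to make that guess concrete, here's a rough sketch of what the hypothesized logic could look like; the one-second sampling, the 30-second window, and the torque threshold are all assumptions from the paragraph above, not anything pulled from Tesla's firmware.)

```python
# Rough sketch of the guessed-at nag logic above: torque is sampled roughly
# once per second, and the nag only fires if EVERY sample in a 30-second
# window is below the "sufficient torque" threshold. All numbers here are
# assumptions for illustration, not values taken from Tesla firmware or logs.

from collections import deque

TORQUE_THRESHOLD_NM = 0.1   # assumed minimum torque that counts as "hands on"
WINDOW_SECONDS = 30         # assumed nag window length

class NagMonitor:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW_SECONDS)  # one reading per second

    def add_sample(self, torque_nm: float) -> bool:
        """Record a once-per-second torque reading; return True if a nag would fire."""
        self.samples.append(abs(torque_nm) >= TORQUE_THRESHOLD_NM)
        # Nag only when the window is full and no sample showed sufficient torque.
        return len(self.samples) == WINDOW_SECONDS and not any(self.samples)
```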

But this is proof that despite my thinking that I'm always applying constant torque, I clearly am lapsing here and there for whatever reason. I have no idea how sensitive/precise the torque sensor is, but I can imagine that if it resolves to a tenth of a Nm, there's going to be a LOT of fluctuation when a human is applying the torque. However, a wheel weight would likely not jiggle the tenths digit much at all. It seems pretty trivial for Tesla to find a threshold to separate human from weight.

The best way for a human to attempt to apply constant torque to anything spinny is to try to lock the hand/arm, or anchor it by bracing it against an immovable object, like the side of the door, or a knee pinned to the side of the door. But in a moving car, that ends up not working: road vibrations and bumps create torque variation precisely because you've essentially prevented the wheel from moving at all. You can even imagine a situation where a hard enough bump causes a disengagement if the wheel is held rigid and not allowed to absorb some of that shock.

Contrast that with a wheel weight. The wheel is still somewhat free to spin and absorb shocks and bumps, which will smooth out the torque line. Bottom line is it's quite impossible for a human to simulate the type of torque that a weight would produce, and it's really not relevant whether the human-produced torque is constant or not; it will have a different signature and therefore Tesla can distinguish between human and weight.
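(Purely to illustrate that "signature" argument, here's a toy sketch of how a torque trace that is "too steady" could be separated from a human hand. The window, the variability threshold, and the minimum mean torque are invented numbers; nothing here is known about Tesla's actual detection.)

```python
# Toy illustration of the signature argument: a static wheel weight produces a
# nonzero torque that barely fluctuates sample to sample, while a human hand
# (especially one riding over bumps) wobbles far more. Both thresholds below
# are invented purely for illustration.

import statistics

MIN_MEAN_TORQUE_NM = 0.05    # assumed: something is clearly pulling on the wheel
MAX_WEIGHT_STDEV_NM = 0.02   # assumed: variability this low looks "too steady"

def looks_like_static_weight(torque_samples_nm: list[float]) -> bool:
    """Guess whether a window of torque readings came from a hung weight."""
    if len(torque_samples_nm) < 2:
        return False
    mean = statistics.fmean(torque_samples_nm)
    stdev = statistics.stdev(torque_samples_nm)
    return abs(mean) > MIN_MEAN_TORQUE_NM and stdev < MAX_WEIGHT_STDEV_NM
```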
 

Yes I guess I have been saying “constant” torque, but that is really not what is happening exactly.

The point is that the signature of a weight is completely different (for the reasons you mention, etc.), and can be easily distinguished if they can measure the torque applied to the wheel.
 
The fun thing is that wheel weight users probably supplied a lot of the test data used by Tesla to hone the weight detection algorithm.
You're right about that. If they are already detecting the devices, why doesn't Tesla just add a new alert saying "defeat device detected, remove it immediately to resume use of Autopilot/FSD"? Are they worried about false positives?
 
They already tell you to keep your hands on the wheel every single time you engage AP/NOA/FSDb. Why should they cater to people who refuse to follow simple instructions?
 
As a philosophical question that also ends up being a very practical regulatory question: to what extent is a carmaker obligated to prevent a feature from being misused, and in the event of such misuse, should the carmaker be (and is it) at all liable? If they know of such misuse, does that change the equation?

A year or so ago, Consumer Reports wrote an article critical of Tesla because they could defeat the Autopilot safety mechanisms: they fastened the seatbelt, put a weight in the seat and hung a weight from the steering wheel, then complained that the safeguards were too easy to circumvent. The problem is, they went to a significant amount of trouble to do so and cannot by any stretch claim that they did not know what they were doing was wrong or potentially dangerous. As such, I fail to see how they could claim it was Tesla's fault for not designing a system which can't be circumvented. In my eyes, such efforts put the fault and liability squarely on the user.

As an example, I'll raise the issue of speeding. Virtually all cars built today can go in excess of 100 MPH. There are virtually no places in the U.S. where that is allowed. As we know, a GPS system is also fairly easy to implement. Speeding kills. That being the case, shouldn't carmakers be required to put a limiter on the car to prevent people from going more than 80 MPH? Or to have a GPS-linked limiting system so they can't speed on residential streets? Clearly they can do it, so it must be their responsibility, not the drivers', to ensure people are driving safely.
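(For what it's worth, the GPS-linked limiter in that hypothetical would be technically trivial. A toy sketch, using the 80 MPH cap from the post and a made-up map-lookup helper; this reflects no real carmaker's system.)

```python
# Toy sketch of the hypothetical GPS-linked speed limiter mentioned above.
# The 80 MPH cap comes from the post; lookup_speed_limit_mph() is a made-up
# stand-in for a real map-data query. Nothing here models a real vehicle.

ABSOLUTE_CAP_MPH = 80

def lookup_speed_limit_mph(lat: float, lon: float) -> int:
    """Hypothetical map lookup; a real system would query map data here."""
    return 25  # pretend every location is a residential street

def allowed_speed_mph(lat: float, lon: float) -> int:
    """Maximum speed the hypothetical limiter would permit at this location."""
    return min(ABSOLUTE_CAP_MPH, lookup_speed_limit_mph(lat, lon))
```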
 
They don't have to cater to anyone. However, like the person that just posted above me, what responsibility do they have to "enforce" that a defeat device not be used? If the safety mechanism is there to make sure that people are using it as intended and people figure out a way to get around it, why do they care? Do they care about bad press or regulatory pressure since the field overall is pretty new?

I don't know of any data about accidents/safety events being directly caused by autopilot/FSD + defeat device or if it is a frequent occurrence.
 
I still feel that Autopilot and other pseudo-autonomous systems are in a class by themselves for needing the best driver monitoring.

Sure, you can defeat cruise control, you can speed, you can defeat breathalyzer interlocks, but you still have to actually be performing some driving actions or you'll crash within seconds.

Defeating autopilot-type systems means your car could in theory drive for hours without you paying any attention to the road at all until your fuel runs out or you initiate a crash, if you're even in the car at all.

There has never been a consumer product quite so potentially dangerous as this before; a responsible monitoring system is definitely required.
 
Don't see how AP is more dangerous than a speeding car with a drunk behind the wheel. Tech exists to prevent both already.
The fact that Tesla are now taking steps to detect and deselect users of defeat devices signals two things to me.
First, they are obviously being much more proactive about people sidestepping the checks.
Second, there is a high likelihood that any accident with AP enabled will be flagged by Tesla as involving a driver willing to bypass safety controls.
Connecting that to the increased legal team hiring and the type of actions now being seen in China, Tesla are signalling they will be less passive with this stuff.
 
Quick reminder with respect to the discussion about what the "correct" way of driving is: for Tesla's FSD team, it's an optimization problem within these 3 dimensions.
[Screenshot from the AI Day presentation]

Source: AI Day YouTube
 
Don't see how AP is more dangerous than a speeding car with a drunk behind the wheel. Tech exists to prevent both already.
The fact that Tesla are now taking steps to detect and deselect users of defeat devices signals two things to me.
First, they are obviously being much more proactive about people sidestepping the checks.
Second, there is a high likelihood that any accident with AP enabled will be flagged by Tesla as involving a driver willing to bypass safety controls.
Connecting that to the increased legal team hiring and the type of actions now being seen in China, Tesla are signalling they will be less passive with this stuff.
Using AP should not really be that dangerous, since using AP implies "driver is responsible for control and is attentive".

Mis-using AP by using defeat devices is the problem.

Kudos to Tesla for reducing the use of defeat devices. It's not perfect, but getting better.

In answer to your first question: a speeding drunk is very likely to get into an accident, but a sleeping, speeding drunk in a defeated-AP car is guaranteed to get into one, at least until AP is door-to-door safe. A defeated-AP car could be used by stupid people to "send" their car somewhere for a TikTok challenge, by kids who are goofing off (true of all cars, but the ease of doing so is lower), by, let's say, bad people to position the car in a certain place for bad reasons, or just by people who think it's OK to be distracted and not pay attention.
 
All of this discussion goes back to what I'm saying: why don't they just immediately stop AP/FSD usage if a defeat device is detected? I'm not sure anyone would use one if it meant they could never use AP/FSD again. It seems like, if they already know with confidence who is using one, they could respond immediately, cut off that usage, and thus avoid any future liability for an accident that happens while someone is using one of these devices.
 
Well, two possible reasons they might not "immediately stop AP/FSD usage if a defeat device is detected":

First, to reduce false positives, they might need to collect enough data to decide whether a defeat device is actually being used.

Second, by not telling you it suspects a defeat device, Tesla makes it harder for you to devise a better one or argue with them about it. In fact, you may continue using the defeat device and get another strike. Why should Tesla make it any easier for drivers who are inclined to try to fool the system?

Of course, neither of these reasons may be what's actually going on.
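(To illustrate the false-positive point: one obvious way to stay conservative is to require many independent suspicious-looking windows before acting on the suspicion. The count below is invented, not Tesla's actual policy.)

```python
# Illustration only: demand a pile of corroborating evidence before concluding
# a defeat device is in use, trading detection speed for fewer false positives.
# The required count is invented, not anything known about Tesla's policy.

SUSPICIOUS_WINDOWS_REQUIRED = 20

def should_flag_defeat_device(window_flags: list[bool]) -> bool:
    """window_flags: True for each torque window that looked like a static weight."""
    return sum(window_flags) >= SUSPICIOUS_WINDOWS_REQUIRED
```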
 
Over the years there have been many postings on whether new releases learn. Not in the traditional AI sense, but something has changed for me.

Starting with the .69 releases, my car would phantom brake from 30 mph down to 2-5 mph every time I drove on Industrial Ave and passed the Lowell Connector exit ramps, regardless of direction. It probably had this behavior on over 50 consecutive drives. It never happened before .69.
On my first 2-3 drives on 10.69.3.1 I had the same behavior. Starting yesterday, it no longer phantom brakes and drives normally, just like it did prior to my first .69 release.

What happened?

Google Maps
 

Attachment: Lowell Connector Exits onto Industrial Ave.png
I wish we had more of an idea of what we're gonna get in December from FSD. Radio silence from Elon on Tesla-related topics makes the car seem like any other car. It's the OTA updates that make the car feel so amazing.

I was just reflecting on the last year of updates. From blind spot camera use, to the leaps FSD has made in comfort and convenience, to more Superchargers everywhere allowing faster and more convenient charging on road trips, I just can't imagine buying into a better car ecosystem at the moment.
 
Over the years there have been many postings on whether new releases learn. Not in the traditional AI sense, but something has changed for me.

Starting with the .69 releases, my car would phantom brake from 30 mph down to 2-5 mph every time I drove on Industrial Ave and passed the Lowell Connector exit ramps, regardless of direction. It probably had this behavior on over 50 consecutive drives. It never happened before .69.
On my first 2-3 drives on 10.69.3.1 I had the same behavior. Starting yesterday, it no longer phantom brakes and drives normally, just like it did prior to my first .69 release.

What happened?

Google Maps
Has there been a recent map update?
 