
Trolley Problem with FSD 12?

Has anyone, or any big YouTubers, run a trolley problem with FSD Beta?

Haven't found anything, and I'm sure it would be a nice viral video if someone made it (it would probably cost a lot and require a stunt driver, etc., to make it realistic).

Really curious how impossible scenarios are programmed with FSD neural nets, whether there is some sort of "governing ethics" code actually running, or whether there is nothing running and FSD Beta just gives up.

Some scenarios with two unavoidable-crash options, off the top of my head:

- one person vs. multiple people
- car vs. human
- human vs. animal
- car vs. bus
etc.
 
I'm of the opinion that there are no, and will not be, trolley-problem scenarios in any FSD implementation (at least not for a very long time). The way the system works is simply to avoid obstacles. Period. It will not be weighing one obstacle against another. It will also be scanning the surroundings and maneuvering to avoid those obstacles long before it becomes an either/or situation. If the other vehicles/obstacles in the surroundings do something such that hitting something is inevitable, I believe it will simply do everything it can to avoid the nearest object, if possible.
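
To illustrate that point with a toy sketch (purely hypothetical, not anything from Tesla's actual planner): a planner that scores candidate paths only by clearance from obstacles has nowhere to encode an ethical ranking; every obstacle is just something to stay away from.

```python
# Toy illustration only -- NOT Tesla's planner. The point is that a planner
# which scores trajectories purely by clearance has no notion of "who" the
# obstacle is, so there is no ethical trade-off being computed anywhere.

def clearance(trajectory, obstacles):
    """Smallest distance between any point on the trajectory and any obstacle."""
    return min(
        ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5
        for (px, py) in trajectory
        for (ox, oy) in obstacles
    )

def pick_trajectory(candidates, obstacles):
    # Every obstacle is just a point to keep away from; the only "choice" is
    # which candidate keeps the car farthest from the nearest obstacle.
    return max(candidates, key=lambda traj: clearance(traj, obstacles))
```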
 
Umm... Since the driver is still supposed to be fully in control, wouldn't the answer be "the car tells the human driver to take over" to every scenario you lay out, in this version and every version of a Level 2 driver-assistance system?

There should be no "impossible scenarios" programmed for, other than "defer to the human driver".
 
I'm of the opinion that there are no, and will not be, trolley-problem scenarios in any FSD implementation (at least not for a very long time). The way the system works is simply to avoid obstacles. Period. It will not be weighing one obstacle against another. It will also be scanning the surroundings and maneuvering to avoid those obstacles long before it becomes an either/or situation. If the other vehicles/obstacles in the surroundings do something such that hitting something is inevitable, I believe it will simply do everything it can to avoid the nearest object, if possible.
Not all obstacles are treated (or recognized) equally. My Model Y on v12.3.4 recently ran over an easily-avoidable 5-gallon paint bucket (thankfully empty). If it had been a person of similar size, I'm sure it would have recognized it and tried as hard as possible to avoid or minimize a collision.

With an E2E trained network, unless it is specifically trained on trolley problems (with the "right answer" predetermined by the training clips), it may not be predictable what the final system will do if faced with one. It can't be completely predictable anyway, since every real-world scenario is at least slightly different from anything ever seen before. And Tesla may not want to put their thumb directly on the trolley-problem scale, for obvious legal reasons. They may have concluded it's better to simply train the system to avoid all collisions, leave trolley problems out of the training set, and let it be uncertain what will happen in a no-win scenario. (And hopefully never have to find out.)
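
To make the "unless it's in the training clips" point concrete, here is a minimal behavior-cloning sketch (PyTorch-style; the network shape, feature size, and names are all made up for illustration). An end-to-end net only learns to imitate whatever driving is in the training set; if no trolley-style clips are included, its behavior in such a scenario is whatever it happens to generalize to, not an explicit ethical rule.

```python
# Minimal, hypothetical behavior-cloning sketch -- not Tesla's architecture.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))  # -> [steer, accel]
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(camera_features, human_controls):
    # camera_features: (batch, 512) embedding of a training clip (hypothetical)
    # human_controls:  (batch, 2) steering/accel the human driver actually used
    predicted = policy(camera_features)
    loss = loss_fn(predicted, human_controls)   # imitate the demonstrated driving
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```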

In any case, it's a moot point until the system reaches L4 autonomy. For L2/L3, the correct system behavior would be to panic (red hands on steering wheel of death) and hand back control to the driver.
 
Not all obstacles are treated (or recognized) equally. My Model Y on v12.3.4 recently ran over an easily-avoidable 5-gallon paint bucket (thankfully empty). If it had been a person of similar size, I'm sure it would have recognized it and tried as hard as possible to avoid or minimize a collision.
You're sure? Does that mean you've tried it? ;)

With a 5 gallon bucket sized human?

I have to say that I'm a bit skeptical it would recognize a 5-gallon bucket sized human as a human (I'm even skeptical that it recognizes it as an object at all), but even if it did, I don't think this rises to "trolley problem" type levels. It's either going to take evasive action or it's not. It's not making an ethical choice between two alternatives.


With an E2E trained network...
When we get to an E2E network, perhaps we can revisit this discussion, but I think we are a long way off from a truly E2E network, and even then, the training methods likely aren't sophisticated enough to build in any deliberate bias toward one alternative versus another, trolley-problem style (which I think is somewhat along the same lines as what you were saying).
 
It's not making an ethical choice between two alternatives.
As ever, it boils down to training. Show the system a bunch of simulated scenarios of the trolley problem and it'll do its best to match those scenarios as trained when they happen in real life. Assuming that it is trained "ethically", it will respond "ethically". Give it military training and it will hit whoever presents the greatest threat to it. Or exceeds some threat threshold. Whatever.

Ben is observing that the car's perception of the situation will have a role to play as well. There may be three people partially occluded and one person clearly visible, so the car's limited abilities might lead it to steer away from the one person only to end up hitting three. This applies to human drivers too, of course, but these neural networks have to do a lot with quite limited resources.

All that said, I doubt training for the trolley problem will come up for a while. Well, unless such an event actually happens and somebody in a position of authority takes the world of vehicle autonomy to task over it. They could well outlaw autonomy systems that cannot handle that class of problem according to NHTSA rules and guidelines, whatever they turn out to be.
 
You're sure? Does that mean you've tried it? ;)

With a 5 gallon bucket sized human?

I have to say that I'm a bit skeptical it would recognize a 5-gallon bucket sized human as a human (I'm even skeptical that it recognizes it as an object at all), but even if it did, I don't think this rises to "trolley problem" type levels. It's either going to take evasive action or it's not. It's not making an ethical choice between two alternatives.

Different shape, but probably a similar number of pixels on the camera feed. (Kids are taller but skinnier.) My point is that the network has probably already been aggressively trained to avoid hitting humans specifically. (Or if it hasn't, it should be.) E.g., if the choice is to hit a human or hit a human-sized tree, hit the tree. That's not a trolley problem per se, but if anything is going to be special-cased in the training, that should be.
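
A hypothetical sketch of what that kind of special-casing could look like (the classes and weights are invented, not Tesla's actual numbers): the planner's collision cost is weighted by object class, so hitting a pedestrian costs vastly more than hitting a tree or a bucket. That's a safety ranking, not a trolley-problem choice between people.

```python
# Hypothetical class-weighted collision cost -- illustration only.
COLLISION_WEIGHT = {
    "pedestrian": 1_000_000.0,
    "vehicle":       10_000.0,
    "tree":           1_000.0,
    "debris":            10.0,   # e.g. an empty paint bucket
}

def collision_cost(predicted_hits):
    """predicted_hits: list of (object_class, probability_of_collision)."""
    return sum(COLLISION_WEIGHT.get(cls, 100.0) * p for cls, p in predicted_hits)
```
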
When we get to an E2E network, perhaps we can revisit this discussion, but I think we are a long way off from a truly E2E network, and even then, the sophistication of the training methods is not such that there will likely not be any bias towards training it towards one alternative versus another trolley problem style (which I think is somewhat along the same lines as what you were saying).
FSD v12 is already E2E. (Granted, the video I posted of the paint bucket was on highway, so in that instance it was running the v11 stack.) In what way do you think Tesla is still "a long way off from a truly E2E network"?
 
In what way do you think Tesla is still "a long way off from a truly E2E network"?
Let me rephrase it: I think Tesla is still a long way off from a usable, truly E2E network. There are too many edge cases (chasing the 9's) for training a NN alone to handle before we get to L4/L5 autonomy.

This is not conclusive by any means, but last month I was on a trip in rural northern New York. New York (like some other states) has a state speed limit that applies wherever the speed limit is not otherwise indicated. So you tend to see signs that say something like "END SPEED LIMIT 30" when you come out of a village. You are presumed to know that that means the speed limit becomes 55, even though it's not explicitly stated. Prior to FSD 12, this of course meant that for the 14 or so miles of back road between towns that I travel on, the car thought the speed limit was 30 (even though I'm pretty sure the map data would indicate a 55 limit, the car started relying on what it "saw" instead).

So on this last trip, with FSD 12, I thought I would try out the auto speed setting that supposedly chooses a "natural" speed. Sure enough on some roads, the car started going 60 mph or so even though the speed limit sign on the display showed 30. In other areas, on the same kind of straight, open roads, it went much slower. My suspicion is that Tesla has been collecting average speeds of vehicles (either directly, or via online traffic tools) and it's using those speeds as the "natural" speeds, rather than letting the NN figure it out.

I know that's not proof, but that certainly is the sense I got observing its behavior, and I think it actually makes sense to do it that way.
 
The trolley problem is an academic exercise, not a real problem in self-driving cars. Nobody pays it any attention in the industry, but the public absolutely loves it, fascinated by the idea of robots deciding who lives and who dies. In the very unlikely event such a situation came up, a car would just do what the law says and follow its right of way. Or do something unpredictable if planning is purely neural network.
 
The trolley problem is an academic exercise, not a real problem in self-driving cars. Nobody pays it any attention in the industry, but the public absolutely loves it, fascinated by the idea of robots deciding who lives and who dies. In the very unlikely event such a situation came up, a car would just do what the law says and follow its right of way. Or do something unpredictable if planning is purely neural network.
Don't see how it's not a real problem. These types of accidents happen every day (and I don't mean the hypothetical of lining up 1 vs. 3 people in the middle of the road)... let's say common accidents like oncoming traffic swerving into your lane while there are pedestrians on the side of the road.

What does the car do? Hit the oncoming car or swerve into the pedestrians?
 
What does the car do? Hit the oncoming car or swerve into the pedestrians?
I said that already... Hand over control to the human driver who is supposed to be paying attention, ready to take over at all times, because this is not a level 3+ system.

Not sure why you are ignoring that, other than that you just want to have a hypothetical discussion.
 
Let me rephrase it: I think Tesla is still a long way off from a usable, truly E2E network. There are too many edge cases (chasing the 9's) for training a NN alone to handle before we get to L4/L5 autonomy.
Completely agreed. L4/L5 FSD is still several years off, and almost certainly unachievable to the required degree of reliability with current hardware.
This is not conclusive by any means, but last month I was on a trip in rural northern New York. New York (like some other states) has a state speed limit that applies wherever the speed limit is not otherwise indicated. So you tend to see signs that say something like "END SPEED LIMIT 30" when you come out of a village. You are presumed to know that that means the speed limit becomes 55, even though it's not explicitly stated. Prior to FSD 12, this of course meant that for the 14 or so miles of back road between towns that I travel on, the car thought the speed limit was 30 (even though I'm pretty sure the map data would indicate a 55 limit, the car started relying on what it "saw" instead).

So on this last trip, with FSD 12, I thought I would try out the auto speed setting that supposedly chooses a "natural" speed. Sure enough on some roads, the car started going 60 mph or so even though the speed limit sign on the display showed 30. In other areas, on the same kind of straight, open roads, it went much slower. My suspicion is that Tesla has been collecting average speeds of vehicles (either directly, or via online traffic tools) and it's using those speeds as the "natural" speeds, rather than letting the NN figure it out.

I know that's not proof, but that certainly is the sense I got observing its behavior, and I think it actually makes sense to do it that way.
There's always going to be a multi-way tension between pre-existing map data, cached databases of other drivers' previous behavior, observations of surrounding drivers' current behavior, and prima facie road conditions. The neural net should be fed all of these data points, and then decide for itself what to do.
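
As a toy sketch of that multi-way tension (the sources and the blending rule here are invented for illustration, not how FSD actually does it): several independent speed estimates get reconciled into one target, capped by what the car can safely see.

```python
# Hypothetical speed-source fusion -- illustration only.
def target_speed(map_limit, last_seen_sign, fleet_average, surrounding_traffic, visibility_limit):
    candidates = [v for v in (map_limit, last_seen_sign, fleet_average, surrounding_traffic) if v is not None]
    if not candidates:
        return visibility_limit
    # Take a rough median of the available sources so one bad input (e.g. a
    # stale "30" sign on a 55 road) doesn't dominate, then never exceed what
    # the car can safely see/stop within.
    blended = sorted(candidates)[len(candidates) // 2]
    return min(blended, visibility_limit)
```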

It's also going to need a longer-term memory than it currently has, so that if it sees e.g. a temporary "25mph" sign for a construction zone, then passes a bunch of active construction, then reaches the end of the construction zone, it will realize it can speed back up if the construction crew forgot to place an "End 25" sign at the end of the zone, which they often do. (There was a spot like this on PCH a while back for a small construction zone, and FSD assumed the speed was 25mph for the next ten miles.) Being trained exclusively on short video clips is not going to solve this sort of thing.
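
A hypothetical sketch of that longer-term memory (entirely made up, just to show the idea): a tiny state machine that remembers a construction-zone limit until there is positive evidence the zone ended, rather than holding it for the next ten miles.

```python
class SpeedLimitMemory:
    """Hypothetical: remember a construction-zone limit until the zone ends."""

    def __init__(self, default_limit):
        self.default_limit = default_limit
        self.zone_limit = None            # active construction-zone limit, if any
        self.miles_since_work_seen = 0.0

    def update(self, miles_driven, sign_limit=None, end_sign=False, work_activity=False):
        self.miles_since_work_seen = 0.0 if work_activity else self.miles_since_work_seen + miles_driven

        if sign_limit is not None and work_activity:
            self.zone_limit = sign_limit  # temporary limit tied to the work zone

        # Drop the zone limit on an explicit "End" sign, or after ~2 miles with
        # no cones/crew in sight (the "crew forgot the End 25 sign" case).
        if self.zone_limit is not None and (end_sign or self.miles_since_work_seen > 2.0):
            self.zone_limit = None

        return self.zone_limit if self.zone_limit is not None else self.default_limit
```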
 
The trolley problem always seems ridiculous because it completely ignores the third option, which is the one any self-driving car is going to take: brake as hard as possible without deviating from the lane the car is in. FSD seems to be programmed not to outdrive its sight distance, so it should always be able to come to a stop before hitting any stationary object. That's why it "phantom brakes" going over steep hills and in other situations where it can't see clearly.
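
The "don't outdrive your sight distance" rule is just stopping-distance physics. A back-of-the-envelope version (ignoring reaction time and grade; the real system is presumably far more sophisticated): with sight distance d and braking deceleration a, the car can stop in time only if v ≤ √(2·a·d).

```python
import math

def max_safe_speed(sight_distance_m, decel_mps2=6.0):
    # v^2 = 2*a*d  =>  v = sqrt(2*a*d); 6 m/s^2 is a rough hard-braking figure.
    return math.sqrt(2 * decel_mps2 * sight_distance_m)

# Example: cresting a hill with ~50 m of visible road ahead allows roughly
# 24.5 m/s (~55 mph) if the car can brake at 6 m/s^2; less sight distance,
# less speed -- hence the slowdowns (or "phantom braking") near blind crests.
print(max_safe_speed(50.0))  # ~24.5
```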
 
Don't see how it's not a real problem. These types of accidents happen every day (and I don't mean the hypothetical of lining up 1 vs. 3 people in the middle of the road)... let's say common accidents like oncoming traffic swerving into your lane while there are pedestrians on the side of the road.

What does the car do? Hit the oncoming car or swerve into the pedestrians?
This is a flawed question, as if those are the (only) two possibilities.

It will probably do neither.

First of all, the oncoming car is going to hit you, not the other way around. Yes, there will be a collision, but the act of "hitting" would be squarely on the other vehicle, not yours.

Your car will be doing its best to avoid all obstacles and obey traffic laws to the best of its ability, probably starting with slamming on the brakes. The likely outcome is that the other vehicle will hit you (unless it successfully takes evasive action).

I simply cannot see any autonomous vehicle making moral decisions (which is essentially what the trolley problem is about). It will (should) have guardrails in place to obey all traffic laws (other than in instances where perhaps it has to "legally" cross the yellow line to avoid a parked car or construction, but even then it will not do so unless the path is clear). I do not believe it will swerve off the road/lane to avoid an obstacle if there is another visible obstacle in its path. To do so would violate the law, make the manufacturer liable for any damage/injuries that result, and land the autonomous-vehicle creator in a heap of trouble. Believe me, before any fully autonomous systems are in place, there will be very clear regulations that cover this.
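
A toy sketch of the "guardrails" idea (entirely hypothetical structure, not anything from FSD): candidate maneuvers that violate hard constraints are discarded outright, and only the remainder is ranked by collision risk.

```python
def choose_maneuver(candidates, is_legal, collision_risk):
    # Hard constraints first: throw away anything that leaves the lane
    # illegally or crosses into occupied space.
    legal = [m for m in candidates if is_legal(m)]
    if not legal:
        # No legal option left: fall back to hardest braking in the current lane.
        return "max_brake_in_lane"
    # Among legal options, pick the one with the lowest predicted collision risk.
    return min(legal, key=collision_risk)
```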

Perhaps a better/more realistic scenario, which is not really the trolley problem, is this: will the car swerve off the road to avoid being hit even though it may risk damaging the vehicle (like scraping along a guardrail, curbing the tire, or something).
 
Has anyone, or any big YouTubers, run a trolley problem with FSD Beta?

Haven't found anything, and I'm sure it would be a nice viral video if someone made it (it would probably cost a lot and require a stunt driver, etc., to make it realistic).

Really curious how impossible scenarios are programmed with FSD neural nets, whether there is some sort of "governing ethics" code actually running, or whether there is nothing running and FSD Beta just gives up.

Some scenarios with two unavoidable-crash options, off the top of my head:

- one person vs. multiple people
- car vs. human
- human vs. animal
- car vs. bus
etc.
You do know that cars, unlike the trolley in the trolley problem, have brakes?