
Trolley Problem with FSD 12?

Don't see how it's not a real problem. These types of accidents happen every day (and I don't mean the hypothetical of lining up 1 vs. 3 people in the middle of the road). Let's say a common accident: oncoming traffic swerves into your lane while there are pedestrians on the side of the road.

What does the car do? Hit the oncoming car or swerve into the pedestrians?
For this human driver, the decision is to hit the oncoming car as obliquely as possible. Swerving into pedestrians is not an option, since pedestrians have almost ZERO chance against a car, whereas a car has crumple zones (unless it is a CT), emergency braking systems, and airbags.
 
Don't see how it's not a real problem. These types of accidents happen every day (and I don't mean the hypothetical of lining up 1 vs. 3 people in the middle of the road). Let's say a common accident: oncoming traffic swerves into your lane while there are pedestrians on the side of the road.

What does the car do? Hit the oncoming car or swerve into the pedestrians?

The trolley problem is just a thought exercise in human moral decision-making. The key takeaway from the trolley problem is that humans will often choose inaction rather than make a utility judgment, because action would make them feel more culpable.

Trolley problems are irrelevant to AVs because it's almost certainly not the AV's fault, and the AV can meet the basic expectation of a human driver: brake as quickly as possible.

The good thing about AVs is that they will all have cameras that prove it was the baby's/child's/old woman's/pregnant woman's/nun's fault.
 
I said that already... Hand over control to the human driver who is supposed to be paying attention, ready to take over at all times, because this is not a Level 3+ system.

Not sure why you are ignoring that, other than that you just want to have a hypothetical discussion.
I'm not talking about Tesla's current implementation. I'm talking about what they have promised with FSD for the past 5 years: robotaxis and "drive me from point A to B" with no human control expected.

Clearly this will be a reality sooner rather than later with FSD 12.3/4, so this is not a hypothetical discussion. There will be real-world scenarios with a clear choice between the lesser of two evils, and with millions of robotaxis/FSD cars already on the road, Tesla's Level 4/5 AI WILL kill people.

The question is: will there be a high-level AI making this choice, or, as others have noted, will it simply slam on the brakes and hope for the best like most humans do?
 
The issue with the trolley problem is that it WANTS to place the blame on the driver/decision maker for the deaths. That is so crappy that I feel like the one who came up with it thinks like a female. Turns out it was indeed a female who thought of it and wrote it, and it was publicized by another female. No man would think like this, unless that man is just as wily and always trying to screw someone else.
 
Clearly this will be a reality sooner rather than later with FSD 12.3/4, so this is not a hypothetical discussion. There will be real-world scenarios with a clear choice between the lesser of two evils, and with millions of robotaxis/FSD cars already on the road, Tesla's Level 4/5 AI WILL kill people.
Could you do me a favor and locate a traffic accident where a driver had to choose between the lesser of two lethal evils? Surely that would be a pretty significant story. "Man Swerves To Avoid Baby, Kills Elderly Couple". I'd do it, but it sounds like a tedious chore because I've never heard of such a thing.

Multiple stories would be ideal.

Note that we have to exclude people doing stupid things, like stopping on a highway to save a baby squirrel only to have somebody plow into their car at 60 mph. That's a different trolley.
 
Could you do me a favor and locate a traffic accident where a driver had to choose between the lesser of two lethal evils?

I suppose this happens all the time, although the decision is based less on ethics ("what is the best overall outcome for humanity?") and more on practicality ("swerve or brake? SWERVE OR BRAKE? AAAAAHHHHHH").

Generally I choose "brake" because it's more deterministic.
 
Don't see how it's not a real problem. These types of accidents happen every day (and I don't mean the hypothetical of lining up 1 vs. 3 people in the middle of the road). Let's say a common accident: oncoming traffic swerves into your lane while there are pedestrians on the side of the road.

What does the car do? Hit the oncoming car or swerve into the pedestrians?
Really, every day? Then you should be able to find thousands of examples.

So find me a couple. I think I've only ever heard of one or two, out of the 6 million crashes each year in the USA and the many tens of millions around the world.

It's not a thing. But people just love to imagine that it is, as this thread shows. Those of us who give talks on self-driving know the question will come (though people have slowed down asking it over time, you still get it).

Nobody actually developing the cars wastes time on it. They just work to prevent all accidents and follow the law. If the law wants to make rules about what to do in super-rare situations that effectively never happen, let the policymakers hammer it out; the developers will follow whatever rule they write, as long as it's not ridiculous.

Even if it were something that happened to human drivers, it would happen much less often with robocars. Their brakes will never fail. Never, because there are two redundant systems, tested hundreds of times a day. They don't go too fast around blind corners or drive where they don't have ROW, except in very rare situations. (This is a problem for Teslas: in human-driven cars, the human's foot strength is the main backup system, though the electric parking brake can provide some stopping, if not as much as the main brakes.)

The engineers have 999 problems but a trolley switch ain't one.
 
Really, every day? Then you should be able to find thousands of examples.

So find me a couple. I think I've only ever heard of one or two, out of the 6 million crashes each year in the USA and the many tens of millions around the world.

It's not a thing. But people just love to imagine that it is, as this thread shows. Those of us who give talks on self-driving know the question will come (though people have slowed down asking it over time, you still get it).

Nobody actually developing the cars wastes time on it. They just work to prevent all accidents and follow the law. If the law wants to make rules about what to do in super-rare situations that effectively never happen, let the policymakers hammer it out; the developers will follow whatever rule they write, as long as it's not ridiculous.

Even if it were something that happened to human drivers, it would happen much less often with robocars. Their brakes will never fail. Never, because there are two redundant systems, tested hundreds of times a day. They don't go too fast around blind corners or drive where they don't have ROW, except in very rare situations. (This is a problem for Teslas: in human-driven cars, the human's foot strength is the main backup system, though the electric parking brake can provide some stopping, if not as much as the main brakes.)

The engineers have 999 problems but a trolley switch ain't one.

You answered your own question. 6 million crashes every year means such occurrences would number in the tens of thousands, if not hundreds of thousands. You are the one imagining that it's NOT a thing.

Just because robocars will be safe does not mean those 6 million crashes will drop to 1 the next year. There is another human on the other side of the crash, and until ALL cars are robocars, 6 million crashes will still occur, and the probability of one involving a Tesla robotaxi will be high.

Just like no one cares about the millions of gas car fires today while everyone reports on the one Tesla fire, it will be the same thing with these scenarios.

People get killed in car crashes every day, but no one reports it because it happens so often. It will be all over the news when an FSD car crashes into a person.

And here is an article from a quick Google search, which turns up hundreds of the reported cases you call "imaginary"; I'm sure there are thousands more that go unreported:

 
Things are not always as they seem (or as reported)... here is an update to that story:


The data taken from Wilson’s vehicle showed that a tenth of a second before the crash, his car was traveling "at 60 mph with the service brake on, no ABS, and stability control engaged." Police calculated that about four and a half seconds before the crash, he was going between 92 and more than 99 mph.

Basically, the root cause here was that the driver was going in excess of 90 mph when he swerved to avoid a car that had pulled out in front of him. So really, there is no Trolley Problem here: it's a case of a reckless driver exceeding a safe and controllable operating speed, who lost control when he tried to avoid an accident that was mostly of his own making. If this had been a robotaxi, the entire incident probably could have been avoided.
 
You answered your own question. 6 million crashes every year means such occurrences would number in the tens of thousands, if not hundreds of thousands. You are the one imagining that it's NOT a thing.

Just because robocars will be safe does not mean those 6 million crashes will drop to 1 the next year. There is another human on the other side of the crash, and until ALL cars are robocars, 6 million crashes will still occur, and the probability of one involving a Tesla robotaxi will be high.

Just like no one cares about the millions of gas car fires today while everyone reports on the one Tesla fire, it will be the same thing with these scenarios.

People get killed in car crashes every day, but no one reports it because it happens so often. It will be all over the news when an FSD car crashes into a person.

And here is an article from a quick Google search, which turns up hundreds of the reported cases you call "imaginary"; I'm sure there are thousands more that go unreported:

In addition to the comment below, I see nothing in the crash report to suggest that the driver considered whether he should deliberately hit the woman and child or the other car. I suspect the driver in fact didn't want to hit the stroller and did so through error.

In the trolley problem you have a choice of killing one person or another (or, more specifically, a group), and you make some sort of decision as to which. It is not possible to simply hit none of them (probably possible here), and the situation is not of your making (not the case here).
 
In addition to the comment below, I see nothing in the crash report to suggest that the driver considered whether he should deliberately hit the woman and child or the other car. I suspect the driver in fact didn't want to hit the stroller and did so through error.

In the trolley problem you have a choice of killing one person or another (or, more specifically, a group), and you make some sort of decision as to which. It is not possible to simply hit none of them (probably possible here), and the situation is not of your making (not the case here).
The trolley problem also takes place on tracks, so one has a choice of only two specific, unalterable paths. Such is not the case with a car. Also, the choice in the trolley problem is between one 100% fatality and multiple 100% fatalities. In a scenario of hitting a car versus hitting a pedestrian, the choice will always be the car, since the pedestrian has ZERO protection, whereas cars have multiple mitigation devices (airbags, crumple zones, collapsible steering wheel, etc.).
 
The trolley problem also takes place on tracks, so one has a choice of only two specific, unalterable paths. Such is not the case with a car. Also, the choice in the trolley problem is between one 100% fatality and multiple 100% fatalities. In a scenario of hitting a car versus hitting a pedestrian, the choice will always be the car, since the pedestrian has ZERO protection, whereas cars have multiple mitigation devices (airbags, crumple zones, collapsible steering wheel, etc.).
That is all true.

However, I do think we can extrapolate the trolley problem to a more realistic real-world problem statement as it would apply to an autonomous vehicle (acknowledging that this is NOT the formal trolley problem).

The scenario would be a situation where a collision is imminent and unavoidable. Perhaps the obstacles in question appeared too late for braking to avoid them: a kid runs out into the street from behind a parked car, for example, or an oncoming vehicle swerves into your lane of travel at the very last moment. Additionally, there is a possible swerving escape maneuver that the vehicle could make to avoid colliding with one of the obstacles, but which would force a collision with the other. And finally, let's suppose that the two possible objects the vehicle would collide with are of different types. Examples: car and pedestrian; car and 20-ton dump truck.

The question at hand then is this: would the autonomous vehicle choose a path that:

a) Causes the least damage to itself and its occupants (i.e. swerve to the smaller object)
b) Avoids unprotected obstacles, even if it means damage to the vehicle itself and risk of injury to the occupants (i.e. avoid pedestrians, animals, bicycles, motorcycles)
c) Avoids potentially catastrophic collisions with vehicles of a larger size class than the AV so as to minimize risk to occupants

I think the only realistic/legal option is this:

d) Obey all traffic laws (i.e. do its best to not cross the yellow line) and avoid leaving the roadway, while using braking and swerving into unobstructed lanes of travel to minimize the force of the unavoidable impact.

(d) may result in hitting the kid who jumped out from behind the parked car, but as sad as that may be, the kid was in the wrong, and the car would not intentionally cross the yellow line, knowingly colliding with an oncoming vehicle (which for all we know may have a nun, grandmother, future president, and 5 kids inside).
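
For what it's worth, (d) is also the easiest option to state as an algorithm. A minimal sketch, assuming a toy planner interface (all names and numbers here are invented for illustration, not anything a real AV stack, Tesla's included, actually uses):

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate evasive action the planner could consider."""
    name: str
    legal: bool                # stays in legal lanes, doesn't leave the roadway
    path_clear: bool           # True if this maneuver avoids impact entirely
    impact_speed_mph: float    # predicted speed at impact (0 if avoided)

def choose_maneuver(candidates: list) -> Maneuver:
    """Option (d): keep only legal paths, then minimize impact speed.

    Kinetic energy grows with the square of speed, so braking hard in-lane
    is usually the winner. Note there is no ethics table anywhere: the
    policy never asks WHO the obstacle is, only what the law allows.
    """
    legal = [m for m in candidates if m.legal]
    clear = [m for m in legal if m.path_clear]
    if clear:                       # a legal path with no impact at all
        return clear[0]
    return min(legal, key=lambda m: m.impact_speed_mph)

# Toy scenario: kid steps out from behind a parked car, oncoming traffic present.
options = [
    Maneuver("brake_in_lane", legal=True, path_clear=False, impact_speed_mph=18.0),
    Maneuver("swerve_across_yellow", legal=False, path_clear=False, impact_speed_mph=35.0),
    Maneuver("swerve_off_road", legal=False, path_clear=True, impact_speed_mph=0.0),
]
print(choose_maneuver(options).name)  # -> brake_in_lane
```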
 
I think the only realistic/legal option is this:

d) Obey all traffic laws (i.e. do its best to not cross the yellow line) and avoid leaving the roadway, while using braking and swerving into unobstructed lanes of travel to minimize the force of the unavoidable impact.

(d) may result in hitting the kid who jumped out from behind the parked car, but as sad as that may be, the kid was in the wrong, and the car would not intentionally cross the yellow line, knowingly colliding with an oncoming vehicle (which for all we know may have a nun, grandmother, future president, and 5 kids inside).

The purpose of the trolley problem is to consider the ethics of actively committing lesser harm versus passively committing greater harm. The answer is always to facilitate the lesser harm. In that spirit, breaking the law by crossing the center line is certainly a lesser harm than striking a pedestrian, so I don't believe that a strict "legal" solution is appropriate. It's simpler to implement, but it'll kill people to no purpose.

As a simple example of how the law isn't an absolute, consider that we frequently break the law by crossing the center divider to go around stopped vehicles in cities, or to pass bikers on country roads. The purpose of the law is to facilitate safe travel using our vehicles, and where the law doesn't accomplish that, we happily violate it, and police won't take us to task for it.

So we'll need the system to be able to evaluate least harm. That should be an interesting process.
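
And that evaluation is where all the difficulty hides. As a purely hypothetical sketch of what a least-harm scorer might look like (every weight and probability below is invented; I'm not aware of any deployed system that works this way):

```python
# Hypothetical vulnerability weights per obstacle class. Picking these
# numbers IS the ethics problem; none of them come from any real system.
VULNERABILITY = {
    "pedestrian": 1.0,   # no protection at all
    "cyclist": 0.9,
    "motorcycle": 0.8,
    "car": 0.3,          # crumple zones, airbags, seat belts
    "truck": 0.1,        # mostly a risk to our own occupants
}

def expected_harm(obstacle: str, impact_speed_mph: float,
                  occupant_risk: float) -> float:
    """Crude harm estimate: vulnerability scaled by impact energy
    (proportional to speed squared), plus risk to our own occupants."""
    energy_term = (impact_speed_mph / 30.0) ** 2
    return VULNERABILITY[obstacle] * energy_term + occupant_risk

# Reusing the impact speeds from the sketch earlier in the thread:
brake_at_kid = expected_harm("pedestrian", 18.0, occupant_risk=0.05)
swerve_head_on = expected_harm("car", 35.0, occupant_risk=0.40)
print(f"brake: {brake_at_kid:.2f}  swerve: {swerve_head_on:.2f}")
# -> brake: 0.41  swerve: 0.81. With these made-up weights, braking
# in-lane "wins" -- but nudge occupant_risk or the table and it flips.
# Every constant in this file is a moral judgment in disguise.
```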
 
The purpose of the trolley problem is to consider the ethics of actively committing lesser harm versus passively committing greater harm. The answer is always to facilitate the lesser harm. In that spirit, breaking the law by crossing the center line is certainly a lesser harm than striking a pedestrian, so I don't believe that a strict "legal" solution is appropriate. It's simpler to implement, but it'll kill people to no purpose.

As a simple example of how the law isn't an absolute, consider that we frequently break the law by crossing the center divider to go around stopped vehicles in cities, or to pass bikers on country roads. The purpose of the law is to facilitate safe travel using our vehicles, and where the law doesn't accomplish that, we happily violate it, and police won't take us to task for it.

So we'll need the system to be able to evaluate least harm. That should be an interesting process.
Let me ask you this, then... suppose a hypothetical situation: an AV is driving down a street with a vehicle in the oncoming lane. At the last second, a kid jumps out in front of the AV without enough time to stop, but the AV could possibly swerve into the oncoming vehicle in order to save the kid. So the "ethical" AV does so, but winds up severely injuring either the occupants of the oncoming vehicle or the occupants of the AV (maybe it's a robotaxi and not even their vehicle).

Do you think the provider of the AV will be sued? You better believe it will. So then the question becomes: is it liable for any damages? At a minimum, if it were a human driver, they would be ticketed for crossing the yellow line, even though there was an extenuating circumstance (and a judge might go "easy" on them as a result). But when the defendant is a robotaxi company, I'm not so sure. I think the law would be fairly strictly applied. This is not a case of crossing the yellow line when the coast is clear to avoid a parked car or bicyclist. I still think the AV would simply do its best to slam on the brakes but stay in the lane. Nobody is going to program an algorithm to make ethical decisions unless the law clearly prescribes what the decision-making process looks like. If as a society we decide that in this scenario the vehicle should cross the yellow line at all costs to avoid hitting the unprotected human, then fine... but we no longer have a choice: we MUST cross the yellow line.

The alternative outcome is that the car stays in its lane, possibly striking the kid. In this case, the kid would be found at fault and the AV company would not be held liable (as long as it was obeying the speed limit and all other traffic laws). It's a sad result, but it's the only one that releases the AV company from liability.
 
So we'll need the system to be able to evaluate least harm. That should be an interesting process.

Some outcomes are more deterministic than others, so how do we account for the unknown variables?

For instance, if someone cuts me off, I can choose to brake hard and take the collision with a fairly definable outcome, since damage and injury can be reasonably predicted based on speed of impact. Fault would lie with the other driver.

However, if I choose to swerve, I might avoid any damage at all, or I might just cause a different collision, which would be deemed my fault. 🤔
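
To put numbers on that intuition: braking trades away any upside for a known outcome, while swerving is a gamble. A toy expected-value-versus-variance comparison (all probabilities and harm scores made up for illustration):

```python
# Brake: one known outcome. Swerve: a gamble between a clean escape
# and a worse crash that is now my fault. Numbers invented for illustration.
brake_outcomes  = [(1.0, 30.0)]               # (probability, harm score)
swerve_outcomes = [(0.6, 0.0), (0.4, 80.0)]

def expected(outcomes):
    return sum(p * harm for p, harm in outcomes)

def variance(outcomes):
    mu = expected(outcomes)
    return sum(p * (harm - mu) ** 2 for p, harm in outcomes)

print(expected(brake_outcomes), variance(brake_outcomes))    # 30.0, 0.0
print(expected(swerve_outcomes), variance(swerve_outcomes))  # ~32.0, ~1536.0
# Similar expected harm, but braking has zero variance -- a predictable
# outcome with fault on the other driver. That's the "deterministic" appeal.
```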

Somewhere, Asimov is ROFLing
 
It's a sad result, but it's the only one that releases the AV company from liability.

For the robotaxi model to work, there will need to be limits on liability. The only reason drivers don't clog the courts with lawsuits is that most people lack sufficient insurance and/or net worth to be tasty targets. Lawyers will line up to go after a deep-pocketed corporation, however.
 
In a scenario of hitting a car versus hitting a pedestrian, the choice will always be the car, since the pedestrian has ZERO protection, whereas cars have multiple mitigation devices (airbags, crumple zones, collapsible steering wheel, etc.).
There’s an interesting variation on the trolley problem where the two [and only two] options are to hit a bicyclist who’s wearing a helmet, or to hit a bicyclist who’s not wearing a helmet. By the above logic the car should hit the helmet-wearing bicyclist [since they’re more likely to survive], but would that unethically penalize the helmet-wearer for following the law? Might bicyclists be incentivized to stop wearing helmets because of this? This starts getting into prisoner’s dilemma territory.
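
The incentive flip is easy to make concrete with back-of-the-envelope numbers. Here's a hypothetical sketch of each rider's risk under a "hit the helmeted rider" policy (the fatality probabilities are invented):

```python
# If an unavoidable choice targets the helmeted rider (better survival
# odds), what should a self-interested rider do? Probabilities invented.
P_FATAL_IF_HIT = {"helmet": 0.3, "no_helmet": 0.5}

def my_risk(mine: str, theirs: str) -> float:
    """P(I die) when the car hits whichever rider wears a helmet,
    flipping a coin if we both made the same choice."""
    if mine == theirs:
        p_targeted = 0.5
    else:
        p_targeted = 1.0 if mine == "helmet" else 0.0
    return p_targeted * P_FATAL_IF_HIT[mine]

for mine in ("helmet", "no_helmet"):
    for theirs in ("helmet", "no_helmet"):
        print(f"me={mine:10s} other={theirs:10s} risk={my_risk(mine, theirs):.2f}")
# me=helmet     other=helmet     risk=0.15
# me=helmet     other=no_helmet  risk=0.30
# me=no_helmet  other=helmet     risk=0.00
# me=no_helmet  other=no_helmet  risk=0.25
# Ditching the helmet is strictly better for each rider (0.00 < 0.15 and
# 0.25 < 0.30), yet both ditching (0.25 each) is worse than both wearing
# (0.15 each) -- the prisoner's dilemma flavor mentioned above.
```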
 
Let me ask you this, then...
If we structure our ethics around our laws instead of the other way around, we're doomed. If the lesser harm to society is deemed to be to kill the child, then structure the laws that way. If the lesser harm to society is to put a couple people in the hospital, then structure the laws that way. Structure the laws as you see fit, and reap the fruits of your choices. The fundamental premise is least harm.

but would that unethically penalize the helmet-wearer for following the law?
Sure. So rejigger your laws and the rules of the robotaxi to produce the outcome you're after. Make robotaxis ignore whether someone is wearing a helmet because they're supposed to be wearing one. So now it's down to an egalitarian 50/50 chance to be hit. That's saying that the "fairness" of that scenario is the least harm to society.
 