Not sure if you guys have seen this 2021 work from Jack Stilgoe, but it's pretty spot on imho:
"How can we know a self-driving car is safe?
Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’"
Favorite quotes from interviewees:
"Very few Silicon Valley companies have ever had to ship a safety critical thing..."
"I’m not sure I’d be rude to the AI people but often all of them working in this area don’t understand a lot of standard safety engineering"
How can we know a self-driving car is safe? - Ethics and Information Technology (link.springer.com)