Rapid advances in autonomous vehicles have outpaced ethical and legal assessment as well as social adaptation. While large-scale surveys of laypeople have provided insights into crash dilemmas, these studies often capture unreasoned intuitions and overlook the perspectives of those who actually design decision rules. This project examines the ethics of crash-dilemma programming as practiced and justified by technically informed stakeholders, such as the programmers and engineers working on self-driving cars.
The study asks how these stakeholders conceptualize and justify trade-offs in crash situations, and which ethical principles and constraints (such as risk, bias, liability, and explainability) guide their choices. The research is situated within a theoretical framework that draws on normative ethics (consequentialism, deontology, virtue ethics), responsible research and innovation, and value-sensitive design. It is hypothesized that expert reasoning differs systematically from lay intuitions, that explicit principle-based reflection improves transparency and robustness, that awareness of bias and technical constraints increases demand for explainability and oversight, and that interdisciplinary exchange fosters safer and more accountable design choices.
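To make concrete the kind of principle-based trade-off reasoning the study probes, the minimal sketch below illustrates how a deontological hard constraint might be combined with a consequentialist ranking when selecting a crash maneuver. This is a hypothetical illustration only: the Maneuver type, its fields, and all numbers are invented for exposition and do not describe any deployed system or any participant's actual decision rule.

```python
from dataclasses import dataclass

# Hypothetical illustration: candidate evasive maneuvers scored under two
# ethical lenses. All names, fields, and values are invented for exposition.

@dataclass
class Maneuver:
    name: str
    expected_harm: float      # consequentialist lens: estimated aggregate harm
    targets_bystander: bool   # deontological lens: redirects risk onto an uninvolved party?

def select_maneuver(candidates: list[Maneuver]) -> Maneuver:
    # Deontological hard constraint: never deliberately redirect risk onto
    # an uninvolved bystander, regardless of the aggregate outcome.
    permitted = [m for m in candidates if not m.targets_bystander]
    # Consequentialist ranking: among permitted options, minimize expected
    # aggregate harm (fall back to all candidates if none is permitted).
    pool = permitted or candidates
    return min(pool, key=lambda m: m.expected_harm)

if __name__ == "__main__":
    options = [
        Maneuver("brake_straight", expected_harm=0.6, targets_bystander=False),
        Maneuver("swerve_left", expected_harm=0.2, targets_bystander=True),
        Maneuver("swerve_right", expected_harm=0.4, targets_bystander=False),
    ]
    print(select_maneuver(options).name)  # -> swerve_right
```

Even this toy example makes visible the choices the interviews are designed to surface: which options count as "permitted," how harm is estimated, and what happens when the constraint set is empty.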
Methodologically, the project combines qualitative, semi-structured “active” interviews with experimental-ethics vignettes and think-aloud tasks, using purposive sampling of Swiss autonomous vehicle developers. Data will be analyzed thematically and comparatively, and findings will feed into iterative co-design workshops with ethicists to develop a practical governance framework.
The expected outcome is a proactive, transparent framework for programming crash-situation algorithms. Such a framework aims to support the safer deployment of autonomous vehicles, facilitate regulatory alignment, and strengthen public trust, thereby helping to realize the technology's broader social benefits.