Stop at red? Engineering meets ethics

Abstract

Over the past few years, artificial intelligence has fueled a revolution in several scientific fields. Intelligent agents can now give medical advice, translate spoken language, recommend news, and drive different types of vehicles, to name but a few applications. Some of these agents need to interact with humans and must therefore adhere to human social norms. Safety engineers have always worked with critical systems in which catastrophic failures can occur, and they need to make moral decisions in order to keep the system below an acceptable risk level. In this paper, we propose an approach to assign a value to contrary-to-duty behaviors by introducing a risk aversion factor. We make use of decision theory with uncertain consequences, together with the risk matrix used by safety engineers. We exemplify this approach with the problem of an autonomous car that needs to decide whether or not to run a red light.
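The paper's formal model is not reproduced on this page, but the following minimal sketch hints at how a safety-engineering risk matrix and a risk aversion factor might enter an expected-utility comparison for the red-light example. Everything here is a hypothetical illustration: the severity and likelihood scales, the outcome probabilities and utilities, and the `risk_aversion` parameter are assumed placeholders, not the paper's actual model or numbers.

```python
# Illustrative sketch only: a toy, risk-weighted expected-utility
# comparison for the "run a red light or not" example. All scales,
# probabilities, utilities, and risk-aversion values below are
# hypothetical placeholders, not the paper's model or numbers.

# A simple safety-engineering risk matrix: severity and likelihood
# ranks whose product gives the risk score of a matrix cell.
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "catastrophic": 4}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4}


def risk_score(severity: str, likelihood: str) -> int:
    """Risk-matrix cell value: product of severity and likelihood ranks."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]


def risk_weighted_utility(outcomes, risk_aversion: float) -> float:
    """Expected utility where each outcome is penalized by its risk score.

    outcomes: iterable of (probability, utility, severity, likelihood).
    risk_aversion: >= 0; larger values punish risky outcomes more heavily.
    """
    return sum(
        p * (u - risk_aversion * risk_score(sev, lik))
        for p, u, sev, lik in outcomes
    )


# Hypothetical outcome models for the two available actions.
run_red = [
    # (probability, utility, severity, likelihood)
    (0.95, 5.0, "minor", "possible"),       # saves time, but violates the norm
    (0.05, -30.0, "catastrophic", "rare"),  # possible collision
]
stop = [
    (1.00, 0.0, "negligible", "rare"),      # complies with the norm
]

for aversion in (0.0, 0.5, 2.0):
    u_run = risk_weighted_utility(run_red, aversion)
    u_stop = risk_weighted_utility(stop, aversion)
    choice = "run" if u_run > u_stop else "stop"
    print(f"risk aversion {aversion}: run={u_run:.2f}, stop={u_stop:.2f} -> {choice}")
```

With these toy numbers, a risk-neutral agent (risk aversion 0) runs the light, while a sufficiently risk-averse agent stops, which is the kind of trade-off a risk aversion factor over contrary-to-duty behaviors is meant to capture.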

Publication
In the International Conference on Computer Ethics
Ignacio D. Lopez-Miguel
PhD student

I am a PhD student at the Technical University of Vienna (TU Wien).