In an avoidable harmful situation, autonomous vehicle systems are expected to choose the course of action that causes the least overall damage. However, this behavioral protocol implies some predictability. In this context, we show that if the autonomous vehicle's decision process is perfectly known, then malicious, opportunistic, terrorist, criminal, and non-civic individuals may have incentives to manipulate it. Consequently, some level of uncertainty is necessary for the system to be manipulation-proof. Uncertainty removes the incentives to misbehave because it increases the risk and likelihood of an unsuccessful manipulation. However, uncertainty may also degrade the quality of the decision process, with negative impacts on efficiency and social welfare. We also discuss other possible solutions to this problem.
Keywords: Artificial intelligence; Autonomous vehicles; Manipulation; Malicious behavior; Uncertainty.
JEL classification: D81, L62, O32.
English
625 - Civil engineering of land transport. Railway engineering. Highway engineering
Autonomous vehicles
23 p.
Universitat Rovira i Virgili. Centre de Recerca en Economia Industrial i Economia Pública
Documents de treball del Departament d'Economia; 2019-06
Access to the contents of this document is subject to acceptance of the terms of use established by the following Creative Commons license: http://creativecommons.org/licenses/by-nc-nd/4.0/