Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
Universitat Politècnica de Catalunya. Doctorat en Intel·ligència Artificial
Barcelona Supercomputing Center
Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group
2025
The opacity of decision-making in autonomous vehicles, rooted in the use of accurate yet complex AI models, has created barriers to their societal trust and regulatory acceptance, raising the need for explainability. We propose a post-hoc, model-agnostic solution to provide teleological explanations of vehicle behaviour in urban environments. Based on an existing explainability method called Intention-aware Policy Graphs, our approach enables the extraction of interpretable and reliable explanations of vehicle behaviour in the nuScenes dataset from global and local perspectives. We demonstrate how these explanations can be used to verify whether the vehicle operates within acceptable legal boundaries and to reveal potential vulnerabilities in autonomous driving datasets and models.
This work is partially funded by the European Commission through the AI4CCAM project (Trustworthy AI for Connected, Cooperative Automated Mobility) under grant agreement No 101076911. Additionally, this work is supported by the AI4S fellowship awarded to Sara Montese as part of the “Generacion D” initiative, Red.es, Ministerio para la Transformación Digital y de la Función Pública, for talent attraction (C005/24-ED CV1). Funded by the European Union NextGenerationEU funds, through PRTR.
Peer Reviewed
Postprint (author's final draft)
Conference report
English
UPC subject areas::Computer science::Artificial intelligence; Explainable AI; Autonomous driving; Policy graphs; Intentions; Human-centric XAI
Institute of Electrical and Electronics Engineers (IEEE)
https://ieeexplore.ieee.org/abstract/document/11097511
Open Access