XAI-Interpreter: A Dual-Attention Framework for Transparent and Explainable Decision-Making in Autonomous Vehicles
1 AVL Türkiye Research and Engineering, İstanbul, Türkiye
Academic Editor: Stefano Mariani

https://doi.org/10.3390/ECSA-12-26531
Abstract:

Autonomous vehicles need to explain their actions to improve reliability and build user trust. This study focuses on enhancing the transparency and explainability of the decision-making process in such systems. A module named XAI-Interpreter (Explainable Artificial Intelligence Interpreter) is developed to identify and highlight the most influential factors in driving decisions. The module combines two complementary methods: Learned Attention Weights (LAW) and Object-Level Attention (OLA). In the LAW method, images captured from the ego vehicle's front and rear cameras in the CARLA simulation environment are processed with the Faster R-CNN model for object detection. Grad-CAM is then applied to generate visual attention heatmaps, showing which regions and objects in the images affect the model's decisions. The OLA method analyzes nearby dynamic objects, such as other vehicles, based on their size, speed, position, and orientation relative to the ego vehicle. Each object receives a normalized attention score between 0 and 1, indicating its influence on the vehicle's behavior. These scores can be consumed by downstream modules such as planning, control, and safety. The module has so far been tested only in simulation; future work will deploy the system on real vehicles. By helping the vehicle focus on the most critical elements in its surroundings, the XAI-Interpreter supports more transparent and explainable autonomous driving.
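
To make the LAW step concrete, the following sketch shows how a Grad-CAM heatmap can be obtained from a torchvision Faster R-CNN on a single camera frame. It is a minimal illustration only: the abstract does not specify which layer is hooked or how the target score is chosen, so the use of the final backbone stage (layer4) and of the top-scoring detection below are our assumptions, not details of the authors' implementation.

import torch
import torch.nn.functional as F
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
activations, gradients = {}, {}

def fwd_hook(_module, _inp, out):
    # Save the feature map and attach a hook to capture its gradient
    # during the backward pass.
    activations["feat"] = out
    out.register_hook(lambda grad: gradients.update({"feat": grad}))

# Hook the final ResNet stage of the backbone (an illustrative choice).
model.backbone.body.layer4.register_forward_hook(fwd_hook)

def law_heatmap(image):
    """image: float tensor (3, H, W) in [0, 1], e.g. a CARLA camera frame."""
    detections = model([image])[0]          # keep gradients enabled
    if detections["scores"].numel() == 0:
        return torch.zeros(image.shape[1:])
    model.zero_grad()
    detections["scores"][0].backward()      # target: top-scoring detection
    acts, grads = activations["feat"], gradients["feat"]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)[0, 0]
    # Normalize the heatmap to [0, 1] for visualization/overlay.
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = law_heatmap(torch.rand(3, 600, 800))  # stand-in for a camera frame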
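
The OLA scoring can likewise be sketched as a weighted combination of the four object attributes named in the abstract (size, speed, position, and orientation relative to the ego vehicle). The abstract does not give the actual scoring function, so the weights, decay constants, and saturation limits below are illustrative assumptions, chosen only so that the result lands in [0, 1].

import math
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float; y: float          # position relative to the ego vehicle (m)
    speed: float                # absolute speed (m/s)
    length: float; width: float # bounding-box footprint (m)
    heading: float              # angle between the object's velocity and the
                                # bearing from the object to the ego (rad);
                                # 0 = moving straight toward the ego

def ola_score(obj, w_dist=0.4, w_speed=0.3, w_size=0.15, w_head=0.15):
    distance = math.hypot(obj.x, obj.y)
    f_dist = math.exp(-distance / 20.0)          # closer matters more (20 m decay: assumption)
    f_speed = min(obj.speed / 30.0, 1.0)         # faster matters more, saturating at 30 m/s
    f_size = min(obj.length * obj.width / 20.0, 1.0)  # larger matters more, ~truck-sized cap
    f_head = 0.5 * (1.0 + math.cos(obj.heading))      # approaching objects matter more
    score = w_dist * f_dist + w_speed * f_speed + w_size * f_size + w_head * f_head
    return max(0.0, min(score, 1.0))             # normalized attention in [0, 1]

# Example: a nearby approaching car scores far higher than a distant parked one.
near = TrackedObject(x=8.0, y=-1.5, speed=12.0, length=4.5, width=1.9, heading=0.1)
far = TrackedObject(x=60.0, y=4.0, speed=0.0, length=4.5, width=1.9, heading=math.pi)
print(ola_score(near), ola_score(far))           # ~0.60 vs ~0.08

Scores of this form are per-object scalars, which is what lets downstream planning, control, and safety modules rank surrounding traffic without re-processing raw sensor data.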

Keywords: Explainable Artificial Intelligence Interpreter; Learned Attention Weights; Object-Level Attention
