Promoting CAV Deployment by Enhancing the Perception Phase of Autonomous Driving Using Explainable AI

User trust is pivotal to autonomous vehicle (AV) operations, which are driven by artificial intelligence (AI). A promising way to build user trust is explainable artificial intelligence (XAI), which requires the AI system to provide the user with the underlying explanations for its decisions. Motivated by the need to enhance user trust and by the promise of XAI technology in this context, this study seeks to enhance the trustworthiness of autonomous driving systems by developing explainable deep learning (DL) models. The study casts AV decision-making not as a classification task (the traditional formulation) but as an image-based language generation (image captioning) task: the proposed approach makes driving decisions by first generating textual descriptions of the driving scenario, and these descriptions serve as explanations that humans can understand. The first part of the research developed a novel multi-modal DL architecture to jointly model the correlation between an image (the driving scenario) and language (its description). The architecture is fully Transformer-based and can therefore perform global attention and effectively imitate the learning processes of human drivers. The results suggest that the proposed model generates well-formed, meaningful sentences describing a given driving scenario and, from them, correct AV driving decisions; it significantly outperforms multiple baseline models in generating both explanations and driving actions. The second part of the research developed a framework for jointly predicting potential driving actions with corresponding explanations, thereby producing explainable DL models for trustworthy autonomous driving. These models can not only boost user trust in autonomy but also serve as a diagnostic tool for identifying model deficiencies or limitations during AV system development. From the end user’s perspective, the proposed models enhance trust because they provide the rationale behind an AV’s decisions and actions. From the AV developer’s perspective, the explanations could serve as a “debugging” tool to detect weaknesses in the existing system and to identify specific gaps that need to be addressed.
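To make the describe-then-decide framing concrete, the following is a minimal sketch of the kind of fully Transformer-based multi-modal model the abstract describes: the scene image is encoded with global self-attention, a text decoder generates the natural-language explanation, and an action head jointly predicts the driving decision. This is an illustrative assumption of one plausible realization, not the authors' implementation; all module names, dimensions, the ViT-style patch encoder, and the action vocabulary are hypothetical.

```python
import torch
import torch.nn as nn

class ExplainableDrivingModel(nn.Module):
    """Hypothetical sketch: caption a driving scene and predict an action."""
    def __init__(self, vocab_size, num_actions, d_model=256, nhead=8,
                 num_layers=4, patch=16, img_size=224, max_len=512):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        # ViT-style patch embedding: split the scene image into patches.
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        self.img_pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Text decoder: generates the explanation token by token,
        # cross-attending to the visual features.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.txt_pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)
        # Action head: predicts the driving decision (e.g., stop/go/yield;
        # an assumed discrete action set) from the pooled visual features.
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, image, caption_ids):
        # Global attention over all image patches.
        patches = self.patch_embed(image).flatten(2).transpose(1, 2)
        memory = self.encoder(patches + self.img_pos)
        # Causal (autoregressive) decoding of the explanation;
        # teacher forcing at training time.
        T = caption_ids.size(1)
        tgt = self.tok_embed(caption_ids) + self.txt_pos[:, :T]
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(image.device)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        caption_logits = self.lm_head(hidden)        # explanation tokens
        action_logits = self.action_head(memory.mean(dim=1))  # driving decision
        return caption_logits, action_logits
```

At inference time the explanation would be decoded autoregressively (e.g., greedy or beam search) alongside the action prediction, mirroring the describe-then-decide order the abstract emphasizes; comparing the generated explanation against the predicted action is one way the “debugging” use mentioned above could be realized.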

Language

  • English

Media Info

  • Media Type: Digital/other
  • Edition: Final Report
  • Features: Appendices; Figures; Photos; References; Tables
  • Pagination: 79p

Filing Info

  • Accession Number: 01908319
  • Record Type: Publication
  • Report/Paper Numbers: 74
  • Contract Numbers: 69A3551747105
  • Files: UTC, NTL, TRIS, USDOT
  • Created Date: Feb 15 2024 5:05PM