Trusting the X in XAI: Effects of different types of explanations by a self-driving car on trust, explanation satisfaction and mental models

There is an increasing demand for opaque intelligent systems to explain themselves to humans, in order to increase user trust and support the formation of adequate mental models. Previous research has shown effects of different types of explanations on user preferences and performance. However, this research has not addressed the differential effects of intentional and causal explanations on both users' trust and mental models, nor has it employed multiple trust measurement scales at multiple points in time. In the current research, the effects of three types of explanations (causal, intentional, mixed) on trust development, mental models, and user satisfaction were investigated in the context of a self-driving car. Results showed that participants were least satisfied with causal explanations, that intentional explanations were most effective in establishing high levels of trust, and that mixed explanations led to the best functional understanding of the system and resulted in the fewest changes in trust over time.


Media Info

  • Language: English

Filing Info

  • Accession Number: 01766784
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Feb 10 2021 3:11PM