Modeling dispositional and initial learned trust in automated vehicles with predictability and explainability

Technological advances in the automotive industry are bringing automated driving closer to everyday road use. However, public trust in automated vehicles (AVs) is one of the most important factors affecting their acceptance. Many factors can influence people's trust, including perceived risks and benefits, feelings, and knowledge of AVs. This study uses these factors to predict people's dispositional and initial learned trust in AVs, drawing on a survey of 1175 participants. For each participant, 23 features were extracted from the survey questions to capture their knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. Using SHapley Additive exPlanations (SHAP), the authors interpreted the model's trust predictions to further improve its explainability. Compared with traditional regression models and black-box machine learning models, the findings show that this approach simultaneously provides a high level of both explainability and predictability of trust in AVs.

Language

  • English

Filing Info

  • Accession Number: 01767730
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Jan 20 2021 3:16PM