When both human and machine drivers make mistakes: Whom to blame?

The advent of automated and algorithmic technologies requires people to consider these systems when assigning responsibility for something going wrong. The authors address a central question: who or what should be held responsible when both human and machine drivers make mistakes in human–machine shared-control vehicles? They examined human judgments of responsibility for automated vehicle (AV) crashes (e.g., the 2018 Uber AV crash) caused by a distracted test driver and a malfunctioning automated driving system, using a sequential mixed-methods design: a text analysis of public comments posted after the first trial of the Uber case (Study 1) and a vignette-based experiment (Study 2). Both studies found that although people assigned more responsibility to the test driver than to the car manufacturer, they did not regard the manufacturer as free of responsibility, contrary to the jury decision in the Uber case, in which the test driver was the only party to face criminal charges. In Study 2, participants allocated equal responsibility to the normal driver and the car manufacturer. In Study 1, people gave different and sometimes opposing reasons for their judgments. Some commented that human drivers in AVs will inevitably grow bored and lose vigilance and attention while the automated driving system is operating (termed a "passive error"), whereas others argued that the test driver could have remained attentive and should not have been distracted (termed an "active error"). Study 2's manipulation of passive versus active errors, however, did not significantly influence responsibility judgments. The authors' results may offer insights for building a socially acceptable framework for responsibility judgments in AV crashes.

Language

  • English

Filing Info

  • Accession Number: 01879436
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Apr 18 2023 5:04PM