Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction

Automated driving requires decision making in dynamic and uncertain environments. The uncertainty in the prediction originates from noisy sensor data and from the fact that the intentions of human drivers cannot be directly measured. This problem is formulated as a partially observable Markov decision process (POMDP) with the intended routes of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. The policy is therefore optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model of the other vehicles. Considering possible future measurements of the surrounding cars allows the autonomous car to incorporate the estimated change in future prediction accuracy into the optimal policy. A compact representation results in a low-dimensional state space, so the problem can be solved online for varying road layouts and numbers of vehicles. This is done with a point-based solver in an anytime fashion on a continuous state space. The authors' evaluation is threefold: First, the convergence of the algorithm is evaluated, and it is shown how convergence can be improved with an additional search heuristic. Second, the authors present various planning scenarios to demonstrate how introducing different uncertainties results in more conservative planning. Finally, the authors show online simulations for the crossing of complex, unsignalized intersections. They demonstrate that their approach performs nearly as well as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches.
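The core of the formulation above is that the intended route of each other vehicle is a hidden variable, so the planner maintains a belief over routes that is updated as new measurements arrive. The following is a minimal sketch of one such Bayesian belief update, not the authors' implementation: the route set, the observation labels, and the likelihood numbers are all hypothetical, chosen only to illustrate how a noisy measurement shifts probability mass toward one intention.

```python
# Minimal sketch (hypothetical, not the paper's code): Bayes-filter update
# over a hidden route intention for one observed vehicle.

ROUTES = ["straight", "turn_left"]  # assumed route hypotheses


def likelihood(observation, route):
    """Hypothetical observation model: probability of the measured lateral
    behavior given the intended route. Numbers are purely illustrative."""
    if route == "straight":
        return 0.8 if observation == "centered" else 0.2
    return 0.3 if observation == "centered" else 0.7


def update_belief(belief, observation):
    """One Bayesian update step: posterior ∝ likelihood × prior,
    normalized so the belief sums to one."""
    posterior = {r: likelihood(observation, r) * belief[r] for r in ROUTES}
    norm = sum(posterior.values())
    return {r: p / norm for r, p in posterior.items()}


# Start from a uniform prior; a "drifting_left" measurement raises the
# probability of the turn_left hypothesis.
belief = {"straight": 0.5, "turn_left": 0.5}
belief = update_belief(belief, "drifting_left")
```

In the POMDP setting, the planner goes one step further than this filter: it anticipates which future observations are likely and how they would sharpen this belief, which is what lets it trade off cautious and assertive accelerations before the intention is fully resolved.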

Language

  • English

Filing Info

  • Accession Number: 01666343
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Mar 22 2018 3:16PM