Task-based environment interpretation and system architecture for next generation ADAS

State-of-the-art advanced driver assistance systems (ADAS) typically focus on single tasks and therefore have clearly defined functionalities. Although such ADAS functions (e.g., lane departure warning) perform well, they lack the general ability to extract spatial relations from the environment. These spatial relations are required for scene analysis at a higher layer of abstraction, enabling a new quality of scene understanding, e.g., for inner-city crash prevention when detecting a Stop sign violation in a complex situation. Without them, it is difficult for an ADAS to handle complex scenes and situations in a generic way. This contribution presents a novel approach to the task-dependent generation of spatial representations, allowing task-specific extraction of knowledge from the environment based on our biologically motivated ADAS. The approach also incorporates stored knowledge in the form of digital map data, introducing a new way of integrating the eHorizon. Additionally, the hierarchical structure of the approach offers advantages when dealing with heterogeneous processing modules, a large number of tasks, and additional new input cues. The results demonstrate the reliability of the approach as well as the performance gain at the system level.
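To make the idea of task-dependent generation of spatial representations concrete, the following is a minimal, hypothetical Python sketch of how active tasks could select which representations are built from sensor and map data. All class, task, and representation names are illustrative assumptions and do not reflect the authors' actual implementation.

    # Hypothetical sketch: task-dependent selection of spatial representations.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class SpatialRepresentation:
        """A task-specific view of the environment (e.g., sign-to-lane relations)."""
        name: str
        build: Callable[[dict], dict]   # raw sensor/map data -> representation

    @dataclass
    class Task:
        """A single ADAS task and the representations it requires."""
        name: str
        required: List[str]

    class TaskBasedInterpreter:
        """Builds only the spatial representations needed by the active tasks."""
        def __init__(self) -> None:
            self.representations: Dict[str, SpatialRepresentation] = {}
            self.tasks: Dict[str, Task] = {}

        def register_representation(self, rep: SpatialRepresentation) -> None:
            self.representations[rep.name] = rep

        def register_task(self, task: Task) -> None:
            self.tasks[task.name] = task

        def interpret(self, active_tasks: List[str], data: dict) -> dict:
            # Union of representations required by the currently active tasks.
            needed = {r for t in active_tasks for r in self.tasks[t].required}
            # Build each required representation exactly once and share it.
            return {name: self.representations[name].build(data) for name in needed}

Under these assumptions, a stop-sign-violation task would register the spatial relations it needs (e.g., sign positions relative to the ego lane), and digital map (eHorizon-style) data would simply be another input to the build functions alongside the sensor data.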

Language

  • English

Filing Info

  • Accession Number: 01357062
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Nov 16 2011 2:51PM