A Multifeature-Assisted Road and Vehicle Detection Method Based on Monocular Depth Estimation and Refined U-V Disparity Mapping

The visual input used to detect traffic objects, such as vehicles, usually lacks the necessary depth information, so detection results are difficult to obtain directly and quickly. Typically, separating objects from a complex background requires a complex model or prior knowledge, which can be computationally expensive or simply infeasible. To address this issue, this paper uses depth visual information to accurately segment roads and vehicles, so that complex models are not needed to detect objects in the visual input. First, an unsupervised deep-learning-based monocular depth estimation method is used to obtain the stereo disparity map. Then a non-parametric, refined U-V disparity mapping method is used to obtain the road region of interest. Next, road-parallel scanning determines the source and vanishing points, and an adjacent-disparity-similarity algorithm complements and extracts the target region to detect roads and vehicles. The algorithm fuses multiple features, such as height-width ratio, perspective ratio, and area ratio, to accurately segment the target region. The effectiveness of the proposed method is tested on a public dataset, and the experimental results show that the proposed model can accurately and efficiently detect roads and vehicles in a variety of scenarios.
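The U-V disparity representation mentioned in the abstract is a standard construction: the U-disparity map is a per-column histogram of disparity values and the V-disparity map is a per-row histogram, so ground planes appear as slanted lines in V-disparity and vertical obstacles as horizontal segments in U-disparity. The paper's refined, non-parametric variant is not specified here; the following is only a minimal sketch of the basic construction, with the function name and `max_disp` parameter chosen for illustration.

```python
import numpy as np

def uv_disparity_maps(disparity, max_disp=64):
    """Build basic U- and V-disparity histograms from a dense disparity map.

    disparity : (H, W) array of disparity values in [0, max_disp)
    Returns (u_map, v_map):
      u_map : (max_disp, W) -- for each image column u, a histogram over disparity
      v_map : (H, max_disp) -- for each image row v, a histogram over disparity
    """
    H, W = disparity.shape
    d = np.clip(disparity.astype(int), 0, max_disp - 1)
    u_map = np.zeros((max_disp, W), dtype=np.int32)
    v_map = np.zeros((H, max_disp), dtype=np.int32)
    for u in range(W):
        # count how often each disparity value occurs in column u
        u_map[:, u] = np.bincount(d[:, u], minlength=max_disp)
    for v in range(H):
        # count how often each disparity value occurs in row v
        v_map[v, :] = np.bincount(d[v, :], minlength=max_disp)
    return u_map, v_map
```

In practice the disparity map would come from the monocular depth estimation stage, and the road region of interest would then be located by fitting the dominant slanted line in `v_map`.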

Language

  • English

Filing Info

  • Accession Number: 01865683
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Nov 28 2022 3:40PM