Multimodal Perception Integrating Point Cloud and Light Field for Ship Autonomous Driving

Robust scene perception is an essential prerequisite for reliable ship autonomous driving. It is a challenging task on inland rivers, however, because of the complicated, changeable environment and the high density of ships in narrow waterways. As one of the primary enabling technologies, obstacle trajectory locating and tracking has been widely explored in recent years. Current approaches rely strictly on lidar as the only depth-aware sensor, and its limited measurement range severely restricts distant object identification. To address this, the authors propose a point cloud-light field fusion perception framework in this paper for the first time. Specifically, in the detection stage, the lidar point cloud undertakes precise close-range object perception while the light field handles distant object locating through light field stereo matching. In the tracking stage, a novel four-phase data association that combines multiple attributes from the position, point cloud, and image domains is used for accurate object matching across frames. To validate the effectiveness of the multimodal perception strategy, the authors implement an acquisition system consisting of two lidars and four sets of simplified light field cameras on a ship and conduct real-world testing. Extensive experimental results show that the proposed framework achieves superior 3D object locating and tracking performance, far surpassing state-of-the-art methods in terms of accuracy and real-time performance.
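The abstract's division of labor between sensors (lidar for close objects, light field stereo for distant ones) can be illustrated with a minimal range-gated fusion sketch. This is not the authors' implementation; the `Detection` structure, the `fuse_detections` helper, and the 80 m split range are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """A hypothetical 3D detection in the ship's reference frame (meters)."""
    x: float
    y: float
    z: float
    source: str  # "lidar" or "lightfield"


def fuse_detections(lidar_dets: List[Detection],
                    lf_dets: List[Detection],
                    range_split: float = 80.0) -> List[Detection]:
    """Keep lidar detections within range_split and light-field stereo
    detections beyond it, mirroring the near/far split described in the
    abstract. The threshold value is an assumption for illustration."""
    def rng(d: Detection) -> float:
        return (d.x ** 2 + d.y ** 2 + d.z ** 2) ** 0.5

    near = [d for d in lidar_dets if rng(d) <= range_split]
    far = [d for d in lf_dets if rng(d) > range_split]
    return near + far
```

In a real system the hand-off would likely be softer (e.g. weighting both depth estimates in an overlap band rather than hard gating), but the sketch captures the complementary-range idea the abstract describes.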

Language

  • English

Filing Info

  • Accession Number: 01938879
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Dec 6 2024 2:15PM