Semantic Segmentation for Traffic Scene Understanding Based on Mobile Networks

Real-time, reliable perception of the surrounding environment is an important prerequisite for advanced driver assistance systems (ADAS) and automated driving, and vision-based detection plays a significant role in environment perception for autonomous vehicles. Although deep convolutional neural networks enable efficient recognition of many object categories, they have difficulty accurately detecting special vehicles, rocks, road piles, construction sites, fences, and the like. In this work, we address the task of traffic scene understanding with semantic image segmentation: both the drivable area and the classification of objects can be obtained from the segmentation result. First, we define 29 classes of objects in traffic scenarios with distinct labels and modify the DeepLab v2 network. Then, to reduce the running time, the MobileNet architecture is applied to generate the feature maps in place of the original backbone models. Next, the Cityscapes dataset, which focuses on semantic understanding of urban street scenes, is used to train the network with the modified labels. Finally, we test the network and measure its performance. With the same network (DeepLab v2), VGG-16 and ResNet-101 backbones are also tested. We attain performance similar to ResNet-101 with MobileNet, but MobileNet requires far fewer operations and much less time; compared with VGG-16, the MobileNet architecture performs better and is also more efficient. Using lightweight mobile models reduces the computation and enables on-device applications for semantic segmentation in traffic scene understanding.
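The abstract does not give implementation details, so the following is only a minimal sketch of the described design: a DeepLab v2-style atrous spatial pyramid pooling (ASPP) head on a MobileNet feature extractor predicting the 29 classes mentioned above. The MobileNetV2 variant from torchvision, the ASPP rates, and the input resolution are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch: DeepLab v2-style ASPP head on a MobileNet backbone.
# The 29-class count comes from the abstract; the MobileNetV2 variant,
# ASPP rates (6, 12, 18, 24), and input size are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2

class ASPP(nn.Module):
    """DeepLab v2-style atrous spatial pyramid pooling:
    parallel dilated 3x3 convolutions whose logits are summed."""
    def __init__(self, in_ch, num_classes, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, num_classes, 3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        return sum(b(x) for b in self.branches)

class MobileSeg(nn.Module):
    def __init__(self, num_classes=29):
        super().__init__()
        # MobileNetV2 feature extractor (output stride 32, 1280 channels).
        self.backbone = mobilenet_v2(weights=None).features
        self.head = ASPP(1280, num_classes)

    def forward(self, x):
        size = x.shape[-2:]
        logits = self.head(self.backbone(x))
        # Bilinear upsampling back to the input resolution.
        return F.interpolate(logits, size=size, mode="bilinear",
                             align_corners=False)

model = MobileSeg(num_classes=29)
out = model(torch.randn(1, 3, 512, 1024))  # Cityscapes-like aspect ratio
print(out.shape)                            # torch.Size([1, 29, 512, 1024])
```

The efficiency gain the abstract reports is consistent with MobileNet's use of depthwise separable convolutions, which cut the multiply-accumulate count of the feature extractor relative to the standard convolutions in VGG-16 and ResNet-101.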

Language

  • English

Filing Info

  • Accession Number: 01712769
  • Record Type: Publication
  • Source Agency: SAE International
  • Report/Paper Numbers: 2018-01-1600
  • Files: TRIS, SAE
  • Created Date: Jul 29 2019 11:02AM