GAN-Based Day-to-Night Image Style Transfer for Nighttime Vehicle Detection

Data augmentation plays a crucial role in training a CNN-based detector. Most previous approaches combined general image-processing operations and could produce only a limited range of plausible image variations. Recently, GAN (Generative Adversarial Network)-based methods have shown compelling visual results; however, they are prone to failing to preserve image objects and maintain translation consistency when faced with large, complex domain shifts such as day-to-night. In this paper, the authors propose AugGAN, a GAN-based data augmenter that transforms on-road driving images into a desired domain while keeping image objects well preserved. The contribution of this work is three-fold: (1) they design a structure-aware unpaired image-to-image translation network that learns the latent data transformation across domains while greatly reducing artifacts in the transformed images; (2) they quantitatively prove that the domain adaptation capability of a vehicle detector is not limited by its training data; (3) their object-preserving network provides a significant performance gain for vehicle detection in the difficult day-to-night case. AugGAN generates more visually plausible images than competing methods on different cross-domain on-road image translation tasks. In addition, the authors quantitatively evaluate the methods by training Faster R-CNN and YOLO on datasets generated from the transformed results, demonstrating significant improvements in object detection accuracy with the proposed AugGAN model.
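The augmentation workflow the abstract describes relies on one key property: because the translation is structure-preserving, each translated night image can reuse the bounding-box labels of its daytime source unchanged. A minimal sketch of that label-carrying augmentation step is below; the function and variable names are illustrative assumptions, not identifiers from the paper, and the translation function stands in for a trained AugGAN-style generator:

```python
def augment_with_translated(day_samples, translate):
    """Build an augmented detector training set.

    day_samples: list of (image, boxes) pairs from the daytime domain.
    translate:   a day-to-night image translation function (e.g. a
                 trained AugGAN-style generator). Since the translation
                 preserves object structure, the original bounding boxes
                 remain valid for the translated image.
    """
    augmented = list(day_samples)  # keep the original day images
    for image, boxes in day_samples:
        night_image = translate(image)
        augmented.append((night_image, boxes))  # labels carried over
    return augmented

# Toy usage with placeholder image IDs and a stand-in "translation":
day = [("day_img_0", [(10, 20, 50, 60)]),
       ("day_img_1", [(5, 5, 30, 40)])]
fake_translate = lambda img: img.replace("day", "night")
dataset = augment_with_translated(day, fake_translate)
# dataset now holds 4 samples: 2 day + 2 translated night, same boxes
```

A detector such as Faster R-CNN or YOLO would then be trained on the combined set, which is how the paper's quantitative comparison is framed.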

Language

  • English

Filing Info

  • Accession Number: 01768811
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Feb 19 2021 1:57PM