Scene-Aware Location Modeling for Data Augmentation in Automotive Object Detection

23 April 2025
Jens Petersen
Davide Abati
Amirhossein Habibian
Auke Wiggers
Communities: ViT, 3DPC
Abstract

Generative image models are increasingly being used for training data augmentation in vision tasks. In the context of automotive object detection, methods usually focus on producing augmented frames that look as realistic as possible, for example by replacing real objects with generated ones. Others try to maximize the diversity of augmented frames, for example by pasting many generated objects onto existing backgrounds. Both perspectives pay little attention to the locations of objects in the scene. Frame layouts are either reused with little or no modification, or they are random and disregard realism entirely. In this work, we argue that optimal data augmentation should also include realistic augmentation of layouts. We introduce a scene-aware probabilistic location model that predicts where new objects can realistically be placed in an existing scene. By then inpainting objects in these locations with a generative model, we obtain much stronger augmentation performance than existing approaches. We set a new state of the art for generative data augmentation on two automotive object detection tasks, achieving up to 2.8× higher gains than the best competing approach (+1.4 vs. +0.5 mAP boost). We also demonstrate significant improvements for instance segmentation.
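
The abstract describes a two-stage augmentation pipeline: a scene-aware probabilistic location model proposes realistic placements, and a generative model inpaints new objects at those placements. Below is a minimal, runnable Python sketch of that pipeline shape. All names here (StubLocationModel, StubInpainter, augment) are illustrative stand-ins, not the authors' code: the stub location model samples boxes uniformly where the paper's model is scene-conditioned, and the stub inpainter is a no-op where the paper uses a generative image model.

import random
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left corner, normalized [0, 1] frame coordinates
    y: float
    w: float
    h: float

class StubLocationModel:
    """Stand-in for the scene-aware probabilistic location model.
    The real model conditions on the scene content and existing layout;
    this stub just samples plausible-sized boxes uniformly."""
    def sample(self, frame, existing_boxes):
        w, h = random.uniform(0.05, 0.2), random.uniform(0.05, 0.2)
        return Box(random.uniform(0, 1 - w), random.uniform(0, 1 - h), w, h)

class StubInpainter:
    """Stand-in for the generative inpainting model; a no-op here."""
    def inpaint(self, frame, box, category):
        return frame  # the real model would render a new object inside `box`

def augment(frame, boxes, labels, loc_model, inpainter, n_new=3, category="car"):
    """Propose n_new realistic placements, inpaint an object at each,
    and extend the annotation set so a detector can train on the result."""
    new_boxes = [loc_model.sample(frame, boxes) for _ in range(n_new)]
    for b in new_boxes:
        frame = inpainter.inpaint(frame, b, category)
    return frame, boxes + new_boxes, labels + [category] * n_new

if __name__ == "__main__":
    frame = object()  # placeholder for an image
    aug_frame, aug_boxes, aug_labels = augment(
        frame, [], [], StubLocationModel(), StubInpainter())
    print(aug_boxes, aug_labels)

In this sketch, the detector's training labels grow together with the inpainted objects, which is what lets the augmented frames translate into the mAP gains the abstract reports.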

@article{petersen2025_2504.17076,
  title={Scene-Aware Location Modeling for Data Augmentation in Automotive Object Detection},
  author={Jens Petersen and Davide Abati and Amirhossein Habibian and Auke Wiggers},
  journal={arXiv preprint arXiv:2504.17076},
  year={2025}
}