ResearchTrend.AI

Quantifying Context Bias in Domain Adaptation for Object Detection

23 September 2024
Hojun Son, Asma Almutairi, Arpan Kusari
Author contacts: hojunson@umich.edu, asmaalm@umich.edu, kusari@umich.edu
Main: 8 pages · Bibliography: 3 pages · Appendix: 178 pages · 145 figures · 305 tables
Abstract

Domain adaptation for object detection (DAOD) aims to transfer a model trained on a source domain to a target domain. Various DAOD methods exist, some of which minimize context bias between foreground-background associations across domains. However, no prior work has studied context bias in DAOD by analyzing how background features change during adaptation and how context bias is represented in different domains. Our experiments highlight the potential usability of context bias in DAOD. We address the problem by varying activation values over different layers of trained models and by masking the background, both of which affect the number and quality of detections. We then use one synthetic dataset generated with CARLA and two versions of real open-source data, Cityscapes and Foggy Cityscapes, as separate domains to represent and quantify context bias. We use metrics such as Maximum Mean Discrepancy (MMD) and Maximum Variance Discrepancy (MVD) to obtain layer-specific conditional probability estimates of foreground given manipulated background regions for each domain. We demonstrate through detailed analysis that understanding of the context bias can affect DAOD approach and foc
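The abstract names Maximum Mean Discrepancy (MMD) as one metric for comparing feature distributions across domains. As a minimal sketch of the standard biased MMD estimator with an RBF kernel (not the authors' implementation; the function names and the fixed bandwidth `gamma` are illustrative assumptions), applied e.g. to pooled backbone features from source- and target-domain images:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Pairwise RBF kernel: k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    # Clamp tiny negative values caused by floating-point round-off.
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def mmd2(x, y, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples x, y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

Features drawn from the same distribution give an estimate near zero, while a distribution shift (as between source and target domains) drives it up; in practice the bandwidth is often set by a median heuristic rather than fixed.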

@article{son2025_2409.14679,
  title={Quantifying Context Bias in Domain Adaptation for Object Detection},
  author={Hojun Son and Asma Almutairi and Arpan Kusari},
  journal={arXiv preprint arXiv:2409.14679},
  year={2025}
}