Revisiting Batch Normalization For Practical Domain Adaptation

Deep neural networks (DNNs) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common (yet inconvenient) practice to prepare at least tens of thousands of labeled images to fine-tune a network on every task before the model is ready to use. Recent studies show that a DNN has a strong dependence on its training dataset, and that the learned features cannot be easily transferred to a different but related task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. Our approach is based on the well-known Batch Normalization technique, which has become a standard component in modern deep learning. In contrast to other deep learning domain adaptation methods, our method requires no additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to existing approaches: combining AdaBN with other domain adaptation treatments may further improve model performance.
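The core recipe behind AdaBN is to re-estimate each Batch Normalization layer's population statistics (running mean and variance) on unlabeled target-domain data, while leaving all learned weights, including BN's affine parameters, untouched. Below is a minimal sketch of that idea in PyTorch; the function name `adapt_bn_statistics` and the `target_loader` argument are illustrative names for this example, not identifiers from the paper.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model: nn.Module, target_loader, device: str = "cpu") -> nn.Module:
    """Re-estimate BatchNorm running statistics on the target domain (AdaBN-style).

    No parameters are learned: gradients are disabled and only the
    BN running mean/variance buffers change, which matches the
    parameter-free property described in the abstract.
    """
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for module in model.modules():
        if isinstance(module, bn_types):
            # Discard source-domain statistics and switch to a
            # cumulative moving average over the target batches.
            module.reset_running_stats()
            module.momentum = None

    model.train()  # train mode lets BN update its running buffers
    with torch.no_grad():  # no backward pass; weights stay frozen
        for images, *_ in target_loader:
            model(images.to(device))

    model.eval()  # inference now uses target-domain statistics
    return model
```

A usage note: after calling this helper on a source-trained network with a `DataLoader` over unlabeled target images, the model is evaluated as usual; since only BN buffers are modified, the adaptation can be undone by restoring the original state dict.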