MonoTAKD: Teaching Assistant Knowledge Distillation for Monocular 3D Object Detection

Abstract

Monocular 3D object detection (Mono3D) holds noteworthy promise for autonomous driving applications, owing to the cost-effectiveness and rich visual context of monocular camera sensors. However, depth ambiguity poses a significant challenge: precise 3D scene geometry must be extracted from a single image, which leads to suboptimal performance when transferring knowledge directly from a LiDAR-based teacher model to a camera-based student model. To facilitate effective distillation, we introduce Monocular Teaching Assistant Knowledge Distillation (MonoTAKD), which employs a camera-based teaching assistant (TA) model to transfer robust 3D visual knowledge to the student model, leveraging the smaller feature representation gap between camera-based models. Additionally, we define 3D spatial cues as the residual features that capture the difference between the teacher and TA models, and we leverage these cues to improve the student model's 3D perception capabilities. Experimental results show that MonoTAKD achieves state-of-the-art performance on the KITTI3D dataset. Furthermore, we evaluate performance on the nuScenes and KITTI raw datasets to demonstrate that our model generalizes to multi-view 3D and unsupervised data settings. Our code is available at this https URL.
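
The two transfer signals described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' released implementation: the feature shapes, the L1 loss choice, and the auxiliary student output f_student_cues (used to regress the teacher-TA residual) are all hypothetical.

import torch
import torch.nn.functional as F

def takd_losses(f_teacher, f_ta, f_student, f_student_cues):
    """Training-time distillation terms (all inputs shaped (B, C, H, W)).

    f_teacher      -- features from the LiDAR-based teacher
    f_ta           -- features from the camera-based teaching assistant
    f_student      -- features from the camera-based student
    f_student_cues -- hypothetical auxiliary student output that regresses
                      the teacher-TA residual (the "3D spatial cues")
    """
    # Intra-modal term: the camera-to-camera representation gap is small,
    # so the student imitates the TA's 3D visual features directly.
    loss_visual = F.l1_loss(f_student, f_ta.detach())

    # Cross-modal term: the residual between teacher and TA features is
    # defined as the 3D spatial cues the camera branch is missing; the
    # student learns to recover them from its own input.
    spatial_cues = (f_teacher - f_ta).detach()
    loss_spatial = F.l1_loss(f_student_cues, spatial_cues)

    return loss_visual, loss_spatial

# Toy usage with random feature maps:
B, C, H, W = 2, 64, 32, 32
f_t, f_a, f_s, f_c = (torch.randn(B, C, H, W) for _ in range(4))
loss_v, loss_s = takd_losses(f_t, f_a, f_s, f_c)

In practice the two terms would be weighted and added to the detection loss; the weighting is a design choice not specified in the abstract.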

@article{liu2025_2404.04910,
  title={MonoTAKD: Teaching Assistant Knowledge Distillation for Monocular 3D Object Detection},
  author={Hou-I Liu and Christine Wu and Jen-Hao Cheng and Wenhao Chai and Shian-Yun Wang and Gaowen Liu and Hugo Latapie and Jhih-Ciang Wu and Jenq-Neng Hwang and Hong-Han Shuai and Wen-Huang Cheng},
  journal={arXiv preprint arXiv:2404.04910},
  year={2025}
}