DINO-CoDT: Multi-class Collaborative Detection and Tracking with Vision Foundation Models

9 June 2025
Xunjie He
Christina Dao Wen Lee
Meiling Wang
Chengran Yuan
Zefan Huang
Yufeng Yue
Marcelo H. Ang Jr
Main: 9 pages · 5 figures · Bibliography: 2 pages
Abstract

Collaborative perception plays a crucial role in enhancing environmental understanding by expanding the perceptual range and improving robustness against sensor failures, and primarily involves collaborative 3D detection and tracking tasks. The former focuses on object recognition in individual frames, while the latter captures continuous instance tracklets over time. However, existing works in both areas predominantly focus on the vehicle superclass and lack effective solutions for multi-class collaborative detection and tracking. This limitation hinders their applicability in real-world scenarios, which involve diverse object classes with varying appearances and motion patterns. To overcome this limitation, we propose a multi-class collaborative detection and tracking framework tailored for diverse road users. We first present a detector with a global spatial attention fusion (GSAF) module, which enhances multi-scale feature learning for objects of varying sizes. Next, we introduce a tracklet RE-IDentification (REID) module that leverages visual semantics from a vision foundation model to reduce ID SWitch (IDSW) errors, particularly erroneous mismatches involving small objects such as pedestrians. We further design a velocity-based adaptive tracklet management (VATM) module that dynamically adjusts the tracking interval based on object motion. Extensive experiments on the V2X-Real and OPV2V datasets show that our approach significantly outperforms existing state-of-the-art methods in both detection and tracking accuracy.
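The two tracking-side ideas in the abstract can be illustrated with a minimal sketch: appearance-based re-identification that vetoes low-similarity matches (the source of IDSW reduction), and a tracklet lifetime that adapts to object speed. Everything here is a hypothetical assumption, not the paper's implementation: the function names (`reid_match`, `adaptive_max_age`), the similarity threshold, and the interval heuristic are invented for illustration, and the vision-foundation-model embeddings the REID module would produce are abstracted as precomputed vectors.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def reid_match(track_emb, det_embs, sim_threshold=0.6):
    """Return the index of the best-matching detection embedding, or None.

    An association is rejected when even the best visual similarity falls
    below the threshold, so appearance cues can veto an erroneous
    motion-only match -- a common cause of ID switches for small objects
    such as pedestrians.
    """
    sims = [cosine_similarity(track_emb, e) for e in det_embs]
    best = int(np.argmax(sims))
    return best if sims[best] >= sim_threshold else None

def adaptive_max_age(speed_mps, base_age=5, max_age=30, speed_scale=2.0):
    """Hypothetical velocity-based tracklet interval.

    Slow (often occluded) objects are kept alive for more frames, while
    fast objects, whose predicted position grows uncertain quickly, are
    pruned sooner.
    """
    age = int(base_age + (max_age - base_age) / (1.0 + speed_mps / speed_scale))
    return max(base_age, min(max_age, age))
```

In a full tracker, `reid_match` would run after the usual motion-based assignment step to confirm or reject identities, and `adaptive_max_age` would set each tracklet's per-object survival window instead of a single global constant.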

@article{he2025_2506.07375,
  title={DINO-CoDT: Multi-class Collaborative Detection and Tracking with Vision Foundation Models},
  author={Xunjie He and Christina Dao Wen Lee and Meiling Wang and Chengran Yuan and Zefan Huang and Yufeng Yue and Marcelo H. Ang Jr},
  journal={arXiv preprint arXiv:2506.07375},
  year={2025}
}