
Self-Localized Collaborative Perception

Abstract

Collaborative perception has garnered considerable attention due to its capacity to address several inherent challenges of single-agent perception, including occlusion and out-of-range issues. However, existing collaborative perception systems rely heavily on a precise external localization system to establish a consistent spatial coordinate frame across agents. This reliance makes them vulnerable to large pose errors and malicious attacks, which cause substantial drops in perception performance. To address this, we propose CoBEVGlue, a novel self-localized collaborative perception system that achieves more holistic and robust collaboration without any external localization system. The core of CoBEVGlue is a novel spatial alignment module, which estimates the relative poses between agents by matching co-visible objects across agents. We validate our method on both real-world and simulated datasets. The results show that i) CoBEVGlue achieves state-of-the-art detection performance under arbitrary localization noises and attacks; and ii) the spatial alignment module can seamlessly integrate with the majority of previous methods, improving their performance by an average of 57.7%. Code is available at https://github.com/VincentNi0107/CoBEVGlue.
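As a rough illustration of the core idea, and not the paper's actual alignment module (which matches co-visible objects learned from BEV features), the relative pose between two agents can be recovered in closed form from matched object centers with a least-squares Procrustes (Kabsch) fit. The sketch below is a minimal numpy version under assumed simplifications: 2D bird's-eye-view object centers, known correspondences, and no outliers; the function name `estimate_relative_pose` is hypothetical.

```python
import numpy as np

def estimate_relative_pose(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Recover the SE(2) transform (R, t) mapping matched BEV object
    centers from one agent's frame (src) into another's (dst) via a
    Kabsch/Procrustes least-squares fit.

    src_pts, dst_pts: (N, 2) arrays of matched object centers, N >= 2.
    (Illustrative sketch only; not the paper's learned matching module.)
    """
    src_mean = src_pts.mean(axis=0)
    dst_mean = dst_pts.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (src_pts - src_mean).T @ (dst_pts - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Toy usage: two agents see the same three objects, whose coordinates
# differ by a 30-degree rotation and a (5, -2) m translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = src @ R_true.T + t_true
R_est, t_est = estimate_relative_pose(src, dst)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

Because the fit is closed-form, such an alignment step adds negligible cost on top of detection; the hard part, which the paper's module addresses, is reliably establishing which detected objects are co-visible across agents.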
