Feature Complementation Architecture for Visual Place Recognition

Main: 15 pages, 5 figures, 4 tables
Bibliography: 6 pages
Abstract

Visual place recognition (VPR) plays a crucial role in robotic localization and navigation. The key challenge lies in constructing feature representations that are robust to environmental changes. Existing methods typically adopt convolutional neural networks (CNNs) or vision Transformers (ViTs) as feature extractors. However, these architectures excel at different aspects: CNNs are effective at capturing local details, while ViTs are better suited to modeling global context, making it difficult to leverage the strengths of both. To address this issue, we propose a local-global feature complementation network (LGCN) for VPR, which integrates a parallel CNN-ViT hybrid architecture with a dynamic feature fusion module (DFM). The DFM performs dynamic feature fusion through joint modeling of spatial and channel-wise dependencies. Furthermore, to enhance the expressiveness and adaptability of the ViT branch for VPR tasks, we introduce lightweight frequency-to-spatial fusion adapters into the frozen ViT backbone. These adapters enable task-specific adaptation with controlled parameter overhead. Extensive experiments on multiple VPR benchmark datasets demonstrate that the proposed LGCN consistently outperforms existing approaches in terms of localization accuracy and robustness, validating its effectiveness and generalizability.
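To make the fusion idea concrete, the following is a minimal NumPy sketch of how a dynamic fusion module could combine a local (CNN) and a global (ViT) feature map by jointly modeling channel-wise and spatial dependencies. The function name `dynamic_fusion`, the gate parameters `w_ch` and `w_sp`, and the convex-combination form are illustrative assumptions, not the paper's actual DFM implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_fusion(f_cnn, f_vit, w_ch, w_sp):
    """Hypothetical DFM sketch: fuse local (CNN) and global (ViT)
    feature maps of shape (C, H, W) via channel and spatial gates."""
    # Channel-wise gate: global average pool -> per-channel weight in (0, 1)
    ch_desc = (f_cnn + f_vit).mean(axis=(1, 2))          # (C,)
    ch_gate = sigmoid(w_ch @ ch_desc)[:, None, None]     # (C, 1, 1)
    # Spatial gate: channel-mean map -> per-location weight in (0, 1)
    sp_desc = (f_cnn + f_vit).mean(axis=0)               # (H, W)
    sp_gate = sigmoid(w_sp * sp_desc)[None, :, :]        # (1, H, W)
    # Joint gate selects, per channel and location, how much of each
    # branch to keep; the output is a convex combination of the two.
    alpha = ch_gate * sp_gate
    return alpha * f_cnn + (1.0 - alpha) * f_vit

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
f_cnn = rng.standard_normal((C, H, W))   # stands in for local CNN features
f_vit = rng.standard_normal((C, H, W))   # stands in for global ViT features
fused = dynamic_fusion(f_cnn, f_vit, w_ch=np.eye(C), w_sp=1.0)
print(fused.shape)
```

Because the gate lies in (0, 1), every fused value stays between the corresponding CNN and ViT activations, so neither branch's response is ever amplified beyond its input.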

@article{wang2025_2506.12401,
  title={Feature Complementation Architecture for Visual Place Recognition},
  author={Weiwei Wang and Meijia Wang and Haoyi Wang and Wenqiang Guo and Jiapan Guo and Changming Sun and Lingkun Ma and Weichuan Zhang},
  journal={arXiv preprint arXiv:2506.12401},
  year={2025}
}