Semantic Correspondence: Unified Benchmarking and a Strong Baseline

23 May 2025
Kaiyan Zhang, Xinghui Li, Jingyi Lu, Kai Han
3DV
Main: 16 pages, 10 figures, 1 table · Bibliography: 4 pages · Appendix: 2 pages
Abstract

Establishing semantic correspondence, i.e. matching keypoints that carry the same semantic information across different images, is a challenging task in computer vision. Benefiting from the rapid development of deep learning, remarkable progress has been made over the past decade; however, a comprehensive review and analysis of the task has been absent. In this paper, we present the first extensive survey of semantic correspondence methods. We first propose a taxonomy that classifies existing methods by their design, categorize the methods accordingly, and provide a detailed analysis of each approach. We further aggregate the results reported in the literature across various benchmarks into a unified comparative table, with detailed configurations that highlight performance variations. In addition, to build a detailed understanding of existing semantic matching methods, we conduct thorough controlled experiments analysing the effectiveness of their individual components. Finally, we propose a simple yet effective baseline that achieves state-of-the-art performance on multiple benchmarks, providing a solid foundation for future research in this field. We hope this survey serves as a comprehensive reference and consolidated baseline for future development. Code is publicly available at: this https URL.
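
The core operation the abstract refers to, matching keypoints that share semantics across two images, is in many feature-based methods a nearest-neighbour search over dense deep features. Below is a minimal sketch of that common recipe, not the paper's proposed baseline; the feature maps and keypoint grid are assumed inputs from some dense feature extractor, and all names here are illustrative.

# Minimal sketch of nearest-neighbour semantic matching on deep features.
# NOT the paper's baseline; feat_a/feat_b are hypothetical (C, H, W) feature
# maps from any dense extractor, on whose grid the keypoints are given.
import numpy as np

def match_keypoints(feat_a, feat_b, kps_a):
    """Match keypoints from image A to image B via cosine similarity.

    feat_a, feat_b: (C, H, W) feature maps.
    kps_a:          (N, 2) integer (row, col) keypoints on feat_a's grid.
    Returns:        (N, 2) matched (row, col) positions on feat_b's grid.
    """
    C, H, W = feat_b.shape
    # Flatten image-B features to (H*W, C) and L2-normalise each descriptor.
    fb = feat_b.reshape(C, -1).T
    fb = fb / (np.linalg.norm(fb, axis=1, keepdims=True) + 1e-8)
    matches = []
    for r, c in kps_a:
        q = feat_a[:, r, c]
        q = q / (np.linalg.norm(q) + 1e-8)
        sim = fb @ q                    # cosine similarity to every location
        idx = int(np.argmax(sim))       # hard nearest neighbour
        matches.append(divmod(idx, W))  # flat index back to (row, col)
    return np.array(matches)

On real features one would typically refine the hard argmax with a soft-argmax or a mutual-nearest-neighbour check, but the plain argmax keeps the sketch minimal.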

@article{zhang2025_2505.18060,
  title={Semantic Correspondence: Unified Benchmarking and a Strong Baseline},
  author={Kaiyan Zhang and Xinghui Li and Jingyi Lu and Kai Han},
  journal={arXiv preprint arXiv:2505.18060},
  year={2025}
}