GeoVLM: Improving Automated Vehicle Geolocalisation Using Vision-Language Matching

19 May 2025
Barkin Dagda
Muhammad Awais
Saber Fallah
Abstract

Cross-view geo-localisation identifies the coarse geographical position of an automated vehicle by matching a ground-level image to a geo-tagged satellite image from a database. Despite advances in cross-view geo-localisation, significant challenges persist: visually similar scenes make it difficult to rank the correct satellite image as the top match. Existing approaches achieve high recall rates but still often fail to place the correct image first. To address this challenge, this paper proposes GeoVLM, a novel approach that uses the zero-shot capabilities of vision-language models to enable cross-view geo-localisation through interpretable cross-view language descriptions. GeoVLM is a trainable reranking approach that improves the best-match accuracy of cross-view geo-localisation. GeoVLM is evaluated on the standard benchmarks VIGOR and University-1652, as well as in real-life driving environments using Cross-View United Kingdom, a new benchmark dataset introduced in this paper. The results show that GeoVLM improves the retrieval performance of cross-view geo-localisation compared to state-of-the-art methods with the help of explainable natural language descriptions. The code is available at this https URL
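To make the reranking idea in the abstract concrete, the sketch below (not the authors' implementation) takes an initial top-K list of satellite candidates from any cross-view retriever, generates a natural-language description for the ground query and for each candidate, and reorders the candidates by mixing the original retrieval score with description similarity. The describe and embed_text functions are hypothetical placeholders for a vision-language model and a text encoder; the weighting scheme is an assumption for illustration only.

import numpy as np

def describe(image) -> str:
    # Hypothetical placeholder: a zero-shot vision-language model would
    # return an interpretable scene description of the image here.
    return "a road intersection with trees and a red-roofed building"

def embed_text(text: str) -> np.ndarray:
    # Hypothetical placeholder text encoder: maps a description to a unit
    # vector (a real system would use a sentence-embedding model).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def rerank(query_image, candidates, retrieval_scores, alpha=0.5):
    # Rerank top-K satellite candidates by combining the retriever's score
    # with cosine similarity between the query and candidate descriptions.
    q = embed_text(describe(query_image))
    scored = []
    for cand, base in zip(candidates, retrieval_scores):
        c = embed_text(describe(cand))
        lang_sim = float(q @ c)  # cosine similarity of unit vectors
        scored.append((cand, alpha * base + (1 - alpha) * lang_sim))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Usage: rerank the top candidates returned by any cross-view retriever.
top_matches = rerank("ground.jpg", ["sat_a.png", "sat_b.png"], [0.81, 0.79])

The point of the sketch is only the pipeline shape: an initial retrieval stage supplies candidates and scores, and the language descriptions are used to break ties among visually similar scenes when choosing the top match.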

@article{dagda2025_2505.13669,
  title={GeoVLM: Improving Automated Vehicle Geolocalisation Using Vision-Language Matching},
  author={Barkin Dagda and Muhammad Awais and Saber Fallah},
  journal={arXiv preprint arXiv:2505.13669},
  year={2025}
}