Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping

Abstract

We present Sat2Sound, a multimodal representation learning framework for soundscape mapping, designed to predict the distribution of sounds at any location on Earth. Existing methods for this task rely on satellite imagery paired with geotagged audio samples, which often fail to capture the diversity of sound sources at a given location. To address this limitation, we enhance existing datasets by leveraging a Vision-Language Model (VLM) to generate semantically rich soundscape descriptions for locations depicted in satellite images. Our approach incorporates contrastive learning across audio, audio captions, satellite images, and satellite image captions. We hypothesize that there is a fixed set of soundscape concepts shared across modalities. To this end, we learn a shared codebook of soundscape concepts and represent each sample as a weighted average of these concepts. Sat2Sound achieves state-of-the-art performance in cross-modal retrieval between satellite imagery and audio on two datasets: GeoSound and SoundingEarth. Additionally, building on Sat2Sound's ability to retrieve detailed soundscape captions, we introduce a novel application: location-based soundscape synthesis, which enables immersive acoustic experiences. Our code and models will be publicly available.
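The abstract's core modeling idea is that every sample, regardless of modality, can be expressed as a weighted average over a shared codebook of soundscape concepts, with the resulting codebook-based embeddings aligned contrastively across modalities. The sketch below is an illustrative PyTorch interpretation of that idea, not the authors' released code; all names, dimensions, and the temperature value are assumptions.

# Minimal sketch (assumed implementation, not the authors' code):
# a shared, learned codebook of soundscape concepts, with each sample
# represented as a weighted average of those concepts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoundscapeCodebook(nn.Module):
    def __init__(self, num_concepts: int = 64, embed_dim: int = 512):
        super().__init__()
        # One codebook shared across all modalities
        # (audio, audio captions, satellite images, image captions).
        self.concepts = nn.Parameter(torch.randn(num_concepts, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim) embeddings from any modality encoder.
        # Scaled dot-product similarity to each concept, softmax-normalized.
        sims = x @ self.concepts.t() / self.concepts.shape[-1] ** 0.5
        weights = F.softmax(sims, dim=-1)        # (batch, num_concepts)
        # Each sample becomes a weighted average of the shared concepts.
        return weights @ self.concepts           # (batch, embed_dim)

# Usage: codebook-based embeddings from two modalities can then be aligned
# with a standard contrastive (InfoNCE-style) objective.
codebook = SoundscapeCodebook()
audio_emb = torch.randn(8, 512)   # placeholder encoder outputs
image_emb = torch.randn(8, 512)
a = F.normalize(codebook(audio_emb), dim=-1)
v = F.normalize(codebook(image_emb), dim=-1)
logits = a @ v.t() / 0.07         # temperature value is an assumption
loss = F.cross_entropy(logits, torch.arange(8))

In this reading, the softmax weights act as a soft assignment of each sample to the fixed set of soundscape concepts, which is what allows embeddings from different modalities to be compared in a common concept space.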

@article{khanal2025_2505.13777,
  title={Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping},
  author={Subash Khanal and Srikumar Sastry and Aayush Dhakal and Adeel Ahmad and Nathan Jacobs},
  journal={arXiv preprint arXiv:2505.13777},
  year={2025}
}