
VLC Fusion: Vision-Language Conditioned Sensor Fusion for Robust Object Detection

Abstract

Although fusing multiple sensor modalities can enhance object detection performance, existing fusion approaches often overlook subtle variations in environmental conditions and sensor inputs. As a result, they struggle to adaptively weight each modality under such variations. To address this challenge, we introduce Vision-Language Conditioned Fusion (VLC Fusion), a novel fusion framework that leverages a Vision-Language Model (VLM) to condition the fusion process on nuanced environmental cues. By capturing high-level environmental context such as darkness, rain, and camera blur, the VLM guides the model to dynamically adjust modality weights based on the current scene. We evaluate VLC Fusion on real-world autonomous driving and military target detection datasets that include image, LiDAR, and mid-wave infrared modalities. Our experiments show that VLC Fusion consistently outperforms conventional fusion baselines, achieving improved detection accuracy in both seen and unseen scenarios.
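To make the conditioning idea concrete, the sketch below shows one plausible way a VLM-derived scene description could modulate per-modality fusion weights. It is not the paper's actual architecture: the module names, dimensions, attribute set, and gating design are illustrative assumptions only.

```python
# Minimal sketch (assumed, not the authors' implementation): per-modality
# fusion weights conditioned on VLM-derived scene-attribute scores.
import torch
import torch.nn as nn

class ConditionedFusion(nn.Module):
    def __init__(self, num_modalities: int = 3, feat_dim: int = 256, num_attributes: int = 3):
        super().__init__()
        # Small MLP maps scene-attribute scores (e.g., darkness, rain, blur
        # probabilities elicited from a VLM) to one weight per modality.
        self.gate = nn.Sequential(
            nn.Linear(num_attributes, 64),
            nn.ReLU(),
            nn.Linear(64, num_modalities),
            nn.Softmax(dim=-1),
        )

    def forward(self, modality_feats, scene_attrs):
        # modality_feats: list of [B, feat_dim] features (e.g., camera, LiDAR, MWIR)
        # scene_attrs:    [B, num_attributes] scores obtained from the VLM
        weights = self.gate(scene_attrs)                      # [B, num_modalities]
        stacked = torch.stack(modality_feats, dim=1)          # [B, num_modalities, feat_dim]
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # [B, feat_dim]
        return fused

# Usage: the fused features would then feed a standard detection head.
fusion = ConditionedFusion()
feats = [torch.randn(4, 256) for _ in range(3)]   # camera, LiDAR, MWIR features
attrs = torch.tensor([[0.9, 0.1, 0.0]] * 4)       # e.g., [darkness, rain, blur]
out = fusion(feats, attrs)                        # [4, 256]
```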

@article{taparia2025_2505.12715,
  title={VLC Fusion: Vision-Language Conditioned Sensor Fusion for Robust Object Detection},
  author={Aditya Taparia and Noel Ngu and Mario Leiva and Joshua Shay Kricheli and John Corcoran and Nathaniel D. Bastian and Gerardo Simari and Paulo Shakarian and Ransalu Senanayake},
  journal={arXiv preprint arXiv:2505.12715},
  year={2025}
}