Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models

8 March 2025
Md Azim Khan
Aryya Gangopadhyay
Jianwu Wang
Robert F. Erbacher
Abstract

Situational awareness applications rely heavily on real-time processing of visual and textual data to provide actionable insights. Vision-language models (VLMs) have become essential tools for interpreting complex environments by connecting visual inputs with natural language descriptions. However, these models often face computational challenges, especially when they must run efficiently in real-world environments. This research presents a novel VLM framework that leverages frequency-domain transformations and low-rank adaptation (LoRA) to enhance feature extraction, scalability, and efficiency. Unlike traditional VLMs, which rely solely on spatial-domain representations, our approach incorporates Discrete Fourier Transform (DFT)-based low-rank features while retaining pretrained spatial weights, enabling robust performance in noisy or low-visibility scenarios. We evaluated the proposed model on caption generation and Visual Question Answering (VQA) tasks using benchmark datasets with varying levels of Gaussian noise. Quantitative results demonstrate that our model achieves evaluation metrics comparable to state-of-the-art VLMs such as CLIP ViT-L/14 and SigLIP. Qualitative analysis further reveals that our model provides more detailed and contextually relevant responses, particularly for real-world images captured by a RealSense camera mounted on an Unmanned Ground Vehicle (UGV).
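
The combination of frozen spatial-domain weights with a DFT-based low-rank branch can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation; the module name FrequencyLoRALinear and the hyperparameters rank and alpha are illustrative assumptions about how such an adapter could be wired.

import torch
import torch.nn as nn


class FrequencyLoRALinear(nn.Module):
    """Sketch of a LoRA-style adapter whose low-rank update acts on a
    frequency-domain view of the input while the pretrained spatial
    projection stays frozen (illustrative, not the paper's code)."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained spatial-domain projection.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable low-rank factors applied to DFT features.
        self.lora_down = nn.Linear(in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial-domain path using the retained pretrained weights.
        spatial = self.base(x)
        # Frequency-domain path: real FFT of the token features,
        # real and imaginary parts concatenated, then truncated to
        # the original width before the low-rank projection.
        freq = torch.fft.rfft(x, dim=-1)
        freq_feat = torch.cat([freq.real, freq.imag], dim=-1)[..., : x.shape[-1]]
        return spatial + self.scale * self.lora_up(self.lora_down(freq_feat))


# Example: adapting a 768-dim vision-encoder projection.
layer = FrequencyLoRALinear(768, 768, rank=8)
tokens = torch.randn(2, 196, 768)  # (batch, patches, dim)
print(layer(tokens).shape)  # torch.Size([2, 196, 768])

Only the two low-rank matrices are trained, so the adapter adds a small number of parameters while the frequency-domain view is intended to expose structure that survives Gaussian noise better than raw spatial features.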

@article{khan2025_2503.06003,
  title={Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models},
  author={Md Azim Khan and Aryya Gangopadhyay and Jianwu Wang and Robert F. Erbacher},
  journal={arXiv preprint arXiv:2503.06003},
  year={2025}
}