IV-tuning: Parameter-Efficient Transfer Learning for Infrared-Visible Tasks

Abstract

Various infrared-visible (IR-VIS) tasks benefit greatly from combining the infrared and visible modalities. Motivated by the goal of streamlining the infrared flow and harnessing pre-trained visual models (PVMs) with fewer parameters for superior performance, we propose "IV-tuning", a novel and general fine-tuning approach that parameter-efficiently adapts PVMs to various IR-VIS downstream tasks. At its core, IV-tuning freezes the visible-based PVM and integrates the infrared flow into modal prompts that interact with adapters, yielding a more efficient and general modal-interaction paradigm. By fine-tuning approximately 3% of the backbone parameters, IV-tuning outperforms full fine-tuning and previous state-of-the-art methods across multiple baselines and tasks, including IR-VIS salient object detection, semantic segmentation, and object detection. Extensive experiments demonstrate that IV-tuning achieves superior performance with fewer trainable parameters, providing a strong alternative to full fine-tuning and a novel way of extending visible-based models to infrared-visible tasks. The code will be provided in the supplementary material.
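The abstract gives no implementation details, but the general pattern it describes (a frozen visible-based backbone, infrared-derived modal prompts, and lightweight adapters as the only trainable components) can be sketched in PyTorch as below. All module names (Adapter, ModalPrompter, IVTunedBlock), shapes, and hyperparameters are our own illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch of the pattern described above: a frozen
    # visible-based backbone block, infrared-derived modal prompts, and
    # a lightweight bottleneck adapter as the only trainable parts.
    # Names, shapes, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn


    class Adapter(nn.Module):
        """Bottleneck adapter; residual connection added by the caller."""

        def __init__(self, dim: int, bottleneck: int = 24):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.up(self.act(self.down(x)))


    class ModalPrompter(nn.Module):
        """Condenses infrared tokens into a small set of modal prompts."""

        def __init__(self, dim: int, num_prompts: int = 8):
            super().__init__()
            self.query = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

        def forward(self, ir_tokens: torch.Tensor) -> torch.Tensor:
            b = ir_tokens.size(0)
            q = self.query.unsqueeze(0).expand(b, -1, -1)
            prompts, _ = self.attn(q, ir_tokens, ir_tokens)  # (B, P, dim)
            return prompts


    class IVTunedBlock(nn.Module):
        """Wraps one frozen PVM block; only prompter and adapter train."""

        def __init__(self, frozen_block: nn.Module, dim: int):
            super().__init__()
            self.block = frozen_block
            for p in self.block.parameters():
                p.requires_grad = False  # keep the visible-based PVM frozen
            self.prompter = ModalPrompter(dim)
            self.adapter = Adapter(dim)

        def forward(self, vis_tokens, ir_tokens):
            prompts = self.prompter(ir_tokens)           # infrared -> prompts
            x = torch.cat([prompts, vis_tokens], dim=1)  # prepend prompts
            x = self.block(x)                            # frozen forward pass
            x = x[:, prompts.size(1):]                   # drop prompt tokens
            return x + self.adapter(x)                   # adapter refinement


    # Example: wrap a single transformer encoder layer as the frozen block.
    dim = 64
    layer = IVTunedBlock(nn.TransformerEncoderLayer(dim, nhead=4,
                                                    batch_first=True), dim)
    out = layer(torch.randn(2, 16, dim), torch.randn(2, 16, dim))  # (2, 16, dim)

In this sketch, only the prompter and adapter weights receive gradients while the wrapped block stays fixed, consistent with fine-tuning only a small fraction of the backbone parameters, as in the roughly 3% figure quoted above.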

@article{zhang2025_2412.16654,
  title={IV-tuning: Parameter-Efficient Transfer Learning for Infrared-Visible Tasks},
  author={Yaming Zhang and Chenqiang Gao and Fangcen Liu and Junjie Guo and Lan Wang and Xinggan Peng and Deyu Meng},
  journal={arXiv preprint arXiv:2412.16654},
  year={2025}
}