
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models

Main: 8 pages · Appendix: 12 pages · Bibliography: 2 pages · 19 figures · 6 tables
Abstract

Fine-tuning large-scale text-to-image diffusion models for various downstream tasks has yielded impressive results. However, the heavy computational burden of tuning large models prevents personal customization. Recent advances have attempted to employ parameter-efficient fine-tuning (PEFT) techniques to adapt the floating-point (FP) or quantized pre-trained weights. Nonetheless, the adaptation parameters in existing works remain restricted to FP arithmetic, hindering hardware-friendly acceleration. In this work, we propose IntLoRA, which further pushes the efficiency limits by using integer-type (INT) low-rank parameters to adapt quantized diffusion models. By working in integer arithmetic, IntLoRA offers three key advantages: (i) for fine-tuning, the pre-trained weights are quantized, reducing memory usage; (ii) for storage, both the pre-trained and low-rank weights are stored in INT, which consumes less disk space; (iii) for inference, IntLoRA weights can be naturally merged into the quantized pre-trained weights through efficient integer multiplication or bit-shifting, eliminating additional post-training quantization. Extensive experiments demonstrate that IntLoRA can achieve performance on par with or even superior to vanilla LoRA, accompanied by significant efficiency improvements. Code is available at \url{https://github.com/csguoh/IntLoRA}.
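
To make the integer-only merging idea in point (iii) concrete, the sketch below merges a hypothetical int8 low-rank update into an int8-quantized weight using only integer multiplication and bit-shifting. This is a minimal NumPy illustration under assumed power-of-two scales; the variable names, rank, and scale choices are illustrative and do not reproduce the paper's exact quantization scheme.

```python
# Minimal sketch: merging an integer low-rank update into an int8-quantized
# weight with integer arithmetic only. All scales are assumed to be powers of
# two so that scale alignment reduces to a bit shift (an assumption for
# illustration, not the paper's exact formulation).
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2

# Quantized pre-trained weight, stored as int8 with scale 2**-w_shift.
W_int = rng.integers(-128, 127, size=(d, d), dtype=np.int8)
w_shift = 7

# Integer low-rank factors (rank r), stored with a finer scale 2**-lora_shift.
A_int = rng.integers(-8, 8, size=(d, r), dtype=np.int8)
B_int = rng.integers(-8, 8, size=(r, d), dtype=np.int8)
lora_shift = 10

# Merge entirely in integer arithmetic: accumulate the low-rank product in
# int32, then align its scale to the weight scale via a right bit-shift.
delta_int = A_int.astype(np.int32) @ B_int.astype(np.int32)
delta_aligned = delta_int >> (lora_shift - w_shift)
W_merged_int = W_int.astype(np.int32) + delta_aligned

# Floating-point reference for comparison: the gap is only shift rounding.
W_fp = W_int.astype(np.float32) * 2.0**-w_shift
delta_fp = (A_int.astype(np.float32) @ B_int.astype(np.float32)) * 2.0**-lora_shift
print(np.max(np.abs(W_merged_int * 2.0**-w_shift - (W_fp + delta_fp))))
```

Because the merged result stays in integer form at the pre-trained weight's scale, no separate post-training quantization pass would be needed in this toy setting, mirroring the advantage stated in the abstract.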

@article{guo2025_2410.21759,
  title={IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models},
  author={Hang Guo and Yawei Li and Tao Dai and Shu-Tao Xia and Luca Benini},
  journal={arXiv preprint arXiv:2410.21759},
  year={2025}
}