
QuantFace: Low-Bit Post-Training Quantization for One-Step Diffusion Face Restoration

Main: 9 pages · 7 figures · 4 tables · Bibliography: 4 pages
Abstract

Diffusion models have achieved remarkable performance in face restoration. However, their heavy computational cost makes them difficult to deploy on devices such as smartphones. In this work, we propose QuantFace, a novel low-bit quantization framework for one-step diffusion face restoration models, in which the full-precision (i.e., 32-bit) weights and activations are quantized to 4 to 6 bits. We first analyze the data distribution within activations and find that it is highly variable. To preserve the original data information, we employ rotation-scaling channel balancing. Furthermore, we propose Quantization-Distillation Low-Rank Adaptation (QD-LoRA), which jointly optimizes for quantization and distillation performance. Finally, we propose an adaptive bit-width allocation strategy, formulated as an integer programming problem that combines quantization error and perceptual metrics to find a satisfactory resource allocation. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of QuantFace under 6-bit and 4-bit quantization. QuantFace achieves significant advantages over recent leading low-bit quantization methods for face restoration. The code is available at this https URL.
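As a rough illustration of the channel-balancing idea, the sketch below applies an orthogonal Hadamard rotation followed by a SmoothQuant-style per-channel scaling to a single linear layer, so that the product x @ W is preserved exactly while activation ranges are flattened ahead of quantization. The function name `balance`, the scale rule, and the toy shapes are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of rotation-scaling channel balancing for y = x @ W.
import numpy as np
from scipy.linalg import hadamard

def balance(x, W):
    d = W.shape[0]
    R = hadamard(d) / np.sqrt(d)              # orthogonal rotation: R @ R.T = I
    x_r, W_r = x @ R, R.T @ W                 # y = x @ R @ R.T @ W is unchanged
    a_max = np.abs(x_r).max(axis=0)           # per-channel activation range
    w_max = np.abs(W_r).max(axis=1)           # per-channel weight range
    s = np.sqrt(a_max / w_max)                # migrate range from acts to weights
    return x_r / s, W_r * s[:, None]          # (x/s) @ (diag(s) @ W) == x @ W

rng = np.random.default_rng(0)
x, W = rng.standard_normal((8, 64)), rng.standard_normal((64, 32))
x_b, W_b = balance(x, W)
assert np.allclose(x_b @ W_b, x @ W)          # output preserved exactly
```

The adaptive bit-width allocation can likewise be sketched as a 0-1 integer linear program: each layer picks exactly one candidate bit-width, a per-layer cost stands in for the paper's combined quantization-error and perceptual terms, and a bit budget caps the total model size. All the numbers below are hypothetical; SciPy's bundled HiGHS solver (`scipy.optimize.milp`) performs the search.

```python
# Minimal sketch of bit-width allocation as a 0-1 integer linear program.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n_layers, bit_choices = 4, [4, 6]
# cost[i, j]: hypothetical quantization-error + perceptual penalty
# for layer i quantized to bit_choices[j] bits (lower is better).
cost = np.array([[0.9, 0.3],
                 [0.5, 0.2],
                 [0.8, 0.1],
                 [0.4, 0.2]])
params = np.array([1.0, 2.0, 1.5, 0.5])   # parameters per layer (millions)
budget = 5.0 * params.sum()               # allow 5 bits per weight on average

c = cost.ravel()                          # decision vars x[i, j], row-major
# Each layer selects exactly one bit-width.
pick_one = np.kron(np.eye(n_layers), np.ones(len(bit_choices)))
# Total bit cost of every (layer, bit-width) choice.
bits = np.tile(bit_choices, n_layers) * np.repeat(params, len(bit_choices))
constraints = [
    LinearConstraint(pick_one, lb=1, ub=1),
    LinearConstraint(bits[None, :], ub=budget),
]
res = milp(c, constraints=constraints,
           integrality=np.ones_like(c), bounds=Bounds(0, 1))
alloc = res.x.reshape(n_layers, -1).argmax(axis=1)
print("bits per layer:", [bit_choices[j] for j in alloc])  # mixed 4/6-bit plan
```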

@article{li2025_2506.00820,
  title={QuantFace: Low-Bit Post-Training Quantization for One-Step Diffusion Face Restoration},
  author={Jiatong Li and Libo Zhu and Haotong Qin and Jingkai Wang and Linghe Kong and Guihai Chen and Yulun Zhang and Xiaokang Yang},
  journal={arXiv preprint arXiv:2506.00820},
  year={2025}
}