
HonestFace: Towards Honest Face Restoration with One-Step Diffusion Model

Comments: 9 pages main text, 4 pages bibliography, 7 figures, 3 tables
Abstract

Face restoration has achieved remarkable advancements through years of development. However, ensuring that restored facial images exhibit high fidelity, preserve authentic features, and avoid introducing artifacts or biases remains a significant challenge. This highlights the need for models that are more "honest" in their reconstruction from low-quality inputs, accurately reflecting the original characteristics. In this work, we propose HonestFace, a novel approach designed to restore faces with a strong emphasis on such honesty, particularly concerning identity consistency and texture realism. To achieve this, HonestFace incorporates several key components. First, we propose an identity embedder to effectively capture and preserve crucial identity features from both the low-quality input and multiple reference faces. Second, a masked face alignment method is presented to enhance fine-grained details and textural authenticity, thereby preventing the generation of patterned or overly synthetic textures and improving overall clarity. Furthermore, we present a new landmark-based evaluation metric. Based on affine transformation principles, this metric improves accuracy over conventional L2 distance calculations for facial feature alignment. Leveraging these contributions within a one-step diffusion model framework, HonestFace delivers exceptional restoration results in terms of facial fidelity and realism. Extensive experiments demonstrate that our approach surpasses existing state-of-the-art methods, achieving superior performance in both visual quality and quantitative assessments. The code and pre-trained models will be made publicly available at this https URL.
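To illustrate the idea behind an affine-transformation-based landmark metric, the sketch below contrasts a conventional mean L2 landmark distance with a variant that first fits a least-squares affine map from predicted to ground-truth landmarks and then measures the residual. This is a hypothetical NumPy illustration of the general principle, not the paper's actual metric; the function names and formulation are our own assumptions.

```python
import numpy as np

def l2_landmark_error(pred, gt):
    """Conventional metric: mean L2 distance between corresponding landmarks."""
    return np.mean(np.linalg.norm(pred - gt, axis=1))

def affine_aligned_error(pred, gt):
    """Illustrative affine-based variant (our assumption, not the paper's exact
    metric): fit the least-squares affine map from predicted to ground-truth
    landmarks, then measure the residual error. This factors out global
    translation, rotation, scale, and shear before comparing landmark layouts."""
    n = pred.shape[0]
    # Homogeneous coordinates [x, y, 1] for each predicted landmark.
    A = np.hstack([pred, np.ones((n, 1))])
    # Solve A @ M ≈ gt for the 3x2 affine matrix M in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, gt, rcond=None)
    aligned = A @ M
    return np.mean(np.linalg.norm(aligned - gt, axis=1))

if __name__ == "__main__":
    gt = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    # A prediction that is a rotated, scaled, shifted copy of the ground truth:
    # plain L2 penalizes it heavily, while the affine-aligned residual is ~0.
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pred = 1.1 * (gt @ R.T) + np.array([0.2, -0.1])
    print(l2_landmark_error(pred, gt), affine_aligned_error(pred, gt))
```

The point of the toy example: a prediction whose landmarks differ from the ground truth only by a global affine change incurs a large raw L2 error but a near-zero affine-aligned residual, so the latter better isolates genuine shape discrepancies.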

@article{wang2025_2505.18469,
  title={HonestFace: Towards Honest Face Restoration with One-Step Diffusion Model},
  author={Jingkai Wang and Wu Miao and Jue Gong and Zheng Chen and Xing Liu and Hong Gu and Yutong Liu and Yulun Zhang},
  journal={arXiv preprint arXiv:2505.18469},
  year={2025}
}