OFER: Occluded Face Expression Reconstruction

29 October 2024
Pratheba Selvaraju
Victoria Fernandez-Abrevaya
Timo Bolkart
Rick Akkerman
Tianyu Ding
Faezeh Amjadi
Ilya Zharkov
Communities: DiffM · CVBM · 3DH
Abstract

Reconstructing 3D face models from a single image is an inherently ill-posed problem, which becomes even more challenging in the presence of occlusions. In addition to reducing the number of available observations, occlusions introduce an extra source of ambiguity, where multiple reconstructions can be equally valid. Despite the ubiquity of the problem, very few methods address its multi-hypothesis nature. In this paper, we introduce OFER, a novel approach for single-image 3D face reconstruction that can generate plausible, diverse, and expressive 3D faces, even under strong occlusions. Specifically, we train two diffusion models to generate the shape and expression coefficients of a face parametric model, conditioned on the input image. This approach captures the multi-modal nature of the problem, generating a distribution of solutions as output. However, to maintain consistency across the diverse expressions, the challenge is to select the best-matching shape. To achieve this, we propose a novel ranking mechanism that sorts the outputs of the shape diffusion network based on predicted shape-accuracy scores. We evaluate our method using standard benchmarks and introduce CO-545, a new protocol and dataset designed to assess the accuracy of expressive faces under occlusion. Our results show improved performance over occlusion-based methods, while also enabling the generation of diverse expressions for a given image.
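
The abstract describes a two-stage pipeline: sample many candidate shape and expression coefficients from image-conditioned diffusion models, then rank the shape hypotheses by a predicted accuracy score and keep the best one. The PyTorch sketch below illustrates only that inference loop; every module, name, and tensor size in it (CoeffDiffusion, ShapeRanker, the FLAME-like coefficient dimensions) is an illustrative assumption, not the authors' released implementation.

# Minimal sketch of a multi-hypothesis inference loop as described in the
# abstract. All names, architectures, and shapes are assumptions for
# illustration, not the OFER codebase.
import torch
import torch.nn as nn

N_SHAPE, N_EXPR, N_HYP = 300, 100, 16  # FLAME-like coefficient sizes (assumed)

class CoeffDiffusion(nn.Module):
    """Stand-in for a diffusion model over parametric face coefficients,
    conditioned on image features; a real model runs iterative denoising."""
    def __init__(self, n_coeffs, feat_dim=512):
        super().__init__()
        self.n_coeffs = n_coeffs
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_coeffs, 256), nn.ReLU(),
            nn.Linear(256, n_coeffs))

    @torch.no_grad()
    def sample(self, img_feat, n_samples):
        # One-step stand-in for the reverse diffusion process.
        noise = torch.randn(n_samples, self.n_coeffs)
        cond = img_feat.expand(n_samples, -1)
        return self.net(torch.cat([cond, noise], dim=-1))

class ShapeRanker(nn.Module):
    """Predicts a scalar accuracy score per shape hypothesis
    (the paper's ranking mechanism; this architecture is assumed)."""
    def __init__(self, n_coeffs, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_coeffs, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, img_feat, shapes):
        cond = img_feat.expand(shapes.shape[0], -1)
        return self.net(torch.cat([cond, shapes], dim=-1)).squeeze(-1)

img_feat = torch.randn(1, 512)                 # features from an image encoder
shape_model, expr_model = CoeffDiffusion(N_SHAPE), CoeffDiffusion(N_EXPR)
ranker = ShapeRanker(N_SHAPE)

shapes = shape_model.sample(img_feat, N_HYP)   # diverse shape hypotheses
scores = ranker(img_feat, shapes)              # predicted accuracy per hypothesis
best_shape = shapes[scores.argmax()]           # keep the top-ranked shape
exprs = expr_model.sample(img_feat, N_HYP)     # diverse expressions, one shared shape

A real system would replace the one-step sample() with full iterative denoising and decode the chosen coefficients through the parametric face model to obtain a mesh.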

View on arXiv
@article{selvaraju2025_2410.21629,
  title={OFER: Occluded Face Expression Reconstruction},
  author={Pratheba Selvaraju and Victoria Fernandez Abrevaya and Timo Bolkart and Rick Akkerman and Tianyu Ding and Faezeh Amjadi and Ilya Zharkov},
  journal={arXiv preprint arXiv:2410.21629},
  year={2025}
}