LUCAS: Layered Universal Codec Avatars

27 February 2025
Di Liu
Teng Deng
Giljoo Nam
Yu Rong
Stanislav Pidhorskyi
Junxuan Li
Jason M. Saragih
Dimitris N. Metaxas
Chen Cao
    3DGS
Abstract

Photorealistic 3D head avatar reconstruction faces critical challenges in modeling dynamic face-hair interactions and achieving cross-identity generalization, particularly during expressions and head movements. We present LUCAS, a novel Universal Prior Model (UPM) for codec avatar modeling that disentangles face and hair through a layered representation. Unlike previous UPMs that treat hair as an integral part of the head, our approach separates the modeling of the hairless head and hair into distinct branches. LUCAS is the first to introduce a mesh-based UPM, facilitating real-time rendering on devices. Our layered representation also improves the anchor geometry for precise and visually appealing Gaussian renderings. Experimental results indicate that LUCAS outperforms existing single-mesh and Gaussian-based avatar models in both quantitative and qualitative assessments, including evaluations on held-out subjects in zero-shot driving scenarios. LUCAS demonstrates superior dynamic performance in managing head pose changes, expression transfer, and hairstyle variations, thereby advancing the state-of-the-art in 3D head avatar reconstruction.

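As a rough illustration of the layered idea described in the abstract, the sketch below decodes a shared identity/expression code through two separate branches, one for the hairless head and one for hair, and composes their Gaussian anchor parameters for rendering. This is a minimal hypothetical sketch, not the authors' implementation: the module names, latent dimensions, anchor counts, and the 14-value Gaussian parameterization are all illustrative assumptions.

```python
# Hypothetical sketch of a layered (face/hair) avatar decoder.
# All dimensions and the Gaussian parameterization are assumptions,
# not taken from the LUCAS paper.
import torch
import torch.nn as nn


class LayeredAvatarDecoder(nn.Module):
    def __init__(self, id_dim=256, expr_dim=64,
                 n_face_anchors=4096, n_hair_anchors=4096):
        super().__init__()
        in_dim = id_dim + expr_dim
        # Assumed per-anchor Gaussian parameters: 3D offset, 4D rotation
        # (quaternion), 3D scale, 1D opacity, 3D color = 14 values.
        self.face_branch = nn.Sequential(      # hairless-head branch
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, n_face_anchors * 14),
        )
        self.hair_branch = nn.Sequential(      # separate hair branch
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, n_hair_anchors * 14),
        )
        self.n_face = n_face_anchors
        self.n_hair = n_hair_anchors

    def forward(self, id_code, expr_code):
        z = torch.cat([id_code, expr_code], dim=-1)
        face = self.face_branch(z).view(-1, self.n_face, 14)
        hair = self.hair_branch(z).view(-1, self.n_hair, 14)
        # Compose the two layers into a single set of Gaussian anchors
        # that a Gaussian-splatting renderer could consume.
        return torch.cat([face, hair], dim=1)


if __name__ == "__main__":
    decoder = LayeredAvatarDecoder()
    id_code = torch.randn(1, 256)    # per-subject identity code (assumed)
    expr_code = torch.randn(1, 64)   # per-frame expression code (assumed)
    gaussians = decoder(id_code, expr_code)
    print(gaussians.shape)           # torch.Size([1, 8192, 14])
```

Keeping the hair branch separate in this way mirrors the abstract's claim that disentangling face and hair lets each layer be driven and swapped independently (e.g., for hairstyle variation) without disturbing the head geometry.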
@article{liu2025_2502.19739,
  title={LUCAS: Layered Universal Codec Avatars},
  author={Di Liu and Teng Deng and Giljoo Nam and Yu Rong and Stanislav Pidhorskyi and Junxuan Li and Jason Saragih and Dimitris N. Metaxas and Chen Cao},
  journal={arXiv preprint arXiv:2502.19739},
  year={2025}
}