
TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features

Main: 8 pages · 12 figures · 4 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

As 3D content creation continues to grow, transferring semantic textures between 3D meshes remains a significant challenge in computer graphics. While recent methods leverage text-to-image diffusion models for texturing, they often struggle to preserve the appearance of the source texture during texture transfer. We present TriTex, a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface colors. Using an efficient triplane-based architecture, our method enables semantic-aware texture transfer to a novel target mesh. Despite training on just one example, it generalizes effectively to diverse shapes within the same category. Extensive evaluation on our newly created benchmark dataset shows that TriTex achieves superior texture transfer quality and fast inference times compared to existing methods. Our approach advances single-example texture transfer, providing a practical solution for maintaining visual coherence across related 3D models in applications like game development and simulation.
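To make the triplane idea concrete, below is a minimal sketch of a triplane-based field that maps 3D surface points to RGB colors. This is not the authors' implementation: the feature dimension, plane resolution, sum-aggregation of plane features, and the small MLP decoder are illustrative assumptions, and the abstract does not specify how semantic features condition the field.

```python
# Minimal triplane field sketch (assumed hyperparameters, not the TriTex architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriplaneField(nn.Module):
    """Maps 3D points in [-1, 1]^3 to RGB via three learned axis-aligned feature planes."""

    def __init__(self, resolution: int = 128, feat_dim: int = 32):
        super().__init__()
        # One learnable feature grid per plane: XY, XZ, YZ.
        self.planes = nn.Parameter(0.1 * torch.randn(3, feat_dim, resolution, resolution))
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) query points, assumed normalized to [-1, 1]^3.
        coords = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]  # project onto XY, XZ, YZ
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            # grid_sample expects input (B, C, H, W) and grid (B, H_out, W_out, 2).
            grid = uv.view(1, -1, 1, 2)
            sampled = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
            feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, feat_dim)
        return self.decoder(feats)  # (N, 3) predicted surface colors


if __name__ == "__main__":
    field = TriplaneField()
    pts = torch.rand(1024, 3) * 2 - 1  # random query points
    print(field(pts).shape)            # torch.Size([1024, 3])
```

In this sketch the surface color at a point is decoded from bilinearly sampled plane features; training on a single textured mesh would regress these predicted colors against the source texture at sampled surface points.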

@article{cohen-bar2025_2503.16630,
  title={TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features},
  author={Dana Cohen-Bar and Daniel Cohen-Or and Gal Chechik and Yoni Kasten},
  journal={arXiv preprint arXiv:2503.16630},
  year={2025}
}
