ResearchTrend.AI

NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors

15 April 2025 · Yanrui Bin, Wenbo Hu, Haoyuan Wang, Xinya Chen, Bing Wang
Topics: DiffM
arXiv:2504.11427 (abs · PDF · HTML)

Papers citing "NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors"

4 of 4 citing papers shown:

1. Generative Perception of Shape and Material from Differential Motion
   Xinran Nicole Han, Ko Nishino, T. Zickler
   Topics: DiffM, VGen · 03 Jun 2025

2. Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
   Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, Saining Xie
   Topics: OCL · 09 Oct 2024

3. Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
   Jing He, Haodong Li, Wei Yin, Yixun Liang, Leheng Li, Kaiqiang Zhou, Hongbo Zhang, Bingbing Liu, Ying-Cong Chen
   Topics: DiffM, VLM · 26 Sep 2024

4. Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think
   Gonzalo Martin Garcia, Karim Abou Zeid, Christian Schmidt, Daan de Geus, Alexander Hermans, Bastian Leibe
   17 Sep 2024