Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering

22 May 2025
Zhongpai Gao
Meng Zheng
Benjamin Planche
Anwesa Choudhuri
Terrence Chen
Ziyan Wu
Communities: MedIm · 3DGS · VGen
Abstract

Volumetric rendering of Computed Tomography (CT) scans is crucial for visualizing complex 3D anatomical structures in medical imaging. Current high-fidelity approaches, especially neural rendering techniques, require time-consuming per-scene optimization, which limits clinical applicability due to high computational demands and poor generalizability. We propose Render-FM, a novel foundation model for direct, real-time volumetric rendering of CT scans. Render-FM employs an encoder-decoder architecture that directly regresses 6D Gaussian Splatting (6DGS) parameters from CT volumes, eliminating per-scan optimization through large-scale pre-training on diverse medical data. By integrating robust feature extraction with the expressive power of 6DGS, our approach efficiently generates high-quality, real-time interactive 3D visualizations across diverse clinical CT data. Experiments demonstrate that Render-FM achieves visual fidelity comparable to or better than specialized per-scan methods while drastically reducing preparation time from nearly an hour to seconds for a single inference step. This advancement enables seamless integration into real-time surgical planning and diagnostic workflows. The project page is: this https URL.
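The abstract describes an encoder-decoder that maps a CT volume directly to 6D Gaussian Splatting parameters in a single forward pass, replacing per-scan optimization. The sketch below illustrates that idea in PyTorch; the layer sizes, the number of Gaussians, and the per-Gaussian parameter layout are assumptions for illustration only, not the authors' architecture.

import torch
import torch.nn as nn

class RenderFMSketch(nn.Module):
    """Minimal sketch of a CT-volume -> Gaussian-parameter regressor.
    Hypothetical layer sizes and parameter layout, not the authors' code."""

    def __init__(self, num_gaussians=4096, params_per_gaussian=32):
        super().__init__()
        # 3D CNN encoder: downsample the CT volume into a latent feature grid.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(8),
        )
        # Decoder head: regress a fixed budget of Gaussian parameters in one pass
        # (e.g. positions, covariances, opacities, view-dependent color terms).
        self.decoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8 * 8, 2048), nn.ReLU(),
            nn.Linear(2048, num_gaussians * params_per_gaussian),
        )
        self.num_gaussians = num_gaussians
        self.params_per_gaussian = params_per_gaussian

    def forward(self, ct_volume):
        # ct_volume: (B, 1, D, H, W) intensity-normalized CT scan.
        latent = self.encoder(ct_volume)
        params = self.decoder(latent)
        # One inference step yields all splatting parameters: no per-scan optimization.
        return params.view(-1, self.num_gaussians, self.params_per_gaussian)

if __name__ == "__main__":
    model = RenderFMSketch()
    dummy_ct = torch.randn(1, 1, 128, 128, 128)  # placeholder CT volume
    gaussians = model(dummy_ct)
    print(gaussians.shape)  # (1, 4096, 32)

In this reading, a single forward pass over a pre-processed CT volume would take seconds and feed a standard splatting rasterizer for real-time interaction, which is what the abstract contrasts with per-scene optimization pipelines that take nearly an hour per scan.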

@article{gao2025_2505.17338,
  title={Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering},
  author={Zhongpai Gao and Meng Zheng and Benjamin Planche and Anwesa Choudhuri and Terrence Chen and Ziyan Wu},
  journal={arXiv preprint arXiv:2505.17338},
  year={2025}
}