FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting

4 June 2025
Hengyu Liu
Yuehao Wang
Chenxin Li
Ruisi Cai
Kevin Wang
Wuyang Li
Pavlo Molchanov
Peihao Wang
Zhangyang Wang
Topic: 3DGS
Main: 8 pages · 8 figures · Bibliography: 2 pages · 12 tables · Appendix: 5 pages
Abstract

3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis due to its efficient rendering capabilities. However, 3DGS demands relatively significant GPU memory, limiting its use on devices with restricted computational resources. Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS but often requiring a fine-tuning stage and lacking adaptability to the specific memory needs of different devices. In this work, we present an elastic inference method for 3DGS. Given an input for the desired model size, our method selects and transforms a subset of Gaussians, achieving substantial rendering performance without additional fine-tuning. We introduce a tiny learnable module that controls Gaussian selection based on the input percentage, along with a transformation module that adjusts the selected Gaussians to complement the performance of the reduced model. Comprehensive experiments on ZipNeRF, MipNeRF and Tanks&Temples scenes demonstrate the effectiveness of our approach. Code is available at this https URL.
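
The abstract describes two components: a ratio-conditioned module that scores and selects Gaussians, and a transformation module that adjusts the survivors to compensate for the pruned ones. The PyTorch sketch below illustrates that general idea only; the class names, feature layouts, network sizes, and residual-update scheme are all assumptions for illustration, not the paper's actual implementation (see the authors' released code for that).

import torch
import torch.nn as nn

class GaussianSelector(nn.Module):
    """Scores each Gaussian, conditioned on the requested keep-ratio (assumed design)."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        # +1 input channel carries the target keep-ratio, broadcast to every Gaussian
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor, ratio: float) -> torch.Tensor:
        # feats: (N, feat_dim) per-Gaussian features (e.g. opacity, scale, ...)
        r = torch.full((feats.shape[0], 1), ratio, device=feats.device)
        return self.mlp(torch.cat([feats, r], dim=-1)).squeeze(-1)  # (N,) scores

class GaussianTransform(nn.Module):
    """Predicts residual adjustments for the kept Gaussians so the reduced
    model can compensate for the pruned ones (assumed residual scheme)."""
    def __init__(self, param_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(param_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, params: torch.Tensor, ratio: float) -> torch.Tensor:
        r = torch.full((params.shape[0], 1), ratio, device=params.device)
        return params + self.mlp(torch.cat([params, r], dim=-1))  # residual update

def elastic_subset(params, feats, selector, transform, ratio: float):
    """Keep the top-scoring `ratio` fraction of Gaussians, then adjust them."""
    scores = selector(feats, ratio)
    k = max(1, int(ratio * params.shape[0]))
    idx = scores.topk(k).indices
    return transform(params[idx], ratio)

Because both modules take the ratio as an input rather than baking it into the weights, a single trained model can serve any memory budget at inference time, which matches the "train once, deploy everywhere" framing of the title.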

@article{liu2025_2506.04174,
  title={FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting},
  author={Hengyu Liu and Yuehao Wang and Chenxin Li and Ruisi Cai and Kevin Wang and Wuyang Li and Pavlo Molchanov and Peihao Wang and Zhangyang Wang},
  journal={arXiv preprint arXiv:2506.04174},
  year={2025}
}