MS-NeRF: Multi-Space Neural Radiance Fields

7 May 2023
Ze-Xin Yin, Peng-Yi Jiao, Jiaxiong Qiu, Ming-Ming Cheng, Bo Ren
Abstract

Existing Neural Radiance Fields (NeRF) methods struggle in scenes containing reflective objects, often producing blurry or distorted renderings. Instead of computing a single radiance field, we propose a multi-space neural radiance field (MS-NeRF) that represents the scene with a group of feature fields in parallel sub-spaces, which helps the network better handle reflective and refractive objects. Our multi-space scheme works as an enhancement to existing NeRF methods, adding only a small computational overhead for training and inferring the extra-space outputs. We design multi-space modules for representative MLP-based and grid-based NeRF methods; on reflective regions they improve Mip-NeRF 360 by 4.15 dB in PSNR with 0.5% extra parameters and TensoRF by 2.71 dB with 0.046% extra parameters, without degrading rendering quality in other regions. We further construct a novel dataset of 33 synthetic scenes and 7 real captured scenes with complex reflection and refraction, with camera paths designed to fully benchmark the robustness of NeRF-based methods. Extensive experiments show that our approach significantly outperforms existing single-space NeRF methods at rendering high-quality scenes involving complex light paths through mirror-like objects. The source code, dataset, and results are available via our project page: this https URL.
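
To make the multi-space scheme concrete, below is a minimal sketch of what such an output head might look like, written in PyTorch. It is an illustration inferred from the abstract, not the authors' actual architecture: the module name MultiSpaceHead and the sizes backbone_dim, num_subspaces, and feat_dim are all hypothetical. The idea sketched is that the backbone emits K parallel sub-space feature fields, each sub-space is volume-rendered into a per-ray feature and decoded to RGB by a small shared MLP, and the K candidate colors are blended with learned weights.

import torch
import torch.nn as nn

class MultiSpaceHead(nn.Module):
    """Hypothetical multi-space output head on top of a NeRF backbone.

    All layer sizes and names are illustrative, not the paper's exact ones.
    """

    def __init__(self, backbone_dim: int = 256, num_subspaces: int = 8,
                 feat_dim: int = 24):
        super().__init__()
        self.K = num_subspaces
        self.feat_dim = feat_dim
        # Small extra layers added to an existing NeRF backbone:
        # one linear layer emits K feature vectors per ray sample ...
        self.subspace_feats = nn.Linear(backbone_dim, num_subspaces * feat_dim)
        # ... a shared decoder turns each rendered feature into RGB ...
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(),
            nn.Linear(32, 3), nn.Sigmoid(),
        )
        # ... and a gate predicts blending weights over the K sub-spaces.
        self.gate = nn.Linear(num_subspaces * feat_dim, num_subspaces)

    def forward(self, h: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        """h: [R, S, D] backbone features per ray sample;
        weights: [R, S] standard NeRF volume-rendering weights."""
        R, S, _ = h.shape
        feats = self.subspace_feats(h).view(R, S, self.K, self.feat_dim)
        # Volume-render each sub-space's feature field along the ray.
        rendered = (weights[..., None, None] * feats).sum(dim=1)   # [R, K, F]
        rgb_per_space = self.decoder(rendered)                     # [R, K, 3]
        gate = torch.softmax(self.gate(rendered.flatten(1)), -1)   # [R, K]
        # Blend the K candidate colors into the final pixel color.
        return (gate[..., None] * rgb_per_space).sum(dim=1)        # [R, 3]

With typical sizes like those above (a 256-dim backbone, K = 8, 24-dim features), these three small layers add roughly 50k parameters on top of the backbone, which is consistent with the abstract's claim that the extra-space outputs cost only a small overhead.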

@article{yin2025_2305.04268,
  title={MS-NeRF: Multi-Space Neural Radiance Fields},
  author={Ze-Xin Yin and Peng-Yi Jiao and Jiaxiong Qiu and Ming-Ming Cheng and Bo Ren},
  journal={arXiv preprint arXiv:2305.04268},
  year={2025}
}