
LSNIF: Locally-Subdivided Neural Intersection Function

Abstract

Neural representations have shown the potential to accelerate ray casting in a conventional ray-tracing-based rendering pipeline. We introduce a novel approach called Locally-Subdivided Neural Intersection Function (LSNIF) that replaces the bottom-level BVHs traditionally used as geometric representations with a neural network. Our method introduces a sparse hash grid encoding scheme incorporating geometry voxelization, a scene-agnostic training data collection, and a tailored loss function. It enables the network to output not only visibility but also hit-point information and material indices. LSNIF is trained offline for a single object, allowing it to serve as a replacement for that object's BVH. With these designs, the network can handle hit-point queries from arbitrary viewpoints, supporting all types of rays in the rendering pipeline. We demonstrate that LSNIF can render a variety of scenes, including real-world scenes designed for other path tracers, while achieving a memory footprint reduction of up to 106.2x compared to a compressed BVH.
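
To make the abstract's query interface concrete, the following is a minimal C++ sketch of how a per-object neural intersection function might stand in for a bottom-level BVH lookup, returning visibility, hit-point information, and a material index as the paper describes. All names (LSNIFObject, NeuralHit, intersect) and the placeholder body are illustrative assumptions, not the authors' implementation; the actual method encodes rays with a sparse hash grid over a voxelized object and evaluates a trained network.

// Hypothetical sketch: a per-object neural intersection function used in
// place of a bottom-level BVH query. Names and logic are assumptions.
#include <array>
#include <cstdio>
#include <optional>

struct Ray {
    std::array<float, 3> origin;
    std::array<float, 3> direction;  // assumed normalized
};

// What LSNIF is described as predicting: visibility plus hit-point
// information and a material index.
struct NeuralHit {
    std::array<float, 3> position;
    std::array<float, 3> normal;
    int materialIndex;
};

// Stand-in for one object's offline-trained network (the real method uses a
// sparse hash grid encoding over the voxelized object; omitted here).
class LSNIFObject {
public:
    std::optional<NeuralHit> intersect(const Ray& ray) const {
        // 1. Encode the ray against the object's local voxel grid (omitted).
        // 2. Evaluate the network for visibility, hit point, normal, material.
        // Placeholder so the sketch stays self-contained: always report a miss.
        (void)ray;
        return std::nullopt;
    }
};

int main() {
    LSNIFObject object;  // would be loaded from offline-trained weights
    Ray ray{{0.f, 0.f, -5.f}, {0.f, 0.f, 1.f}};
    if (auto hit = object.intersect(ray)) {
        std::printf("hit material %d\n", hit->materialIndex);
    } else {
        std::printf("miss\n");
    }
    return 0;
}

In a full renderer, a top-level acceleration structure would still route rays to objects; only the per-object (bottom-level) intersection test is replaced by the network query, which is what enables the reported memory savings over a compressed BVH.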

@article{fujieda2025_2504.21627,
  title={LSNIF: Locally-Subdivided Neural Intersection Function},
  author={Shin Fujieda and Chih-Chen Kao and Takahiro Harada},
  journal={arXiv preprint arXiv:2504.21627},
  year={2025}
}