
arXiv:2409.15045
AIM 2024 Sparse Neural Rendering Challenge: Methods and Results

23 September 2024
Michal Nazarczuk
Sibi Catley-Chandar
T. Tanay
Richard Shaw
Eduardo Pérez-Pellitero
Radu Timofte
Xinyu Yan
Pan Wang
Y. Guo
Yongxin Wu
Y. Cai
Yanan Yang
Junting Li
Yanghong Zhou
P. Y. Mok
Zongqi He
Zhe Xiao
Kin-Chung Chan
Hana Lebeta Goshu
Cuixin Yang
Rongkang Dong
Jun Xiao
Kin-Man Lam
Jiayao Hao
Qiong Gao
Yanyan Zu
Junpei Zhang
Licheng Jiao
Xu Liu
Kuldeep Purohit
Abstract

This paper reviews the Sparse Neural Rendering challenge, held as part of the Advances in Image Manipulation (AIM) workshop in conjunction with ECCV 2024. The manuscript focuses on the competition setup, the proposed methods, and their respective results. The challenge aims to produce novel camera-view synthesis of diverse scenes from sparse image observations. It comprises two tracks with differing levels of sparsity: 3 views in Track 1 (very sparse) and 9 views in Track 2 (sparse). Participants were asked to optimise objective fidelity to the ground-truth images, as measured by the Peak Signal-to-Noise Ratio (PSNR) metric. Both tracks use the newly introduced Sparse Rendering (SpaRe) dataset and the popular DTU MVS dataset. In this challenge, 5 teams submitted final results to Track 1 and 4 teams submitted final results to Track 2. The submitted models are varied and push the boundaries of the current state of the art in sparse neural rendering. A detailed description of all models developed in the challenge is provided in this paper.
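For reference, PSNR — the ranking metric named above — can be computed as in the minimal sketch below, assuming images normalised to [0, 1]. This is a generic definition, not the challenge's official evaluation script, whose exact preprocessing may differ.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio between a rendered and a ground-truth image.

    PSNR = 10 * log10(max_val^2 / MSE), in decibels; higher is better.
    """
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 0.5 over a [0, 1] image gives MSE = 0.25,
# so PSNR = 10 * log10(1 / 0.25) ≈ 6.02 dB.
gt = np.zeros((8, 8, 3))
render = np.full((8, 8, 3), 0.5)
print(round(psnr(render, gt), 2))  # → 6.02
```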
