UniSDF

Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections

Fangjinhua Wang1,2 Marie-Julie Rakotosaona2 Michael Niemeyer2 Richard Szeliski2
Marc Pollefeys1 Federico Tombari2
1ETH Zürich 2Google

UniSDF enables high-quality 3D reconstruction of complex scenes with reflections

Abstract

Neural 3D scene representations have shown great potential for 3D reconstruction from 2D images. However, reconstructing real-world captures of complex scenes remains a challenge. Existing generic 3D reconstruction methods often struggle to represent fine geometric details and do not adequately model reflective surfaces of large-scale scenes. Techniques that explicitly focus on reflective surfaces can model complex and detailed reflections by exploiting better reflection parameterizations. However, we observe that these methods are often not robust in real unbounded scenarios where both non-reflective and reflective components are present. In this work, we propose UniSDF, a general-purpose 3D reconstruction method that can reconstruct large complex scenes with reflections. We investigate both view-based and reflection-based color prediction parameterization techniques and find that explicitly blending these representations in 3D space enables reconstruction of surfaces that are more geometrically accurate, especially for reflective surfaces. We further combine this representation with a multi-resolution grid backbone that is trained in a coarse-to-fine manner, enabling faster reconstructions than prior methods. Extensive experiments on the object-level datasets DTU and Shiny Blender, as well as the unbounded datasets Mip-NeRF 360 and Ref-NeRF Real, demonstrate that our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.

TL;DR: We unify different radiance field parameterizations and combine them with a multi-resolution grid backbone to achieve high-quality reconstruction of complex scenes with reflections.
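The core idea above, blending a view-direction-based color branch with a reflected-direction-based color branch in 3D space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reflection formula is the standard one used by reflection-based parameterizations such as Ref-NeRF, and the per-point blending weight `w` (here passed in as an array) would in practice be predicted by a learned network.

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect the view direction about the surface normal.

    Standard reflection formula: w_r = 2 (w . n) n - w,
    where view_dir points from the surface toward the camera.
    """
    return 2.0 * np.sum(view_dir * normal, axis=-1, keepdims=True) * normal - view_dir

def blended_color(c_view, c_ref, w):
    """Convex combination of the two color branches.

    c_view: color predicted from the camera view direction.
    c_ref:  color predicted from the reflected view direction.
    w:      blending weight in [0, 1]; in the method this is a
            learned quantity that varies over 3D space, so reflective
            regions can favor c_ref and diffuse regions c_view.
    """
    return w * c_ref + (1.0 - w) * c_view
```

For example, a ray grazing a horizontal surface (`view_dir = [1, 0, 0]`, `normal = [0, 0, 1]`) reflects to `[-1, 0, 0]`, and setting `w = 0` recovers the purely view-based color.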

Baseline Comparison

BakedSDF
Ref-NeRF
Neuralangelo
Ours

Additional Results of our Method on the Ref-NeRF Real Dataset

Additional Results of our Method on the Mip-NeRF 360 Dataset

Citation

If you want to cite our work, please use:

    @InProceedings{wang2023unisdf,
      author    = {Fangjinhua Wang and Marie-Julie Rakotosaona and Michael Niemeyer and Richard Szeliski and Marc Pollefeys and Federico Tombari},
      title     = {UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections},
      booktitle = {arXiv},
      year      = {2023},
    }