VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction

CVPR 2023

* Equal Contribution

1EPFL   2ETH Zurich   3Microsoft Mixed Reality & AI Zurich Lab  

Abstract

The success of Neural Radiance Fields (NeRF) in novel view synthesis has inspired researchers to propose neural implicit scene reconstruction. However, most existing neural implicit reconstruction methods optimize per-scene parameters and therefore lack generalizability to new scenes. We introduce VolRecon, a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF). To reconstruct scenes with fine details and little noise, VolRecon combines projection features, aggregated from multi-view image features, with volume features interpolated from a coarse global feature volume. Using a ray transformer, we compute SRDF values at sampled points along a ray and then render color and depth. On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction. Furthermore, our approach exhibits good generalization on the large-scale ETH3D benchmark.
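The last rendering step can be sketched in code: given SRDF values predicted at sampled points along a ray, volume-rendering weights accumulate the per-sample colors and sample depths into a rendered color and depth. Below is a minimal PyTorch sketch assuming a NeuS-style sigmoid conversion from signed distance to opacity; the function name render_from_srdf, the sharpness parameter inv_s, and this particular weighting are illustrative assumptions, not the paper's exact formulation.

import torch

def render_from_srdf(srdf, colors, t_vals, inv_s=64.0):
    # srdf:   (N,) signed ray distances at N sampled points on one ray
    # colors: (N, 3) predicted radiance at each sample
    # t_vals: (N,) sample depths along the ray
    # inv_s:  sigmoid sharpness (illustrative value, not from the paper)
    cdf = torch.sigmoid(srdf * inv_s)
    # Opacity from the decrease of the sigmoid between consecutive
    # samples (NeuS-style assumption), clamped to a valid range.
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)).clamp(0.0, 1.0)
    # Standard volume-rendering accumulation: transmittance is the
    # cumulative product of (1 - alpha) over preceding samples.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-7]), dim=0
    )[:-1]
    weights = alpha * trans
    color = (weights[:, None] * colors[:-1]).sum(dim=0)  # rendered RGB
    depth = (weights * t_vals[:-1]).sum(dim=0)           # rendered depth
    return color, depth

Rendering depth from the same weights used for color lets one set of SRDF predictions support both photometric and depth supervision.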

Reconstruction Results


[Interactive viewer: source views alongside the reconstructed 3D model, with roll/pitch/yaw and scale controls]

Poster

BibTeX

@article{ren2022volrecon,
  author    = {Yufan Ren and Fangjinhua Wang and Tong Zhang and Marc Pollefeys and Sabine Süsstrunk},
  title     = {VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction},
  journal   = {arXiv},
  year      = {2022},
}