
Released

Paper

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video

MPS-Authors
Tretschk, Edgar
Computer Graphics, MPI for Informatics, Max Planck Society

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society

Golyanik, Vladislav
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:2012.12247.pdf (Preprint), 4 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., & Theobalt, C. (2020). Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video. Retrieved from https://arxiv.org/abs/2012.12247.


Cite as: https://hdl.handle.net/21.11116/0000-0007-EA00-1
Abstract
In this tech report, we present the current state of our ongoing work on
reconstructing Neural Radiance Fields (NeRF) of general non-rigid scenes via
ray bending. Non-Rigid NeRF (NR-NeRF) takes RGB images of a deforming object
(e.g., from a monocular video) as input and learns a geometry and appearance
representation that not only allows us to reconstruct the input sequence but
also to re-render any time step from novel camera views with high fidelity. In
particular, we show that a consumer-grade camera is sufficient to synthesize
convincing bullet-time videos of short and simple scenes. In addition, the
resulting representation enables correspondence estimation across views and
time, and provides rigidity scores for each point in the scene. We urge the
reader to watch the supplemental videos for qualitative results. We will
release our code.
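
As a rough illustration of the ray-bending idea summarized above, the sketch
below displaces the sample points along each camera ray with a small MLP
conditioned on a per-frame latent code, scales the displacement by a predicted
per-point rigidity score, and would then hand the bent points to a canonical
(time-independent) NeRF. This is a minimal sketch under assumptions: the module
name RayBendingMLP, the layer sizes, the latent-code interface, and the
canonical_nerf call are illustrative and not the released NR-NeRF
implementation.

import torch
import torch.nn as nn

class RayBendingMLP(nn.Module):
    # Illustrative sketch of ray bending: predict a per-point offset and a
    # rigidity score from the point position and a per-frame latent code.
    # Layer sizes and the latent interface are assumptions, not the paper's
    # exact design.
    def __init__(self, latent_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.offset_head = nn.Linear(hidden, 3)    # 3D displacement of each sample point
        self.rigidity_head = nn.Linear(hidden, 1)  # ~0: rigid point, ~1: free to deform

    def forward(self, points: torch.Tensor, latent: torch.Tensor):
        # points: (N, 3) sample positions along camera rays for one time step
        # latent: (latent_dim,) code identifying that time step / frame
        features = self.trunk(
            torch.cat([points, latent.expand(points.shape[0], -1)], dim=-1)
        )
        offset = self.offset_head(features)
        rigidity = torch.sigmoid(self.rigidity_head(features))
        # Bend the samples into the canonical, time-independent volume;
        # points with low rigidity scores keep (almost) their original position.
        bent_points = points + rigidity * offset
        return bent_points, rigidity

# Usage sketch (canonical_nerf is a hypothetical time-independent NeRF MLP):
#   bender = RayBendingMLP()
#   bent, rigidity = bender(sample_points, frame_code)
#   rgb, sigma = canonical_nerf(bent, view_dirs)

In a setup like this, the rigidity output corresponds to the per-point rigidity
scores mentioned in the abstract, and keeping the color/density network
canonical is what would allow any time step to be re-rendered from novel camera
views.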