Item Details


Released

Paper

DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data

MPS-Authors
/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:1912.04302.pdf
(Preprint), 4MB

Citation

Božič, A., Zollhöfer, M., Theobalt, C., & Nießner, M. (2019). DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data. Retrieved from http://arxiv.org/abs/1912.04302.


Cite as: https://hdl.handle.net/21.11116/0000-0005-7DDE-6
Abstract
Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus. One recent approach proposes self-supervision based on non-rigid reconstruction. Unfortunately, this method fails for important cases such as highly non-rigid deformations. We first address this lack of data by introducing a novel semi-supervised strategy to obtain dense inter-frame correspondences from a sparse set of annotations. This way, we obtain a large dataset of 400 scenes, over 390,000 RGB-D frames, and 2,537 densely aligned frame pairs; in addition, we provide a test set along with several metrics for evaluation. Based on this corpus, we introduce a data-driven non-rigid feature matching approach, which we integrate into an optimization-based reconstruction pipeline. Here, we propose a new neural network that operates on RGB-D frames, while maintaining robustness under large non-rigid deformations and producing accurate predictions. Our approach significantly outperforms both existing non-rigid reconstruction methods that do not use learned data terms and learning-based approaches that only use self-supervision.
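
The abstract describes a learned feature-matching network that takes RGB-D frame pairs and predicts correspondences, which then feed an optimization-based reconstruction pipeline. As a rough, hypothetical illustration of that matching idea only (the class name RGBDMatcher, the layer sizes, the feature-map coordinate convention, and the heatmap formulation are assumptions, not the architecture described in the paper), a minimal PyTorch sketch of a correspondence-heatmap predictor might look as follows:

# Hypothetical sketch, not the authors' architecture: a shared encoder embeds
# 4-channel RGB-D frames (3 color + 1 depth) and scores a query pixel against
# every location of the target frame via dot-product similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGBDMatcher(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        # Shared convolutional encoder applied to both frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1),
        )

    def forward(self, src_rgbd, tgt_rgbd, src_pixel):
        # Dense, L2-normalized feature maps for source and target frames.
        f_src = F.normalize(self.encoder(src_rgbd), dim=1)
        f_tgt = F.normalize(self.encoder(tgt_rgbd), dim=1)
        b, c, h, w = f_src.shape
        # Feature vector of the query pixel (given in feature-map coordinates).
        u, v = src_pixel
        query = f_src[:, :, v, u]
        # Correlate the query feature with every target location and normalize
        # the scores into a correspondence probability heatmap.
        heatmap = (f_tgt * query[:, :, None, None]).sum(dim=1)
        return F.softmax(heatmap.view(b, -1), dim=1).view(b, h, w)

if __name__ == "__main__":
    net = RGBDMatcher()
    src = torch.randn(1, 4, 128, 128)   # synthetic RGB-D frame pair
    tgt = torch.randn(1, 4, 128, 128)
    prob = net(src, tgt, src_pixel=(10, 12))
    print(prob.shape, prob.sum().item())  # torch.Size([1, 32, 32]), ~1.0

In the paper's setting, predicted correspondences of this kind would serve as a learned data term inside the non-rigid reconstruction optimization; the sketch above covers only the matching side, run here on synthetic tensors.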