
Report (Released)

Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization

MPS-Authors
Qian, Neng
Computer Graphics, MPI for Informatics, Max Planck Society

Wang, Jiayi
Computer Graphics, MPI for Informatics, Max Planck Society

Mueller, Franziska
Computer Graphics, MPI for Informatics, Max Planck Society

Bernard, Florian
Computer Graphics, MPI for Informatics, Max Planck Society

Golyanik, Vladislav
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

MPI-I-2020-4-001.pdf (Any fulltext), 5 MB

Citation

Qian, N., Wang, J., Mueller, F., Bernard, F., Golyanik, V., & Theobalt, C. (2020). Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization (MPI-I-2020-4-001). Saarbrücken: Max-Planck-Institut für Informatik.


Cite as: https://hdl.handle.net/21.11116/0000-0006-9128-9
Abstract
3D hand reconstruction from image data is a widely studied problem in computer vision and graphics, and has particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for hand shape and pose and do not model texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and requires only a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems, such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
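
The abstract does not spell out the model's internals, but parametric appearance models of this kind are commonly linear (PCA-style) basis models: a texture is decoded as a mean texture plus a weighted combination of learned components, and a self-supervised photometric loss compares a rendered hand against the input photograph. The sketch below illustrates that general convention only; all names, shapes, and data are hypothetical assumptions, not the released model's actual interface or dimensions.

import numpy as np

# Minimal sketch of a linear (PCA-style) parametric texture model:
# a texture is decoded as  T(alpha) = mean_texture + basis @ alpha.
# Toy shapes and random stand-in data; everything here is an
# illustrative assumption, not the report's confirmed method.
N_TEXELS = 64 * 64 * 3     # flattened RGB texture map (toy resolution)
N_COMPONENTS = 10          # number of appearance components (toy count)

rng = np.random.default_rng(0)
mean_texture = rng.random(N_TEXELS)                    # stand-in mean texture
basis = rng.standard_normal((N_TEXELS, N_COMPONENTS))  # stand-in basis matrix

def texture_from_params(alpha):
    """Decode appearance parameters alpha into a flattened RGB texture."""
    return mean_texture + basis @ alpha

def photometric_loss(rendered, observed):
    """Self-supervised photometric loss: mean absolute difference between
    a rendered hand image and the observed input image."""
    return float(np.mean(np.abs(rendered - observed)))

# Sample a new appearance by drawing alpha from a standard normal and
# decoding it; personalization would instead optimize alpha so that the
# rendered hand minimizes the photometric loss against a real photo.
alpha = rng.standard_normal(N_COMPONENTS)
texture = texture_from_params(alpha)

In the full pipeline described by the abstract, the decoded texture would be applied to a hand mesh and passed through a differentiable (neural) rendering layer, so the photometric loss can be backpropagated to the appearance parameters during training.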