
Released

Paper

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images

MPS-Authors

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society;

Elgharib, Mohamed
Computer Graphics, MPI for Informatics, Max Planck Society;

Bernard, Florian
Computer Graphics, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

arXiv:2004.00121.pdf
(Preprint), 5MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Tewari, A., Elgharib, M., Bharaj, G., Bernard, F., Seidel, H.-P., Pérez, P., et al. (2020). StyleRig: Rigging StyleGAN for 3D Control over Portrait Images. Retrieved from https://arxiv.org/abs/2004.00121.


Cite as: https://hdl.handle.net/21.11116/0000-0007-B0FC-6
Abstract
StyleGAN generates photorealistic portrait images of faces with eyes, teeth,
hair, and context (neck, shoulders, background), but lacks a rig-like control
over semantic face parameters that are interpretable in 3D, such as face pose,
expressions, and scene illumination. Three-dimensional morphable face models
(3DMMs), on the other hand, offer control over the semantic parameters, but
lack photorealism when rendered and only model the face interior, not other
parts of a portrait image (hair, mouth interior, background). We present the
first method to provide face rig-like control over a pretrained and fixed
StyleGAN via a 3DMM. A new rigging network, RigNet, is trained between the
3DMM's semantic parameters and StyleGAN's input. The network is trained in a
self-supervised manner, without the need for manual annotations. At test time,
our method generates portrait images with the photorealism of StyleGAN and
provides explicit control over the 3D semantic parameters of the face.
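
As a minimal sketch of the idea described in the abstract (not the authors' released code), the following Python/PyTorch snippet illustrates one way such a rigging network could be structured: an MLP that takes a StyleGAN latent code together with a 3DMM parameter vector and predicts an edited latent, which is then fed to the fixed, pretrained generator. The dimensions (512-D latent, 257-D parameter vector), the residual formulation, and the layer sizes are all illustrative assumptions.

import torch
import torch.nn as nn

class RigNetSketch(nn.Module):
    """Hypothetical sketch of a rigging network: maps a StyleGAN latent
    plus 3DMM semantic parameters (pose, expression, illumination, ...)
    to an edited latent. Dimensions and architecture are assumptions."""

    def __init__(self, latent_dim=512, param_dim=257, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + param_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, w, p):
        # Predict an offset to the latent, so the identity mapping
        # (no edit) is easy to represent.
        return w + self.net(torch.cat([w, p], dim=-1))

# Usage: edit a latent code toward target 3DMM parameters.
w = torch.randn(1, 512)   # StyleGAN latent code (illustrative size)
p = torch.randn(1, 257)   # 3DMM parameters (illustrative size)
rig = RigNetSketch()
w_edited = rig(w, p)      # would be passed to the fixed StyleGAN generator

In the self-supervised setup the abstract describes, such a network would be trained without manual annotations, e.g. by comparing 3DMM parameters estimated from the generated images against the target parameters; the exact losses are not specified here.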