
Released

Paper

PIE: Portrait Image Embedding for Semantic Control

MPS-Authors

Tewari, Ayush (/persons/resource/persons206546)
Computer Graphics, MPI for Informatics, Max Planck Society

Elgharib, Mohamed (/persons/resource/persons229949)
Computer Graphics, MPI for Informatics, Max Planck Society

Mallikarjun B R (/persons/resource/persons239545)
Computer Graphics, MPI for Informatics, Max Planck Society

Bernard, Florian (/persons/resource/persons214986)
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter (/persons/resource/persons45449)
Computer Graphics, MPI for Informatics, Max Planck Society

Zollhöfer, Michael (/persons/resource/persons136490)
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian (/persons/resource/persons45610)
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2009.09485.pdf (Preprint), 12 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Tewari, A., Elgharib, M., Mallikarjun B R, Bernard, F., Seidel, H.-P., Pérez, P., et al. (2020). PIE: Portrait Image Embedding for Semantic Control. Retrieved from https://arxiv.org/abs/2009.09485.


Cite as: https://hdl.handle.net/21.11116/0000-0007-B117-7
Abstract
Editing of portrait images is a very popular and important research topic
with a large variety of applications. For ease of use, control should be
provided via a semantically meaningful parameterization that is akin to
computer animation controls. The vast majority of existing techniques do not
provide such intuitive and fine-grained control, or only enable coarse editing
of a single isolated control parameter. Very recently, high-quality
semantically controlled editing has been demonstrated, however, only on
synthetically created StyleGAN images. We present the first approach for
embedding real portrait images in the latent space of StyleGAN, which allows
for intuitive editing of the head pose, facial expression, and scene
illumination in the image. Semantic editing in parameter space is achieved
based on StyleRig, a pretrained neural network that maps the control space of a
3D morphable face model to the latent space of the GAN. We design a novel
hierarchical non-linear optimization problem to obtain the embedding. An
identity preservation energy term allows spatially coherent edits while
maintaining facial integrity. Our approach runs at interactive frame rates and
thus allows the user to explore the space of possible edits. We evaluate our
approach on a wide set of portrait photos, compare it to the current state of
the art, and validate the effectiveness of its components in an ablation study.