
Item Details

Released

Conference Paper

Generalization to Novel Views from a Single Face Image

MPS-Authors
Vetter, T (/persons/resource/persons84280)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Blanz, V (/persons/resource/persons83815)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no publicly available full texts
Supplementary Material (public)
There is no public supplementary material available
Citation

Vetter, T., & Blanz, V. (1998). Generalization to Novel Views from a Single Face Image. In J. Wechsler, J. Phillips, V. Bruce, F. Fogelman Soulié, & T. Huang (Eds.), Face Recognition: From Theory to Applications (pp. 310-326). Berlin, Germany: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-E9BC-8
Abstract
When only a single image of a face is available, can we generate new images of the face across changes in viewpoint or illumination? The approach presented in this paper acquires its knowledge about possible image changes from other faces and transfers this prior knowledge to a novel face image. In previous work we introduced the concept of linear object classes (Vetter and Poggio, 1997; Vetter, 1997): in an image-based approach, a flexible image model of faces was used to synthesize new images of a face when only a single 2D image of that face is available.
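To illustrate the linear object class idea (this is a minimal sketch, not the authors' implementation), the following assumes each face image is available as a NumPy array and that the example faces are given both in a reference view A and in a target view B; the coefficients fitted in view A are simply reused in view B:

```python
import numpy as np

def fit_coefficients(examples_view_a, novel_view_a):
    """Least-squares coefficients expressing the novel face as a linear
    combination of the example faces, all observed in the same view A."""
    # Each column of X is one vectorized example image in view A.
    X = np.stack([img.ravel() for img in examples_view_a], axis=1)
    y = novel_view_a.ravel()
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def synthesize_novel_view(examples_view_b, coeffs):
    """Re-apply the same coefficients to the example faces rendered in view B;
    under the linear object class assumption this predicts the novel face in view B."""
    X_b = np.stack([img.ravel() for img in examples_view_b], axis=1)
    return (X_b @ coeffs).reshape(examples_view_b[0].shape)

# Toy usage with random stand-in images (100 example faces of size 64x64).
rng = np.random.default_rng(0)
examples_a = [rng.random((64, 64)) for _ in range(100)]
examples_b = [rng.random((64, 64)) for _ in range(100)]
novel_a = rng.random((64, 64))
coeffs = fit_coefficients(examples_a, novel_a)
novel_b_predicted = synthesize_novel_view(examples_b, coeffs)
```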

In this paper we describe a new, general flexible face model which is now “learned” from examples of individual 3D face data (Cyberware scans). In an analysis-by-synthesis loop, the flexible 3D model is matched to the novel face image. Variation of the model parameters, similar to multidimensional morphing, allows new images of the face to be generated in which the viewpoint, illumination, or even the expression is changed.
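A minimal sketch of such an analysis-by-synthesis loop is given below. The `render(params)` function is a hypothetical placeholder for the flexible 3D face model, and the derivative-free optimizer stands in for whatever matching strategy the actual fitting procedure uses; it only illustrates the idea of adjusting model parameters until the synthesized image matches the target.

```python
import numpy as np
from scipy.optimize import minimize

def fit_model_to_image(render, target_image, n_params, max_iter=200):
    """Analysis-by-synthesis sketch: search for model parameters whose rendered
    image best matches the target, measured by a sum-of-squares image error.
    `render(params) -> image` is a placeholder for the flexible 3D face model."""
    def image_error(params):
        return np.sum((render(params) - target_image) ** 2)

    # Start from the "average face" (all coefficients zero) and let a
    # derivative-free optimizer adjust the parameters.
    result = minimize(image_error, x0=np.zeros(n_params), method="Powell",
                      options={"maxiter": max_iter})
    return result.x
```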

The key problem in generating a flexible face model is the computation of dense correspondence between all given example faces. A new correspondence algorithm is described which generalizes existing optic flow algorithms to 3D face data.
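The flavor of extending optic flow to scan data might be sketched as follows, assuming the 3D scans have been resampled onto a common cylindrical (H, W) grid whose channels combine radius and texture values. The single-scale, Lucas-Kanade-style estimate below is only an illustration under those assumptions; the paper's actual coarse-to-fine correspondence algorithm differs.

```python
import numpy as np

def dense_flow(reference, target, window=5, eps=1e-6):
    """Estimate per-pixel (dy, dx) displacements between two cylindrical face
    maps of shape (H, W, C), where the C channels may mix radius and texture.
    A local least-squares (Lucas-Kanade-style) flow estimate; the multiscale
    pyramid used in practice is omitted for brevity."""
    # Spatial derivatives along the grid axes and the difference between maps.
    Iy, Ix = np.gradient(reference, axis=(0, 1))
    It = target - reference
    H, W = reference.shape[:2]
    flow = np.zeros((H, W, 2))
    r = window // 2
    for y in range(r, H - r):
        for x in range(r, W - r):
            sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
            gx = Ix[sl].reshape(-1)
            gy = Iy[sl].reshape(-1)
            gt = It[sl].reshape(-1)
            A = np.stack([gx, gy], axis=1)
            # Regularized normal equations of the local flow estimate.
            AtA = A.T @ A + eps * np.eye(2)
            b = -A.T @ gt
            dx, dy = np.linalg.solve(AtA, b)
            flow[y, x] = (dy, dx)
    return flow
```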