
Record


Released

Poster

Perceptual validation of facial animation: The role of prior experience

MPG Authors

Nusseck,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Cunningham,  DW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Wallraven,  C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Nusseck, M., Cunningham, D., Wallraven, C., König, C., & Bülthoff, H. (2005). Perceptual validation of facial animation: The role of prior experience. Poster presented at 28th European Conference on Visual Perception (ECVP 2005), A Coruña, Spain.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D4DF-5
Abstract
Facial expressions play a complex and important role in communication. A complete investigation of how facial expressions are recognised requires that different expressions be systematically and subtly manipulated. For this purpose, we recently developed a photo-realistic facial animation system that uses a combination of facial motion capture and high-resolution 3-D face scans. In order to determine whether the synthetic expressions capture the subtlety of natural facial expressions, we directly compared recognition performance for video sequences of real-world and animated facial expressions (the sequences will be available on our website). Moreover, just as recognition of an incomplete or degraded object can be improved through prior experience with a complete, undistorted version of that object, it is possible that experience with the real-world video sequences may improve recognition of the synthesised expressions. Therefore, we explicitly investigated the effects of presentation order. More specifically, half of the participants saw all of the video sequences followed by the animation sequences, while the other half experienced the opposite order. Recognition of five expressions (agreement, disagreement, confusion, happiness, thinking) was measured with a six-alternative, non-forced-choice task. Overall, recognition performance was significantly higher (p < 0.0001) for the video sequences (93%) than for the animations (73%). A closer look at the data showed that this difference is largely based on a single expression: confusion. As expected, there was an order effect for the animations (p < 0.02): seeing the video sequences improved recognition performance for the animations. Finally, there was no order effect for the real videos (p > 0.14). In conclusion, the synthesised expressions supported recognition performance similarly to real expressions and have proven to be a valuable tool in understanding the perception of facial expressions.