
Released

Poster

Recognizing Faces in Motion

MPG Authors

Pilz, K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Thornton, IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Vuong, QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

External resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Pilz, K., Thornton, I., Vuong, Q., & Bülthoff, H. (2004). Recognizing Faces in Motion. Poster presented at 27th European Conference on Visual Perception, Budapest, Hungary.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D861-A
Abstract
In addition to the nonrigid and rigid motions of the head, both of which have been shown to facilitate face recognition, another familiar type of movement occurs whenever a person approaches you. We investigated whether this kind of looming motion has any effect on recognition performance. We used 12 different male head models from the MPI face database and placed them on the same 3-D body model. These figures were animated to approach the observer. Subjects were familiarised either with the last frame of the rendered sequence, which essentially showed the head and shoulders of the person, or with the entire video sequence. To facilitate learning, observers filled out a questionnaire relating to the facial features and the personality of each individual, a familiarisation phase that lasted approximately 1 h. After a brief intervening task, observers were shown 12 pairs of static faces, one old and one new in each pair, and asked to identify the familiar individual. These test faces were rendered from a novel viewpoint to increase the demands of the recognition task. Our initial results indicate a robust advantage for dynamic familiarisation.