
Record


Released

Poster

Active and passive exploration of faces

MPG Authors
/persons/resource/persons83840

Bülthoff, I.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84088

Mohler, B.
Research Group Space and Body Perception, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (open access)
No open-access full texts are available in PuRe
Supplementary material (open access)
No open-access supplementary materials are available
Citation

Bülthoff, I., Mohler, B., & Thornton, I. (2015). Active and passive exploration of faces. Poster presented at 38th European Conference on Visual Perception (ECVP 2015), Liverpool, UK.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002A-4502-D
Abstract
In most face recognition studies, learned faces are shown without a visible body to passive participants. Here, faces were attached to a body, and participants viewed them either actively or passively before their recognition performance was tested. 3D laser scans of real faces were integrated onto sitting or standing full-bodied avatars placed in a virtual room. In the ‘active’ learning condition, participants viewed the virtual environment through a head-mounted display. Their head position was tracked, allowing them to walk physically from one avatar to the next and to move their heads to look up or down at the standing or sitting avatars. In the ‘passive dynamic’ condition, participants saw a rendering of the visual explorations of the first group. In the ‘passive static’ condition, participants saw static screenshots of the upper bodies in the room. Face orientation congruency (up versus down) was manipulated at test. Faces were recognized more accurately when viewed in a familiar orientation in all learning conditions. While active viewing generally improved performance compared to viewing static faces, passive dynamic observers, who received the same visual information as active observers, performed similarly to them, despite the absence of volitional movements.