
Released

Poster

Understanding Objects and Actions: a VR Experiment

MPS-Authors
/persons/resource/persons84298

Wallraven, C.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84200

Schultze, M.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84088

Mohler, B.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84285

Volkova, E.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83780

Alexandrova, I.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

JVRC-2010-Wallraven.pdf
(Any fulltext), 302KB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Wallraven, C., Schultze, M., Mohler, B., Volkova, E., Alexandrova, I., Vatakis, A., et al. (2010). Understanding Objects and Actions: a VR Experiment. Poster presented at 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Stuttgart, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BE90-E
Abstract
The human capability to interpret actions and to recognize objects is still far ahead of that of any technical system. Thus, a deeper understanding of how humans interpret human (inter)actions lies at the core of building better artificial cognitive systems. Here, we present results from a first series of perceptual experiments that show how humans are able to infer scenario classes, as well as individual actions and objects, from computer animations of everyday situations. The animations were created from a unique corpus of real-life recordings made in the European project POETICON, using motion-capture technology and advanced VR programming that allowed for full control over all aspects of the final rendered data.