Poster

Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence

MPS-Authors
Teramoto, W
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Riecke, BE
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Teramoto, W., & Riecke, B. (2007). Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CD07-D
Abstract
It is well known that people have difficulty recognizing an object from novel views compared to learned views, showing increased response times and/or error rates. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways in which the view of an object can change: either the object rotates in front of a stationary observer (object-movement), or the observer moves around a stationary object (observer-movement). Simons et al. [1] criticized earlier studies for not distinguishing these cases and compared object- and observer-movement directly. They reported that view-dependency was eliminated when novel views resulted from observer-movement rather than object-movement, and suggested that extraretinal (vestibular and proprioceptive) information contributes to object recognition. Recently, however, Zhao et al. [2] reported that the observer's movement from one view to another merely decreased view-dependency without fully eliminating it; furthermore, even this benefit vanished for rotations of 90° instead of 50°. The aim of the present study was to confirm the phenomenon in our virtual reality environment and to further clarify the underlying mechanism by using larger angles of view change (45°-180°, in 45° steps). Two experiments were conducted using an eMagin Z800 3D Visor head-mounted display that was tracked by 16 Vicon MX 13 motion capture cameras. Observers performed sequential-matching tasks. Five novel objects and five mirror-reversed versions of these objects were created by smoothing the edges of Shepard-Metzler objects. A mirror-reversed version of the learned object served as the distractor in Experiment 1 (N=13), whereas in Experiment 2 (N=15) one of the other (i.e., not mirror-reversed) objects was randomly selected as the distractor on each trial. Test views of the objects were manipulated either by viewer movement or by object movement. Both experiments showed a significant overall advantage of viewer movement over object movement; note, however, that performance remained viewpoint-dependent. These results suggest that when observers move, partially advantageous and cost-effective transformation mechanisms are involved, but not the complete, automatic spatial-updating mechanism proposed by Simons et al. [1].
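
The factorial design described in the abstract (movement type × view-change angle × object, with experiment-specific distractor selection) can be summarized in a short sketch. The following Python snippet is a minimal illustration under assumed conventions; all names (make_trials, MOVEMENT_TYPES, the object labels, the number of repetitions) are hypothetical and it is not the authors' actual experiment code.

```python
import itertools
import random

# Hypothetical sketch of the trial design described in the abstract;
# names, counts, and structure are assumptions, not the authors' code.

MOVEMENT_TYPES = ["viewer", "object"]       # who moves between learn and test view
ANGLES = [45, 90, 135, 180]                 # view change in degrees (45°-180°, 45° steps)
OBJECTS = [f"smoothed_SM_{i}" for i in range(5)]  # 5 smoothed Shepard-Metzler objects

def make_trials(experiment, n_repeats=2, seed=0):
    """Build a shuffled trial list for the sequential-matching task.

    experiment 1: distractor is the mirror-reversed version of the learned object.
    experiment 2: distractor is a randomly chosen different (non-mirrored) object.
    """
    rng = random.Random(seed)
    trials = []
    for movement, angle, obj in itertools.product(MOVEMENT_TYPES, ANGLES, OBJECTS):
        for _ in range(n_repeats):
            if experiment == 1:
                distractor = f"mirror_{obj}"
            else:
                distractor = rng.choice([o for o in OBJECTS if o != obj])
            # Half the trials show the target at test, half the distractor.
            target_present = rng.random() < 0.5
            trials.append({
                "movement": movement,        # viewer- vs object-movement condition
                "angle": angle,              # angular difference between learn and test view
                "learned_object": obj,
                "test_object": obj if target_present else distractor,
                "correct_response": "same" if target_present else "different",
            })
    rng.shuffle(trials)
    return trials

trials = make_trials(experiment=1)
print(len(trials), trials[0])
```

Crossing movement type with the four rotation angles in this way is what lets the viewer-movement advantage and the residual viewpoint-dependence reported above be assessed within the same session.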