Record


Released

Journal Article

Learning to use an invisible visual signal for perception

MPG Authors
Di Luca, M (/persons/resource/persons83885)
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Ernst, M (/persons/resource/persons83906)
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Di Luca, M., Ernst, M., & Backus, B. (2010). Learning to use an invisible visual signal for perception. Current Biology, 20(20), 1860-1863. doi:10.1016/j.cub.2010.09.047.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-BE50-D
Abstract
How does the brain construct a percept from sensory signals? One approach to this fundamental question is to investigate perceptual learning as induced by exposure to statistical regularities in sensory signals [1–7]. Recent studies showed that exposure to novel correlations between sensory signals can cause a signal to have new perceptual effects [2, 3]. In those studies, however, the signals were clearly visible. The automaticity of the learning was therefore difficult to determine. Here we investigate whether learning of this sort, which causes new effects on appearance, can be low level and automatic by employing a visual signal whose perceptual consequences were made invisible—a vertical disparity gradient masked by other depth cues. This approach excluded high-level influences such as attention or consciousness. Our stimulus for probing perceptual appearance was a rotating cylinder. During exposure, we introduced a new contingency between the invisible signal and the rotation direction of the cylinder. When subsequently presenting an ambiguously rotating version of the cylinder, we found that the invisible signal influenced the perceived rotation direction. This demonstrates that perception can rapidly undergo "structure learning" by automatically picking up novel contingencies between sensory signals, thus automatically recruiting signals for novel uses during the construction of a percept.