
Released

Conference Paper

Learning to express "left-right" & "front-behind" in a sign versus spoken language

MPS-Authors

Sumer, Beyza
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;


Zwitserlood, Inge
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;


Ozyurek, Asli
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;

Fulltext (public)

Sumer_etal_2014_CogSci.pdf
(Publisher version), 399KB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0019-88D2-1
Abstract
Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind) compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).