An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

Type of content
Conference Contributions - Published
Publisher
University of Canterbury. Human Interface Technology Laboratory.
Date
2006
Authors
Irawati, S.
Green, S.
Billinghurst, Mark
Duenser, A.
Ko, H.
Abstract

This paper presents an evaluation of an augmented reality (AR) multimodal interface that combines speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and our multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture input alone. The results show that combining speech and paddle gestures improves the efficiency of user interaction. Finally, we offer some design recommendations for developing other multimodal AR interfaces.

Citation
Irawati, S., Green, S., Billinghurst, M., Duenser, A., Ko, H. (2006) An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures. Hangzhou, China: 16th International Conference on Artificial Reality and Telexistence (ICAT 2006), 29 Nov-2 Dec 2006. Lecture Notes in Computer Science (LNCS), 4282, Advances in Artificial Reality and Tele-Existence, 272-283.
Keywords
multimodal interaction, paddle gestures, augmented reality, speech input, gesture input, evaluation