Synthetic vision-based perceptual attention for augmented reality agents

We describe a model of synthetic vision-based perceptual attention for autonomous agents in augmented reality (AR) environments. Because virtual and physical objects coexist in such environments, agents must adaptively perceive and attend to the objects relevant to their goals. To let agents perceive their surroundings, our approach determines the currently visible objects from a scene description of the virtual and physical objects configured in the camera's viewing area. A degree of attention is then assigned to each perceived object based on its similarity to the target objects related to the agent's goals, so the agent can focus on a reduced set of perceived objects according to the estimated degree of attention. Moreover, by continuously and selectively updating its perceptual memory, the model eliminates the processing load associated with previously observed objects. To demonstrate the effectiveness of the approach, we implemented an animated character that was overlaid on a miniature campus model in real time and that attended to the building blocks relevant to given tasks. Experiments showed that the model reduces the character's perceptual load at any time, even as the surroundings change. Copyright (C) 2010 John Wiley & Sons, Ltd.
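The abstract outlines a three-step pipeline: score each perceived object by its similarity to goal-relevant targets, keep only objects whose degree of attention is high enough, and use a perceptual memory to skip objects already processed. The following is a minimal sketch of that idea only; the paper does not publish its code, so all names, the feature representation, and the similarity measure here are illustrative assumptions.

```python
# Hypothetical sketch of the attention model summarized in the abstract.
# Each perceived object receives a degree of attention based on feature
# similarity to goal-relevant target objects; the agent then attends to
# the reduced set above a threshold, and a perceptual memory prevents
# reprocessing of previously observed objects on later frames.
from dataclasses import dataclass


@dataclass(frozen=True)
class PerceivedObject:
    name: str
    features: tuple  # e.g. (color, shape) codes -- an assumed encoding


def similarity(a: tuple, b: tuple) -> float:
    """Fraction of matching feature slots (illustrative measure)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)


def degree_of_attention(obj: PerceivedObject, targets: list) -> float:
    """Attention degree = best similarity to any goal-relevant target."""
    return max(similarity(obj.features, t) for t in targets)


def attend(visible, targets, memory, threshold=0.5):
    """Return goal-relevant visible objects not already in perceptual
    memory, and update the memory so they are skipped next frame."""
    focused = [o for o in visible
               if o.name not in memory
               and degree_of_attention(o, targets) >= threshold]
    memory.update(o.name for o in focused)
    return focused
```

On a second call with the same scene, objects recorded in the memory are filtered out before scoring, which is the load-reduction effect the abstract attributes to the continuously updated perceptual memory.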
Publisher
JOHN WILEY & SONS LTD
Issue Date
2010-05
Language
English
Article Type
Article; Proceedings Paper
Citation

COMPUTER ANIMATION AND VIRTUAL WORLDS, v.21, no.3-4, pp. 463-472

ISSN
1546-4261
DOI
10.1002/cav.368
URI
http://hdl.handle.net/10203/96183
Appears in Collection
GCT-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
