A Development of Gesture Interfaces using Spatial Context Information
Kwon, Doo-Young; Bae, Ki-Tae
Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart environment scenario in which a user interacts through gestures with digital information embedded in physical objects.
Keywords: Spatial Context; Gesture Interface; Gesture Recognition
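The probabilistic fusion the abstract describes can be sketched as a naive Bayes combination: a candidate action's score is its prior multiplied by the likelihoods of the observed gesture and of the two spatial contexts (gesture volume and gesture target). The action labels, context values, and probability tables below are hypothetical illustration data, not from the paper.

```python
def recognize_action(gesture, volume, target, model):
    """Return argmax_a P(a) * P(gesture|a) * P(volume|a) * P(target|a)."""
    best_action, best_score = None, 0.0
    for action, params in model.items():
        score = (params["prior"]
                 * params["gesture"].get(gesture, 1e-6)   # P(gesture | action)
                 * params["volume"].get(volume, 1e-6)     # P(volume | action)
                 * params["target"].get(target, 1e-6))    # P(target | action)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Hypothetical model: a "point" gesture performed near a lamp most
# likely means "toggle_light"; a "wave" in open space means "dismiss".
model = {
    "toggle_light": {
        "prior": 0.5,
        "gesture": {"point": 0.8, "wave": 0.1},
        "volume": {"near_object": 0.9, "open_space": 0.1},
        "target": {"lamp": 0.9, "none": 0.1},
    },
    "dismiss": {
        "prior": 0.5,
        "gesture": {"point": 0.2, "wave": 0.8},
        "volume": {"near_object": 0.2, "open_space": 0.8},
        "target": {"lamp": 0.1, "none": 0.9},
    },
}

print(recognize_action("point", "near_object", "lamp", model))  # toggle_light
```

Because the contexts are treated as conditionally independent given the action, an ambiguous gesture (e.g., a loose "point") can still be disambiguated by where it is performed and what it is aimed at, which is the role the paper assigns to spatial context.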
References
Stephen Brewster, Joanna Lumsden, Marek Bell, Malcolm Hall, and Stuart Tasker. (2003). Multimodal 'eyes-free' interaction techniques for wearable devices. In CHI '03: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 473-480, New York, NY, USA, ACM Press.

Richard A. Bolt. (1980). "Put-that-there": Voice and gesture at the graphics interface. In SIGGRAPH '80: Proceedings of the 7th annual conference on Computer graphics and interactive techniques, pages 262-270, New York, NY, USA, ACM Press.

Diane Cook and Sajal Das. (2004). Smart Environments: Technology, Protocols and Applications. John Wiley & Sons.

Jeremy R. Cooperstock, Sidney S. Fels, William Buxton, and Kenneth C. Smith. (1997). Reactive environments. Commun. ACM, 40(9): 65-73.

Guanling Chen and David Kotz. (2000). A survey of context-aware mobile computing research. Technical report, Hanover, NH, USA.

Anind K. Dey, Raffay Hamid, Chris Beckmann, Ian Li, and Daniel Hsu. (2004). a cappella: programming by demonstration of context-aware applications. In CHI '04: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 33-40, New York, NY, USA, ACM Press.

C. Hummels and P. J. Stappers. (1998). Meaningful gestures for human computer interaction: Beyond hand postures. In FG '98: Proceedings of the 3rd International Conference on Face & Gesture Recognition, page 591, Washington, DC, USA, IEEE Computer Society.

David Heckerman, Dan Geiger, and David Maxwell Chickering. (1994). Learning Bayesian networks: The combination of knowledge and statistical data. In KDD Workshop, pages 85-96.

Wan-rong Jih, Jane Yung-jen Hsu, and Tse-Ming Tsai. (2006). Context-aware service integration for elderly care in a smart environment. In Modeling and Retrieval of Context: Papers from the AAAI Workshop, number WS-06-12, pages 44-48, Boston, Massachusetts, USA, July 16-20.

Doo Young Kwon and Markus Gross. (2007). A Framework for 3D Spatial Gesture Design and Modeling Using a Wearable Input Device. In Proceedings of the 11th IEEE International Symposium on Wearable Computers, pages 95-101.

A. Wilson and S. Shafer. (2003). XWand: UI for intelligent spaces. In Proceedings of ACM CHI Conference on Human Factors in Computing Systems, pages 545-552.