Journal of the Ergonomics Society of Korea
Journal Basic Information
Publisher: The Ergonomics Society of Korea
Volume & Issues
Volume 31, Issue 6 - Dec 2012
Volume 31, Issue 5 - Oct 2012
Volume 31, Issue 4 - Aug 2012
Volume 31, Issue 3 - Jun 2012
Volume 31, Issue 2 - Apr 2012
Volume 31, Issue 1 - Feb 2012
Interacting with Touchless Gestures: Taxonomy and Requirements
Kim, Huhn ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 475~481
DOI : 10.5143/JESK.2012.31.4.475
Objective: The aim of this study is to develop a taxonomy for classifying diverse touchless gestures and to establish the design requirements that should be considered when determining suitable gestures in gesture-based interaction design. Background: Recently, the applicability of touchless gestures has been increasing as the relevant technologies advance. However, before touchless gestures can be widely applied to various devices and systems, an understanding of the nature of human gestures and their standardization is a prerequisite. Method: In this study, diverse gesture types were collected from the literature and, based on those, a new taxonomy for classifying touchless gestures was proposed. In addition, many gesture-based interaction design cases and studies were analyzed. Results: The proposed taxonomy consists of two dimensions: shape (deictic, manipulative, semantic, or descriptive) and motion (static or dynamic). The case analysis based on the taxonomy showed that manipulative and dynamic gestures were the most widely applied. Conclusion: The four core requirements for valuable touchless gestures were intuitiveness, learnability, convenience, and discriminability. Application: The gesture taxonomy can be applied to produce alternative touchless gestures, and the four design requirements can be used as criteria for evaluating those alternatives.
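The shape-by-motion taxonomy above lends itself to a simple two-dimensional encoding. The following is a minimal sketch in Python, assuming a straightforward mapping of the categories named in the abstract; the example gesture is illustrative, not from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Shape(Enum):
    DEICTIC = "deictic"            # pointing at a target
    MANIPULATIVE = "manipulative"  # acting on a (virtual) object
    SEMANTIC = "semantic"          # carrying a conventional meaning
    DESCRIPTIVE = "descriptive"    # depicting a size, shape, or path

class Motion(Enum):
    STATIC = "static"    # a held posture
    DYNAMIC = "dynamic"  # a movement over time

@dataclass
class TouchlessGesture:
    name: str
    shape: Shape
    motion: Motion

# The case analysis found manipulative, dynamic gestures most widely applied.
grab_and_drag = TouchlessGesture("grab-and-drag", Shape.MANIPULATIVE, Motion.DYNAMIC)
print(grab_and_drag)
```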
Three Dimensional Hand Gesture Taxonomy for Commands
Choi, Eun-Jung ; Lee, Dong-Hun ; Chung, Min-K. ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 483~492
DOI : 10.5143/JESK.2012.31.4.483
Objective: The aim of this study is to suggest a three-dimensional (3D) hand gesture taxonomy that systematically organizes users' intentions behind deriving a certain gesture. Background: With advances in gesture recognition technology, various researchers have focused on deriving intuitive gestures for commands from users. In most previous studies, however, the users' reasons for deriving a certain gesture for a command were used only as a reference for grouping various gestures. Method: A total of eleven studies that categorized gestures accompanied by speech were investigated. Also, a case study with thirty participants was conducted to understand the specific gesture features derived from users. Results: Through the literature review, a total of nine gesture features were extracted. After the case study, the nine gesture features were narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture features was developed. Application: The taxonomy might be used as a checklist for understanding users' reasons.
A Study on Structuring and Classification of Input Interaction
Pan, Young-Hwan ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 493~498
DOI : 10.5143/JESK.2012.31.4.493
Objective: The purpose of this study is to suggest a hierarchical structure with three layers: input task, input interaction, and input device. Background: Understanding input interaction is very helpful when designing an interface. Method: We built a three-layered input structure model based on an empirical approach and applied it to gesture interaction with a TV. Result: We categorized the input tasks into six elementary tasks, including select, position, orient, text, and quantify. The five interactions described in this paper could accomplish the full range of input interaction, although the criteria for classification were not consistent. We analyzed the Microsoft Kinect with this structure. Conclusion: The input interactions of command, 4-way, cursor, touch, and intelligence form a basic interaction structure for understanding input systems. Application: It is expected that the model can be used to design new input interactions and user interfaces.
A Study on Developmental Direction of Interface Design for Gesture Recognition Technology
Lee, Dong-Min ; Lee, Jeong-Ju ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 499~505
DOI : 10.5143/JESK.2012.31.4.499
Objective: To study the transformation of interaction between mobile machines and users through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the "command line" to the "mouse", and now "touch" and "gesture recognition" are being researched and used. In the future, the technology is expected to evolve into "multi-modal" interfaces, fusing the visual and auditory senses, and "3D multi-modal" interfaces, where three-dimensional virtual worlds and brain waves are used. Method: Within the development of computer interfaces, which follows the evolution of mobile machines, the trends and development of actively researched gesture interfaces and related technologies are studied comprehensively. Based on gesture-based information gathering techniques, they are separated into four categories: sensor, touch, visual, and multi-modal gesture interfaces. Each category is examined through its technology trends and actual examples. Through these methods, the transformation of interaction between mobile machines and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between existing static machines and users. Thus, it is an important element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help in developing the gesture interface designs currently in use.
Conditions of Applications, Situations and Functions Applicable to Gesture Interface
Ryu, Tae-Beum ; Lee, Jae-Hong ; Song, Joo-Bong ; Yun, Myung-Hwan ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 507~513
DOI : 10.5143/JESK.2012.31.4.507
Objective: This study developed a hierarchy of the conditions of applications (devices), situations, and functions that are applicable to gesture interfaces. Background: The gesture interface is one of the promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although there have been many studies on developing new gesture-based devices and gesture interfaces, little was known about which applications, situations, and functions are applicable to gesture interfaces. Method: This study searched about 120 papers relevant to designing and applying gesture interfaces and vocabularies to find the conditions of applications, situations, and functions applicable to gestures. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy of them was developed to evaluate the applicability of applications, situations, and functions to gesture interfaces. Results: This study summarized 10, 10, and 6 conditions of applications, situations, and functions, respectively. In addition, the hierarchy of gesture-applicable conditions for applications, situations, and functions was developed based on the semantic similarity, ordering, and serial or parallel relationships among them. Conclusion: This study collected the gesture-applicable conditions of applications, situations, and functions, and a hierarchy of them was developed to evaluate the applicability of gesture interfaces. Application: The conditions and hierarchy can be used in developing a framework and detailed criteria for evaluating the applicability of applications, situations, and functions. Moreover, they can enable designers of gesture interfaces and vocabularies to determine which applications, situations, and functions are applicable to gesture interfaces.
Towards Establishing a Touchless Gesture Dictionary based on User Participatory Design
Song, Hae-Won ; Kim, Huhn ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 515~523
DOI : 10.5143/JESK.2012.31.4.515
Objective: The aim of this study is to investigate users' intuitive stereotypes for touchless gestures and to establish a gesture dictionary that can be applied to gesture-based interaction designs. Background: Recently, interaction based on touchless gestures has been emerging as an alternative for natural interaction between humans and systems. However, for touchless gestures to become a universal interaction method, studies on which kinds of gestures are intuitive and effective are a prerequisite. Method: In this study, four devices (TV, audio, computer, car navigation) and sixteen basic operations (power on/off, previous/next page, volume up/down, list up/down, zoom in/out, play, cancel, delete, search, mute, save) were drawn from a focus group interview and a survey as applicable domains for touchless gestures. Then, a user participatory design was performed: the participants were asked to design three gestures suitable for each operation on each device, and they evaluated the intuitiveness, memorability, convenience, and satisfaction of their derived gestures. Through the participatory design, the agreement scores, frequencies, and planning times of each distinct gesture were measured. Results: The derived gestures did not differ across the four devices. However, diverse but common gestures were derived depending on the kind of operation. In particular, manipulative gestures were suitable for all kinds of operations. In contrast, semantic or descriptive gestures were appropriate for one-shot operations such as power on/off, play, cancel, or search. Conclusion: The touchless gesture dictionary was established by mapping intuitive and valuable gestures onto each operation. Application: The dictionary can be applied to interaction designs based on touchless gestures. Moreover, it can be used as a basic reference for standardizing touchless gestures.
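The abstract reports per-operation agreement scores but does not give the formula. A common choice in gesture elicitation work is the agreement score from Wobbrock et al.'s guessability studies; the sketch below assumes that definition, and the proposal data are hypothetical.

```python
from collections import Counter

def agreement_score(proposals):
    """proposals: list of gesture labels proposed by participants for one operation.

    Assumed definition: A = sum over identical-gesture groups of (|group| / n)^2,
    so A = 1.0 when everyone proposes the same gesture.
    """
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# Hypothetical proposals for "volume up" from ten participants.
volume_up = ["palm-raise"] * 6 + ["circle-clockwise"] * 3 + ["point-up"]
print(f"A(volume up) = {agreement_score(volume_up):.2f}")  # 0.36 + 0.09 + 0.01 = 0.46
```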
The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV
Jo, Chun-Ik ; Lim, Ji-Hyoun ; Park, Jun ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 525~531
DOI : 10.5143/JESK.2012.31.4.525
Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, selecting the gesture-command pairing is critical to its learnability and usability. Subjective preference and agreement scores from a previous study (Lim et al., 2012) were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures, sixteen gestures in total, were listed by agreement score and preference data. Fourteen participants, divided into two groups, memorized each set of gesture-command pairs and performed the gestures. For a given command, the time to recall the paired gesture was collected. Results: The average recall time for the initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement score influenced the learning of gesture-command pairs. Application: This study could be applied to any device that adopts a gesture interaction system for device control.
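The abstract names an average time model and a marginal time model without giving their equations. As a hedged illustration of estimating a learning rate from recall times, the sketch below fits the classic power law of practice, T_n = T_1 · n^(-b), to hypothetical per-trial recall times.

```python
import numpy as np

trials = np.arange(1, 9)  # trial index n
recall_s = np.array([4.8, 3.5, 2.9, 2.6, 2.3, 2.2, 2.0, 1.9])  # seconds, hypothetical

# Fit log(T_n) = log(T_1) - b * log(n); the slope of the fit is -b.
slope, log_t1 = np.polyfit(np.log(trials), np.log(recall_s), 1)
print(f"T1 ~= {np.exp(log_t1):.2f} s, learning exponent b = {-slope:.2f}")
# A larger b means the gesture-command pairing is learned faster.
```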
A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device
Lee, Hyun-Soo ; Kim, Sang-Ho ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 533~540
DOI : 10.5143/JESK.2012.31.4.533
Objective: This paper proposes a framework for hand gesture-based interface design. Background: While the modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, the implementation of a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design of feature vectors, learning models, and their tests. Even though there have been remarkable advances in this field, neglecting ergonomics and users' cognition results in several problems, including uncomfortable user behaviors. Method: In order to incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. Then, the obtained hand gestures are converted into a series of hand behaviors, called Handycon, which is mapped onto several functions in a mobile device. Results: The Handycon model guarantees easy user behaviors and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to developing hand gesture-based contactless interface models that consider the compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.
A Notation Method for Three Dimensional Hand Gesture
Choi, Eun-Jung ; Kim, Hee-Jin ; Chung, Min-K. ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 541~550
DOI : 10.5143/JESK.2012.31.4.541
Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with the commands of products, various studies have tried to derive gestures from users. In such cases, various gestures for a command are derived because of users' varied experiences. Thus, organizing the gestures systematically and identifying similar patterns among them have become important issues. Method: Related studies on gesture taxonomy and sign language notation were investigated. Results: Through the literature review, a total of five elements of static gestures were selected, and a total of three forms of dynamic gestures were identified. Temporal variability (repetition) was also selected. Conclusion: A notation method that follows a combination sequence of the gesture elements was suggested. Application: The notation method for three-dimensional hand gestures might be used to describe and organize user-defined gestures systematically.
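Because the abstract does not name the five static elements or the three dynamic forms, the encoding below is only a sketch with hypothetical placeholder fields, showing how a combination-sequence notation of static elements, a dynamic form, and repetition might be represented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StaticElements:
    # Five elements of a static gesture; these field names are assumed
    # placeholders, not the paper's actual element set.
    handedness: str        # e.g. "right"
    hand_shape: str        # e.g. "open-palm", "fist"
    palm_orientation: str  # e.g. "forward"
    location: str          # e.g. "shoulder-level"
    finger_config: str     # e.g. "all-extended"

@dataclass
class GestureNotation:
    start: StaticElements
    dynamic_form: Optional[str] = None  # one of three forms, e.g. a movement path
    repetition: int = 1                 # temporal variability (repetition)

wave = GestureNotation(
    StaticElements("right", "open-palm", "forward", "shoulder-level", "all-extended"),
    dynamic_form="path",
    repetition=3,
)
print(wave)
```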
The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System
Kim, Jun-Ho ; Lim, Ji-Hyoun ; Moon, Sung-Hyun ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 551~556
DOI : 10.5143/JESK.2012.31.4.551
Objective: This study examines the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it builds on advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected, and 20 gestures were listed. Twelve participants performed each of the 20 gestures while being given the three types of visual feedback in turn. Results: Participants made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was the most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study could be applied to any device that uses visual feedback for device control. Large feedback produced shorter motion traces and faster, less time-consuming gestures than small feedback, so large visual feedback is recommended for situations requiring fast actions, whereas smaller visual feedback is recommended for situations requiring elaborate actions.
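The three reported measures are straightforward to compute from sampled hand positions. Below is a minimal sketch, assuming the tracker outputs (time, x, y) samples; the numbers are hypothetical.

```python
import numpy as np

def trace_metrics(samples):
    """samples: array of shape (n, 3) with columns (time_s, x, y)."""
    t, xy = samples[:, 0], samples[:, 1:]
    segments = np.diff(xy, axis=0)
    length = np.sum(np.linalg.norm(segments, axis=1))  # hand trace length
    elapsed = t[-1] - t[0]                             # elapsed time
    return length, elapsed, length / elapsed           # mean speed of motion

samples = np.array([[0.00, 0, 0], [0.10, 3, 4], [0.20, 6, 8]])
print(trace_metrics(samples))  # -> length 10.0, elapsed 0.2 s, speed 50.0
```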
An Outlook for Interaction Experience in Next-generation Television
Kim, Sung-Woo ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 557~565
DOI : 10.5143/JESK.2012.31.4.557
Objective: This paper focuses on the new trend of applying NUI (natural user interface) techniques such as gesture interaction to television, and investigates the design improvements needed in their application. The intention is to find a better design direction for NUI in the television context, which will help make the new features and behavioral changes of next-generation television practically usable and meaningful user experience elements. Background: Traditional television is rapidly evolving into next-generation television under the influence of "smartness" from the mobile domain. A number of new features and behavioral changes arising from this evolution are coming to be characterized as the new experience elements of next-generation television. Method: A series of expert reviews by television UX professionals, based on the AHP (Analytic Hierarchy Process), was conducted to check the "relative appropriateness" of applying gesture interaction to a number of selected television user experience scenarios. Conclusion: It is critical not to apply new interaction techniques like gesture to television indiscriminately. Doing so may be effective for demonstrating new technology but generally results in a poor user experience; consistent validation of practical appropriateness in a real context is imperative. Application: This research will be helpful in applying gesture interaction to next-generation television to bring about an optimal user experience.
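For readers unfamiliar with AHP, the sketch below shows the standard geometric-mean approximation of priority weights from a pairwise comparison matrix. The matrix values and scenario names are hypothetical, not the experts' actual judgments.

```python
import numpy as np

# Pairwise "relative appropriateness" judgments for three hypothetical TV
# scenarios (channel change, volume, text entry): A[i, j] = how much more
# appropriate gesture control is for scenario i than for scenario j.
A = np.array([[1.0,  2.0,  5.0],
              [0.5,  1.0,  4.0],
              [0.2,  0.25, 1.0]])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
weights = gm / gm.sum()                    # normalized priority weights
print(weights.round(3))                    # e.g., channel/volume rank well above text entry
```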
The Natural Way of Gestures for Interacting with Smart TV
Choi, Jin-Hae ; Hong, Ji-Young ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 567~575
DOI : 10.5143/JESK.2012.31.4.567
Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factors are most important for controlling behavior. Background: Many TV companies are trying to find a simple control method for the complex smart TV. Although plenty of gesture studies propose possible alternatives to resolve this pain point, no gesture work is yet fitted to the smart TV market, so optimal gestures for it still need to be found. Method: (1) Eliciting core control scenes through an in-house study. (2) Observing and analyzing 20 users' natural behavior across types of hand-held devices and control scenes. We also built taxonomies for the gestures. Results: Users made more manipulative gestures than symbolic gestures when attempting continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give users the mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work might help in making gesture interaction guidelines for smart TV.
Gesture based Natural User Interface for e-Training
Lim, C.J. ; Lee, Nam-Hee ; Jeong, Yun-Guen ; Heo, Seung-Il ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 577~583
DOI : 10.5143/JESK.2012.31.4.577
Objective: This paper describes the process and results of developing a gesture recognition-based natural user interface (NUI) for a vehicle maintenance e-Training system. Background: E-Training refers to education and training that builds and improves the capabilities necessary to perform tasks by using information and communication technologies (simulation, 3D virtual reality, and augmented reality), devices (PC, tablet, smartphone, and HMD), and environments (wired/wireless internet and cloud computing). Method: Palm movement captured by a depth camera is used as a pointing device, and finger movement, extracted using the OpenCV library, serves as the selection protocol. Results: The proposed NUI allows trainees to control objects, such as cars and engines, on a large screen through gesture recognition. In addition, it includes a learning environment for understanding the procedures for assembling or disassembling certain parts. Conclusion: Future work concerns the implementation of gesture recognition for multiple trainees at once. Application: The results of this interface can be applied not only to e-Training systems but also to other systems, such as digital signage, tangible games, and the control of 3D contents.
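The abstract says only that finger movement is extracted "using the OpenCV library." One common approach, sketched below under that assumption, counts convexity defects on the segmented hand contour; the depth threshold is a guess, and the OpenCV 4.x findContours signature is assumed.

```python
import cv2
import numpy as np

def count_fingers(mask):
    """mask: 8-bit binary image with the hand already segmented from the depth map."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)          # largest blob = the hand
    hull = cv2.convexHull(hand, returnPoints=False)    # hull as contour indices
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep valleys between extended fingers show up as large defect depths.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)       # depth is fixed-point * 256
    return deep + 1 if deep else 0
```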
Design of Contactless Gesture-based Rhythm Action Game Interface for Smart Mobile Devices
Ju, Da-Young ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 585~591
DOI : 10.5143/JESK.2012.31.4.585
Objective: The aim of this study is to propose a contactless gesture-based interface for smart mobile devices, especially for rhythm action games. Background: Most existing approaches to interaction in smart mobile games rely on tapping the touch screen. However, that approach is undesirable for some users and in some situations, for example for disabled users, or because of the inconvenience of having to touch or tap a specific device. More importantly, a new interaction can open new possibilities for a stagnant game genre. Method: This paper presents a smart mobile game with contactless gesture-based interaction and interfaces built on computer vision technology. Discovering gestures that are easy to recognize and investigating an interaction system that fits games on smart mobile devices were conducted as preliminary studies. A combination of augmented reality techniques and contactless gesture interaction was also attempted. Results: The rhythm game allows a user to interact with smart mobile devices using hand gestures, without touching or tapping the screen. Moreover, users can have as much fun in the game as in other games. Conclusion: Evaluation results show that users make few errors and that the game can recognize gestures with quite high precision in real time. Therefore, contactless gesture-based interaction has potential for smart mobile games. Application: The results are applied to a commercial game application.
Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool
Ha, Sang-Ho ; Park, So-Young ; Hong, Hye-Soo ; Kim, Nam-Hun ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 593~599
DOI : 10.5143/JESK.2012.31.4.593
Objective: This study aims to implement a non-contact gesture-based interface for presentations and to analyze the effect of the proposed interface as an information transfer support device. Background: Recently, research on control devices using gesture recognition or speech recognition has been conducted alongside rapid technological growth in the UI/UX area and the appearance of smart service products that require a new human-machine interface. However, relatively little quantitative research on the practical effects of this new interface type has been done, while work on system implementation is very popular. Method: The system presented in this study is implemented with the KINECT sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments by giving several lectures to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (KINECT-based presentation control), evaluating their interest and immersion with respect to the lecture contents and lecturing methods, and analyzing their understanding of the lecture contents. Result: Using ANOVA, we examine whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the difficulty level of the contents. Conclusion: We find that a non-contact gesture-based interface is a meaningful supportive device when delivering easy and simple information. However, the effect can vary with the contents and the difficulty level of the information provided. Application: The results presented in this paper might help in designing new human-machine(computer) interfaces for communication support tools.
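As a hedged illustration of the ANOVA step, the sketch below compares hypothetical comprehension scores between the two lecture rooms for one difficulty level; these are not the study's data.

```python
from scipy.stats import f_oneway

keyboard_easy = [82, 78, 85, 80, 77]  # traditional room, easy content (hypothetical)
kinect_easy   = [88, 84, 90, 86, 83]  # KINECT-controlled room, easy content (hypothetical)

f_stat, p_value = f_oneway(keyboard_easy, kinect_easy)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# Repeating the test per difficulty level shows where the gesture interface helps.
```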
Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System
Kim, Song-Yi ; Noh, Sue-Jin ; Kim, Jin-Man ; Whang, Min-Cheol ; Lee, Eui-Chul ;
Journal of the Ergonomics Society of Korea, volume 31, issue 4, 2012, Pages 601~607
DOI : 10.5143/JESK.2012.31.4.601
Objective: The aim of this study is to classify intentional versus natural blinks in a vision-based eye tracking system. By implementing the classification method, we expect that an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Currently, eye tracking is widely used to increase users' immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there has been no breakthrough method for selection interaction. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images, including intentional and natural blinks, from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye opening. Then, the classification threshold was determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking system environment. Also, the detection accuracy in a non-wearable camera environment was 92.9% with the same SVM classifier. Conclusion: By combining the two features using an SVM, we could implement an accurate selection interaction method in a vision-based eye tracking system. Application: The results of this research might help improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
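Below is a minimal sketch of the two-feature SVM classifier the abstract describes, using scikit-learn; the feature values are hypothetical, whereas the actual system was trained on eye images from 12 subjects.

```python
import numpy as np
from sklearn.svm import SVC

# Features per blink: [eye-closed duration (s), pupil size variation after opening]
X = np.array([[0.15, 0.05], [0.18, 0.04], [0.12, 0.06],   # natural blinks
              [0.55, 0.20], [0.62, 0.25], [0.48, 0.18]])  # intentional blinks
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = natural, 1 = intentional

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.50, 0.22]]))  # -> [1]: treated as a selection "click"
```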