Gesture features

Three Dimensional Hand Gesture Taxonomy for Commands

  • Choi, Eun-Jung; Lee, Dong-Hun; Chung, Min-K.
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.483-492, 2012
  • Objective: The aim of this study is to suggest a three-dimensional (3D) hand gesture taxonomy that systematically organizes the reasons behind users' decisions when deriving a certain gesture. Background: With advances in gesture-recognition technology, many researchers have focused on deriving intuitive gestures for commands from users. In most previous studies, however, the users' reasons for deriving a certain gesture for a command were used only as a reference for grouping gestures. Method: Eleven studies that categorized gestures accompanying speech were reviewed, and a case study with thirty participants was conducted to examine the gesture-features that users actually drew on. Results: The literature review yielded nine gesture-features, which the case study narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture-features was developed. Application: The taxonomy might serve as a checklist for understanding users' reasons for deriving gestures.

A Study on Gesture Recognition Using Principal Factor Analysis (주 인자 분석을 이용한 제스처 인식에 관한 연구)

  • Lee, Yong-Jae; Lee, Chil-Woo
    • Journal of Korea Multimedia Society, v.10 no.8, pp.981-996, 2007
  • In this paper, we describe a method that recognizes gestures by extracting motion-feature information from sequential gesture images with principal factor analysis. In the algorithm, a two-dimensional silhouette region containing the human gesture is first segmented, and geometric features are extracted from it. Principal factor analysis then selects the global features that most effectively express the gestures as key features. The motion-history information, which represents the temporal variation of the extracted features, constructs a gesture subspace. Finally, the model feature values projected into this gesture space are transformed into discrete state symbols by a grouping algorithm; these serve as input symbols for an HMM, and an input gesture is recognized as the model gesture with the highest probability. The proposed method achieves a higher recognition rate than appearance-based methods that use only the shape information of the human body, or methods that extract features intuitively from complicated gestures, because it constructs gesture models from the feature factors with the highest contribution rates identified by principal factor analysis.
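
The pipeline this abstract describes (per-frame geometric features, projection onto principal axes, then a symbol sequence for an HMM) can be sketched roughly as follows. The feature count, frame count, and the SVD-based computation of principal axes are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def principal_factors(X, k):
    """Top-k principal axes (k x features) of the frame-by-feature matrix X."""
    Xc = X - X.mean(axis=0)                      # center each geometric feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def project(X, axes):
    """Trajectory of the gesture in the reduced subspace (frames x k)."""
    return (X - X.mean(axis=0)) @ axes.T

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                    # 60 frames, 12 mock features
axes = principal_factors(X, k=3)
Z = project(X, axes)                             # trajectory fed to the HMM stage
```

In the paper's terms, `Z` would then be quantized by a grouping algorithm into discrete state symbols before HMM decoding.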

Implementation of Pen-Gesture Recognition System for Multimodal User Interface (멀티모달 사용자 인터페이스를 위한 펜 제스처인식기의 구현)

  • 오준택; 이우범; 김욱현
    • Proceedings of the IEEK Conference, 2000.11c, pp.121-124, 2000
  • In this paper, we propose a pen-gesture recognition system for the user interface of multimedia terminals, which demands fast processing and a high recognition rate. The system runs in real time and mediates interaction between a graphic module and a text module: text editing is performed either by pen gestures in the graphic module or by direct editing in the text module, and all 14 editing functions are supported. A pen gesture is recognized by matching the classification features extracted from the input strokes against a pen-gesture model. The model is built from classification features (cross number, direction change, direction-code number, position relation, and distance-ratio information) for the 15 defined gesture types. In a recognition experiment, the proposed system achieved a 98% correct recognition rate and an average processing time of 30ms.
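
Two of the stroke-level features named in this abstract can be illustrated with a small sketch. The 8-way direction quantization, the sample stroke, and the helper names are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def direction_codes(points):
    """Quantize each stroke segment into one of 8 direction codes (0 = east)."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                       # segment vectors
    ang = np.arctan2(d[:, 1], d[:, 0])             # angle in [-pi, pi]
    return (np.round(ang / (np.pi / 4)).astype(int) % 8).tolist()

def direction_changes(codes):
    """Count how often consecutive direction codes differ (a second listed feature)."""
    return sum(a != b for a, b in zip(codes, codes[1:]))

stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2)]
codes = direction_codes(stroke)        # [0, 0, 2, 2, 4]
changes = direction_changes(codes)     # 2
```

The paper's model would combine such per-stroke features (together with cross number, position relation, and distance ratio) to match an input against the 15 gesture types.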

The Types and Features of Gestures in Science Discourse of Elementary Students (초등학생의 과학 담화에서 나타나는 몸짓의 유형과 특징)

  • Na, Jiyeon; Song, Jinwoong
    • Journal of Korean Elementary Science Education, v.31 no.4, pp.450-462, 2012
  • Gestures are a common phenomenon of human communication. Little research has addressed gestures in science education, and most gesture research has focused on individuals, even though learning occurs through sociocultural interactions with friends, family, teachers, and others. Hence, the purpose of this study was to identify the types and features of the gestures elementary students make when communicating with peers in science discourse. A group of six fourth-graders was observed in eight science discourses in which they discussed ideas related to thermal concepts; additional data were collected through interviews and questionnaires. The analysis showed that students' gestures in science discourse fall into seven types: signal iconic, illustrative iconic, personal deictic, object deictic, beat, emotional metaphoric, and content metaphoric gestures. These gestures served to repeat, supplement, or replace utterances in communication. Students frequently used gestures to express scientific terms metaphorically as everyday terms. Gestures were shared, imitated, and transferred during communication, and through these processes students' gestures also influenced other students' ideas.

Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar; Odgerel, Bayanmunkh; Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems, v.15 no.3, pp.186-191, 2015
  • Hand gestures are the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method using fuzzy-logic-based classification with a new type of sensor array. In some cases, the feature patterns of hand gesture signals cannot be uniquely distinguished when people perform the same gesture in different ways; differences in hand shape and in the skeletal articulation of the arm also influence the process. A variety of features were extracted, and the efficient features that make gestures distinguishable were selected. Because similar feature patterns still occur across different hand gestures, fuzzy logic is applied to classify them: fuzzy rules are defined over the feature patterns of the input signal, and an adaptive neuro-fuzzy inference system generates these rules automatically so that gestures can be classified from a small number of feature patterns. In addition, emotion expression follows the gesture recognition for the resulting human-robot interaction. The proposed method was tested on several hand gesture datasets and validated with different evaluation metrics. Experimental results show that, compared with existing methods, our method detects more hand gestures, achieves robust recognition with the corresponding emotion expressions, and runs in real time.
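
The fuzzy classification step can be sketched as follows: membership functions score how well a feature pattern matches each gesture's rule, and the strongest rule wins. The triangular membership functions, the two gesture names, and the rule centers and widths are invented for illustration; the paper derives its rules automatically with an adaptive neuro-fuzzy inference system.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# one hypothetical rule per gesture: (center, half-width) per feature dimension
rules = {
    "swipe_left":  [(0.2, 0.2), (0.8, 0.2)],
    "swipe_right": [(0.8, 0.2), (0.2, 0.2)],
}

def classify(features):
    scores = {}
    for gesture, mfs in rules.items():
        # min-AND of memberships: the usual fuzzy rule activation strength
        scores[gesture] = min(
            tri(x, b - w, b, b + w) for x, (b, w) in zip(features, mfs)
        )
    return max(scores, key=scores.get)

print(classify([0.25, 0.75]))   # swipe_left
```

Overlapping membership supports are what let similar feature patterns from different gestures still be ranked rather than rejected outright.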

Hand Gesture Recognition for Understanding Conducting Action (지휘행동 이해를 위한 손동작 인식)

  • Je, Hong-Mo; Kim, Ji-Man; Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference, 2007.10c, pp.263-266, 2007
  • We introduce a vision-based hand gesture recognition system for understanding musical time and beat patterns without extra special devices. The proposed recognition is simple and reliable and rests on two features. First, the motion-direction code, a quantized code for motion directions, is proposed. Second, the conducting feature point (CFP), the point at which the motion changes suddenly, is also proposed. The system extracts the human hand region by segmenting the depth information generated by stereo matching of image sequences. It then tracks the motion of the center of gravity (COG) of the extracted hand region and generates the gesture features, namely the CFPs and the direction codes. Finally, we obtain the current timing pattern of the beat and tempo of the music being played. Experimental results on the test data set show a musical time pattern and tempo recognition rate of over 86.42% for motion-histogram matching and 79.75% for CFP tracking alone.
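
The two features this abstract names can be sketched on a toy COG trajectory. The 8-way quantization and the direction-change threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def motion_direction_codes(cogs):
    """Quantized motion-direction code per frame transition of the hand COG."""
    d = np.diff(np.asarray(cogs, dtype=float), axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])
    return np.round(ang / (np.pi / 4)).astype(int) % 8

def conducting_feature_points(codes, min_jump=2):
    """Frame indices where the direction code jumps by >= min_jump (mod 8)."""
    jumps = np.abs(np.diff(codes))
    jumps = np.minimum(jumps, 8 - jumps)          # wrap-around code distance
    return (np.nonzero(jumps >= min_jump)[0] + 1).tolist()

cogs = [(0, 0), (1, -1), (2, -2), (3, -1), (4, 0)]   # a "V"-shaped beat motion
codes = motion_direction_codes(cogs)                 # [7, 7, 1, 1]
cfps = conducting_feature_points(codes)              # [2]: the bottom of the "V"
```

In the paper, sequences of such CFPs and direction codes are what get matched against beat and tempo patterns.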

A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung; Kim, Hee-Jin; Chung, Min-K.
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.541-550, 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with product commands, various studies have tried to derive gestures from users. Because users' experiences differ, many different gestures are derived for the same command, so organizing the gestures systematically and identifying their common patterns has become an important issue. Method: Related studies on gesture taxonomy and sign-language notation were investigated. Results: Through the literature review, five elements of static gesture were selected and three forms of dynamic gesture were identified; temporal variability (repetition) was additionally selected. Conclusion: A notation method that follows a fixed combination sequence of the gesture elements was suggested. Application: The notation method might be used to describe and organize user-defined three-dimensional hand gestures systematically.
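
A notation that concatenates gesture elements in a fixed sequence might look like the sketch below. The element names, their order, and the encoding format are invented for illustration; the paper defines its own five static elements, three dynamic forms, and repetition marker.

```python
from dataclasses import dataclass

@dataclass
class GestureNotation:
    # hypothetical static elements
    hand_shape: str
    orientation: str
    location: str
    # hypothetical dynamic form plus a repetition marker
    movement: str
    repetition: int = 1

    def encode(self):
        """Emit the elements in a fixed combination sequence as one code string."""
        code = f"{self.hand_shape}-{self.orientation}-{self.location}-{self.movement}"
        return code + (f" x{self.repetition}" if self.repetition > 1 else "")

g = GestureNotation("fist", "palm-down", "chest", "circle", repetition=2)
print(g.encode())   # fist-palm-down-chest-circle x2
```

A fixed element order is what makes two users' codes for the same gesture directly comparable.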

Analysis of Gesture Features on Character Expression of <Avatar> (캐릭터 성격표현에 의한 제스처 특징 분석 : 영화 <아바타>의 '나비족' 캐릭터를 중심으로)

  • Lee, Young-Sook; Choi, Eun-Jin
    • Cartoon and Animation Studies, s.24, pp.155-172, 2011
  • The purpose of this study is to analyze the gesture features that express the personalities of the Na'vi characters in <Avatar>. To analyze the characters' personality types, the study applied the Enneagram classification to the script of <Avatar>. Character features are classified by personality type, and the metaphorical character of each expression is then obtained through gesture analysis of <Avatar>. In this way it becomes possible to design characters in digital image content whose gestures fit their personalities. The study also suggests methods for creating attractive characters and for expressing them through personality-based gesture.

Dynamic gesture recognition using a model-based temporal self-similarity and its application to taebo gesture recognition

  • Lee, Kyoung-Mi; Won, Hey-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.11, pp.2824-2838, 2013
  • Much attention has recently been paid to analyzing dynamic human gestures that vary over time, and most of it concerns spatio-temporal features rather than analyzing each frame separately. For accurate dynamic gesture recognition, motion-feature extraction algorithms must find representative features that uniquely identify time-varying gestures. This paper proposes a new feature-extraction algorithm using temporal self-similarity based on a hierarchical human model. Because the conventional temporal self-similarity method computes the whole movement across continuous frames, it cannot distinguish different gestures with the same total amount of movement. The proposed model-based method groups the body parts of a hierarchical model into several sets and calculates the movement of each set. Recognition results depend on how the sets are formed; the best approach is to separate frequently used body parts from less-used ones. A multiclass support vector machine, whose optimization algorithm is based on structural support vector machines, is then applied. The effectiveness of the proposed algorithm is demonstrated in an application to taebo gesture recognition: the model-based temporal self-similarity method overcomes the shortcomings of the conventional method, and its recognition results are superior to those of the conventional method.
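
The core idea, one self-similarity matrix per body-part set rather than one for the whole body, can be sketched as below. The joint groupings, dimensions, and random trajectories are assumptions for illustration.

```python
import numpy as np

def self_similarity(traj):
    """Pairwise frame-to-frame distance matrix for one joint set (traj is T x D)."""
    diff = traj[:, None, :] - traj[None, :, :]
    return np.linalg.norm(diff, axis=-1)           # T x T matrix

T = 30
rng = np.random.default_rng(1)
# mock per-frame joint coordinates for two hypothetical part groups
joints = {"arms": rng.normal(size=(T, 6)), "legs": rng.normal(size=(T, 6))}
sets = {"upper": ["arms"], "lower": ["legs"]}

ssm_per_set = {
    name: self_similarity(np.hstack([joints[j] for j in parts]))
    for name, parts in sets.items()
}
# each T x T matrix becomes a feature for the multiclass SVM stage
```

Two gestures with equal total movement but different limb usage produce different per-set matrices, which is exactly the distinction the whole-body matrix loses.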

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems, v.17 no.2, pp.385-398, 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding gradient disappearance or gradient explosion as the network layer deepens; consequently, the difficulty of model optimisation is simplified. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with the dense connection multiplexing feature information network, the proposed algorithm is optimised in feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results from the ISOGD gesture dataset and Gesture dataset prove that the proposed algorithm affords a fast convergence speed and high accuracy.
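
The shortcut idea the abstract builds on (the block learns a correction F(x) and outputs F(x) + x, so the identity path keeps gradients flowing as layers deepen) can be shown with a toy dense sketch. This is not the paper's 2-D convolutional architecture; the sizes and weights are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut block: output = F(x) + x, with F a small two-layer map."""
    return relu(x @ w1) @ w2 + x

rng = np.random.default_rng(2)
x = rng.normal(size=(1, 8))
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)   # the "+ x" path passes x through unchanged
```

With all-zero weights the block reduces exactly to the identity, which is why stacking such blocks does not have to degrade the signal as the network deepens.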