• Title/Abstract/Keyword: Gesture

929 search results

Three Dimensional Hand Gesture Taxonomy for Commands

  • Choi, Eun-Jung;Lee, Dong-Hun;Chung, Min-K.
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.483-492 / 2012
  • Objective: The aim of this study is to suggest a three-dimensional(3D) hand gesture taxonomy that systematically organizes the user's intentions behind deriving a certain gesture. Background: With advances in gesture recognition technology, various researchers have focused on deriving intuitive gestures for commands from users. In most previous studies, however, the users' reasons for deriving a certain gesture for a command were used only as a reference for grouping gestures. Method: A total of eleven studies which categorized gestures accompanied by speech were investigated, and a case study with thirty participants was conducted to understand the gesture-features derived from users in detail. Results: Through the literature review, a total of nine gesture-features were extracted. After the case study, the nine gesture-features were narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture-features was developed. Application: The taxonomy might be used as a checklist for understanding the users' reasons.

제스처 제안 시스템의 설계 및 구현에 관한 연구 (A Study on Design and Implementation of Gesture Proposal System)

  • 문성현;윤태현;황인성;김석규;박준;한상영
    • 한국멀티미디어학회논문지 / Vol. 14, No. 10 / pp.1311-1322 / 2011
  • Gestures allow commands to be executed quickly and easily, and are therefore used in numerous applications including smartphones, tablet PCs, and web browsers. To apply gestures to an application, they must be designed with both the user and the system in mind, and several tools have been developed to support such gesture design. Nevertheless, two difficulties remain. First, every gesture must be designed by hand. Second, the designed gestures must be trained repeatedly so that the recognizer can recognize them correctly. This paper proposes automated training, gesture proposal, and gesture generation to provide a simpler gesture design environment. These remove the need to train gestures, and by computing the Mahalanobis distance between generated gestures and collected gestures and proposing gestures in order of their likelihood of being recognized well, they reduce the effort of designing every gesture by hand.
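The ranking step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes gestures are represented as fixed-length feature vectors and simplifies the Mahalanobis distance with a diagonal covariance so no matrix inversion is needed.

```python
# Hypothetical sketch: rank candidate gestures by Mahalanobis distance to the
# distribution of collected user gestures (diagonal-covariance simplification).
def mean_and_var(samples):
    """Per-dimension mean and variance of a list of feature vectors."""
    n = len(samples)
    dims = len(samples[0])
    mu = [sum(s[d] for s in samples) / n for d in range(dims)]
    var = [sum((s[d] - mu[d]) ** 2 for s in samples) / n for d in range(dims)]
    return mu, [v if v > 1e-9 else 1e-9 for v in var]  # guard zero variance

def mahalanobis_sq(x, mu, var):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))

def rank_candidates(candidates, collected):
    """Order generated gestures so the most recognizable-looking come first."""
    mu, var = mean_and_var(collected)
    return sorted(candidates, key=lambda g: mahalanobis_sq(g, mu, var))
```

Candidates closest to the collected-gesture distribution are proposed first, which matches the abstract's idea of suggesting gestures in order of likely recognizability.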

궤적의 방향 변화 분석에 의한 제스처 인식 알고리듬 (Gesture Recognition Algorithm by Analyzing Direction Change of Trajectory)

  • 박장현;김민수
    • 한국정밀공학회지 / Vol. 22, No. 4 / pp.121-127 / 2005
  • The widespread use of intelligent robots creates a need for communication between robots and human beings, and gesture recognition is currently being studied to support such communication. Previous gesture recognition algorithms, however, tend to require not only complicated algorithms but also a separate training process to achieve high recognition rates. This study suggests a gesture recognition algorithm based on a computer vision system, which is relatively simple and more efficient in recognizing various human gestures. After tracking the hand gesture using a marker, direction changes of the gesture trajectory are analyzed to determine a simple gesture code that carries the minimal information needed for recognition. A map is developed to recognize gestures that can be expressed with different gesture codes. Using numerical and geometrical trajectories, the advantages and disadvantages of the suggested algorithm were evaluated.
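The direction-change idea above can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's algorithm: it quantizes each trajectory step into one of four directions, collapses runs of identical codes into a minimal gesture code, and looks the code up in a small map (the gesture names in `GESTURE_MAP` are invented for illustration).

```python
# Hypothetical sketch: quantize a marker trajectory into 4-direction codes,
# collapse repeated codes, and match the result against a gesture map.
def direction(p, q):
    """Dominant axis direction from point p to point q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if abs(dx) >= abs(dy):
        return 'R' if dx >= 0 else 'L'
    return 'U' if dy > 0 else 'D'

def gesture_code(points):
    """Minimal code: one letter per run of same-direction movement."""
    codes = [direction(points[i], points[i + 1]) for i in range(len(points) - 1)]
    collapsed = [c for i, c in enumerate(codes) if i == 0 or c != codes[i - 1]]
    return ''.join(collapsed)

GESTURE_MAP = {  # illustrative entries only
    'R': 'swipe-right',
    'RD': 'corner',
    'RL': 'back-and-forth',
}

def recognise(points):
    return GESTURE_MAP.get(gesture_code(points), 'unknown')
```

Because runs are collapsed, a long straight stroke and a short one produce the same minimal code, which is the sense in which the code carries only the information needed for recognition.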

A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung;Kim, Hee-Jin;Chung, Min-K.
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.541-550 / 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with the commands of products, various studies have tried to derive gestures from users. Because users' experiences vary, many different gestures are derived for a single command, so organizing these gestures systematically and identifying similar patterns among them have become important issues. Method: Related studies on gesture taxonomy and sign-language notation were investigated. Results: Through the literature review, a total of five elements of static gestures were selected, and a total of three forms of dynamic gestures were identified. Temporal variability(repetition) was additionally selected. Conclusion: A notation method which follows a combination sequence of the gesture elements was suggested. Application: The notation method might be used to describe and organize user-defined gestures systematically.
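A slot-based notation of this kind can be sketched as a small data structure. The element names below (hand shape, orientation, location, dynamic form, repetition) are illustrative assumptions, not the paper's actual vocabulary; the point is only that a gesture becomes a fixed-order combination sequence of its elements.

```python
from dataclasses import dataclass

# Hypothetical sketch: a gesture notated as a fixed-order combination of
# static elements, an optional dynamic form, and a repetition marker.
@dataclass
class HandGesture:
    hand_shape: str          # static element (illustrative)
    orientation: str         # static element (illustrative)
    location: str            # static element (illustrative)
    dynamic_form: str = ''   # empty string means a purely static gesture
    repeated: bool = False   # temporal variability (repetition)

    def notation(self):
        parts = [self.hand_shape, self.orientation, self.location]
        if self.dynamic_form:
            parts.append(self.dynamic_form)
        if self.repeated:
            parts.append('x2+')  # marker for a repeated gesture
        return '-'.join(parts)
```

Two user-defined gestures that notate to the same string can then be treated as the same pattern, which is what makes such a notation useful for organizing elicited gestures.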

Conditions of Applications, Situations and Functions Applicable to Gesture Interface

  • Ryu, Tae-Beum;Lee, Jae-Hong;Song, Joo-Bong;Yun, Myung-Hwan
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.507-513 / 2012
  • Objective: This study developed a hierarchy of conditions of applications(devices), situations and functions which are applicable to gesture interface. Background: Gesture interface is one of the promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although there have been many studies on developing new gesture-based devices and gesture interfaces, little was known about which applications, situations and functions are applicable to gesture interface. Method: This study searched about 120 papers relevant to designing and applying gesture interfaces and vocabularies to find the gesture-applicable conditions of applications, situations and functions. The conditions extracted from 16 closely-related papers were rearranged, and a hierarchy was developed to evaluate the applicability of applications, situations and functions to gesture interface. Results: This study summarized 10, 10 and 6 conditions of applications, situations and functions, respectively. In addition, the gesture-applicable condition hierarchy of applications, situations and functions was developed based on the semantic similarity, ordering and serial or parallel relationships among them. Conclusion: This study collected gesture-applicable conditions of applications, situations and functions, and a hierarchy was developed to evaluate the applicability of gesture interface. Application: The conditions and hierarchy can be used to develop a framework and detailed criteria for evaluating the applicability of applications, situations and functions. Moreover, they can enable designers of gesture interfaces and vocabularies to determine which applications, situations and functions are applicable to gesture interface.

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / Vol. 7, No. 1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface using spatial context information. The proposed gesture interface recognizes a system action (e.g. commands) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed using a smart environment scenario in which a user can interact, using gestures, with digital information embedded in physical objects.
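One simple way to realize the probabilistic integration the abstract describes is sketched below. This is an assumption, not the paper's model: it fuses per-action scores from the gesture recognizer and from spatial context under a naive independence assumption, then picks the most likely action.

```python
# Hypothetical sketch: fuse gesture evidence with spatial-context evidence
# by multiplying per-action scores (naive independence assumption), then
# renormalize and choose the most probable system action.
def fuse(p_action_given_gesture, p_action_given_context):
    scores = {}
    for action, p_g in p_action_given_gesture.items():
        scores[action] = p_g * p_action_given_context.get(action, 0.0)
    total = sum(scores.values()) or 1.0  # avoid division by zero
    return {a: s / total for a, s in scores.items()}

def best_action(p_gesture, p_context):
    fused = fuse(p_gesture, p_context)
    return max(fused, key=fused.get)
```

The example below shows context overriding an ambiguous gesture: the gesture slightly favors one action, but the spatial context (e.g. which object the gesture targets) flips the decision.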

SVM을 이용한 동적 동작인식: 체감형 동화에 적용 (Dynamic Gesture Recognition using SVM and its Application to an Interactive Storybook)

  • 이경미
    • 한국콘텐츠학회논문지 / Vol. 13, No. 4 / pp.64-72 / 2013
  • This study proposes a dynamic gesture recognition algorithm using SVM, which is well suited to recognizing multi-dimensional data. First, the start and end of a motion are located in the Kinect video frames to segment meaningful gesture frames, and the number of frames is normalized to a fixed length. From the normalized frames, gesture features based on a human body model, using the positions of body parts and the relations between them, are extracted and used for recognition. Each C-SVM classifier is trained with positive and negative data for its gesture, and the final gesture is chosen as the one whose C-SVM returns the largest output value. The proposed algorithm was applied as a gesture interface to an interactive storybook designed to let children participate actively, going beyond a Flash-based narrated storybook.
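Two steps from this pipeline can be sketched compactly. This is a toy illustration, not the paper's implementation: frame normalization is done by nearest-index resampling, and the per-gesture C-SVMs are stood in for by arbitrary scoring functions so that only the max-score decision rule is shown.

```python
# Hypothetical sketch of two pipeline steps: (1) normalize a segmented
# gesture to a fixed frame count; (2) pick the gesture whose one-vs-rest
# classifier returns the largest decision value.
def normalise_frames(frames, target_len):
    """Resample to target_len frames by nearest-neighbour index mapping."""
    n = len(frames)
    return [frames[min(n - 1, round(i * (n - 1) / (target_len - 1)))]
            for i in range(target_len)]

def classify(feature, svms):
    """svms maps gesture name -> decision function; highest score wins."""
    scores = {name: f(feature) for name, f in svms.items()}
    return max(scores, key=scores.get)
```

In the real system each decision function would be a trained C-SVM's decision value for its gesture; here linear stand-ins are enough to exercise the argmax rule.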

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung;Kim, Kyungdoh
    • 대한인간공학회지 / Vol. 36, No. 2 / pp.109-121 / 2017
  • Objective: This study aims to find suitable types and styles for gesture interaction as a remote control on smart TVs. Background: Smart TVs are being developed rapidly worldwide, and gesture interaction has a wide range of research areas, especially based on vision techniques. However, most studies have focused on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. Therefore, it is necessary to check what users prefer in terms of gesture types and styles for each operation command. Method: We conducted an experiment to extract the target user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we looked at the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. This study was thus carried out as a process of selecting smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. With agreement level as a basis, we compared six types of gestures and five styles of gestures for each command. As for gesture type, participants generally preferred gestures of the Path-Moving type; the Pan and Scroll commands showed the highest agreement level (1.00) of the 18 commands. As for gesture style, participants preferred a manipulative style in 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Based on an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures based on the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies, and the method used in this study can be utilized in various domains.
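The agreement level used to compare commands here is commonly computed, following Wobbrock et al.'s gesture-elicitation formulation, as the sum of squared proportions of participants who proposed identical gestures for a command. A minimal sketch, assuming each participant's proposal is reduced to a comparable label:

```python
from collections import Counter

# Hypothetical sketch: agreement level for one command = sum over identical-
# gesture groups of (group size / number of participants) squared. It is 1.0
# when everyone proposes the same gesture, and approaches 0 as proposals spread.
def agreement(proposals):
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())
```

An agreement of 1.00, as reported for the Pan and Scroll commands, corresponds to every participant proposing the same gesture.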

디지털 데스크에서의 실시간 Fingertip Gesture 인식 (Real-time Fingertip Gesture Recognition on the Digital Desk)

  • 문채현;강현;김항준
    • 융합신호처리학회 학술대회논문집 / 한국신호처리시스템학회 2003년도 하계학술대회 논문집 / pp.26-29 / 2003
  • Recent trends in computing environments are more natural human-computer interfaces and the development of hardware that is invisible to the user. The digital desk is a representative computing environment combining these two ideas: on a digital desk, the user's bare fingertip, with no attached device, serves as the computer's input device. This paper proposes a method that extracts the trajectory of the user's fingertip on a digital desk and recognizes symbolic gestures from the extracted trajectory. The proposed method consists of three modules: a Fingertip tracker, a Gesture mode selector, and a Symbolic gesture recognizer. The Fingertip tracker extracts the fingertip trajectory from camera images; the Gesture mode selector determines whether the extracted trajectory is a symbolic gesture; and the Symbolic gesture recognizer recognizes the symbolic gesture from the trajectory. Applying this method to a system that recognizes proofreading marks to correct electronic documents produced good recognition results.


Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding vanishing or exploding gradients as the network deepens; consequently, the difficulty of model optimisation is reduced. Additional convolutional layers are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with a densely connected feature-reuse network, the proposed algorithm optimises feature reuse to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
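The residual idea underlying this abstract can be shown in miniature. This is a toy numeric sketch, not the paper's network: the learned transformation F(x) is stood in for by a per-element scale-and-shift followed by ReLU, and the block output is F(x) + x, the identity shortcut that lets gradients flow even when F contributes little.

```python
# Hypothetical sketch of a residual block: output = F(x) + x, where the
# identity shortcut carries the input past the learned transformation.
def relu(vec):
    return [max(0.0, v) for v in vec]

def residual_block(x, weight, bias):
    # Toy stand-in for the convolutional branch F: scale, shift, ReLU.
    fx = relu([weight * xi + bias for xi in x])
    # Shortcut addition: even if F(x) is all zeros, the input passes through.
    return [fi + xi for fi, xi in zip(fx, x)]
```

With weight and bias at zero, F(x) vanishes and the block reduces to the identity, which is exactly the property that keeps deep stacks of such blocks trainable.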