Hand Gesture Segmentation Method Using a Wrist-Worn Wearable Device
Lee, Dong-Woo; Son, Yong-Ki; Kim, Bae-Sun; Kim, Minkyu; Jeong, Hyun-Tae; Cho, Il-Yeon
Objective: We introduce a hand gesture segmentation method for a wrist-worn wearable device that recognizes the simple gestures of clenching and unclenching one's fist.
Background: Many smart watches and fitness bands are on the market, and most already adopt gesture interaction for ease of use. However, such devices often malfunction because a user's gesture commands are difficult to distinguish from the motions of daily life. A simple and clear gesture segmentation method is needed to improve gesture interaction performance.
Method: First, we defined making a fist (start of a gesture command) and opening the fist (end of a gesture command) as segmentation gestures. Clenching and unclenching one's fist are simple and intuitive actions. We also designed a single gesture as an ordered sequence: making a fist, a command gesture, and opening the fist. To detect the segmentation gestures at the underside of the wrist, we used a wrist strap on which an array of infrared sensors (emitters and receivers) was mounted. When a user makes or opens a fist, the shape of the underside of the wrist changes, and with it the amount of reflected infrared light detected by the receiver sensors.
Results: An experiment was conducted to evaluate gesture segmentation performance. Twelve participants took part: 10 males and 2 females, with an average age of 38. The recognition rates of the segmentation gestures, clenching and unclenching the fist, were 99.58% and 100%, respectively.
Conclusion: Through the experiment, we evaluated gesture segmentation performance and its usability. The experimental results show the potential of the suggested segmentation method.
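The segmentation idea described in the Method section can be sketched in code: fuse the readings of the infrared receiver array into a single reflectance level, then use a hysteresis threshold to mark a gesture's start (fist clenched) and end (fist opened). This is a minimal illustrative sketch, not the authors' implementation; the function names, the mean-based fusion, and the threshold values are all assumptions.

```python
# Hypothetical sketch of clench/unclench gesture segmentation.
# Each frame is an array of normalized IR receiver readings from the
# wrist strap; thresholds and fusion are illustrative assumptions.
from statistics import mean

CLENCH_THRESHOLD = 0.7    # reflectance above this -> fist clenched (gesture start)
UNCLENCH_THRESHOLD = 0.4  # reflectance below this -> fist opened (gesture end)

def segment_gestures(frames):
    """Yield (start_index, end_index) spans bounded by clench/unclench.

    `frames` is a sequence of per-sample IR receiver arrays.
    """
    in_gesture = False
    start = None
    spans = []
    for i, frame in enumerate(frames):
        level = mean(frame)  # fuse the sensor array into one reflectance value
        if not in_gesture and level > CLENCH_THRESHOLD:
            in_gesture, start = True, i       # making a fist: command begins
        elif in_gesture and level < UNCLENCH_THRESHOLD:
            in_gesture = False
            spans.append((start, i))          # opening the fist: command ends
    return spans

# Example: low reflectance, then a clenched span, then open again.
readings = [[0.2, 0.2]] * 3 + [[0.9, 0.9]] * 5 + [[0.2, 0.2]] * 2
print(segment_gestures(readings))  # -> [(3, 8)]
```

The hysteresis gap between the two thresholds keeps small fluctuations near a single cutoff from toggling the state; the command gesture itself would be recognized from the samples inside each returned span.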
Application: The results of this study can be used to improve gesture interaction on wrist-worn wearable devices such as smart watches and fitness bands.