• Title/Summary/Keyword: Kinect


A performance improvement for extracting moving objects using color image and depth image in KINECT video system (컬러영상과 깊이영상을 이용한 KINECT 비디오 시스템에서 움직임 물체 추출을 위한 성능 향상 기법)

  • You, Yong-in;Moon, Jong-duk;Jung, Ji-yong;Kim, Man-jae;Kim, Jin-soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.111-113 / 2012
  • KINECT is a gesture recognition camera produced by Microsoft Corp. The KINECT SDK is widely available, and many applications are actively being developed with it. In particular, KIET (Kinect Image Extraction Technique) has mainly been used for extracting moving objects from the input image. However, KIET has difficulty extracting the human head due to the absorption of light. To overcome this problem, this paper proposes a new method that improves KIET performance by using both the color image and the depth image. Experimental results show that the proposed method performs better than the conventional KIET algorithm.

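The color/depth combination described in the abstract above can be sketched in a few lines. The snippet below is only an illustrative reading, not the authors' KIET code: it assumes the color and depth frames are already available as NumPy arrays from some Kinect driver, builds a foreground mask from each modality against a background frame, and merges them so that regions the depth sensor misses (for example, hair absorbing the IR light) can still be recovered from the color difference.

```python
import cv2
import numpy as np

def extract_moving_object(color, depth, background_color, background_depth,
                          color_thresh=30, depth_thresh=50):
    """Combine a color-difference mask and a depth-difference mask.

    color, background_color : HxWx3 uint8 BGR frames
    depth, background_depth : HxW uint16 depth maps in millimetres
    Returns a binary foreground mask (uint8, 0 or 255).
    """
    # Mask from the color channel: absolute difference to the background frame.
    color_diff = cv2.absdiff(color, background_color)
    gray_diff = cv2.cvtColor(color_diff, cv2.COLOR_BGR2GRAY)
    _, color_mask = cv2.threshold(gray_diff, color_thresh, 255, cv2.THRESH_BINARY)

    # Mask from the depth channel; pixels with depth 0 are invalid (no IR return,
    # e.g. absorbed by dark hair) and are excluded from the depth mask.
    depth_diff = cv2.absdiff(depth, background_depth)
    depth_mask = np.where((depth > 0) & (depth_diff > depth_thresh), 255, 0).astype(np.uint8)

    # Union of the two masks, cleaned up with a morphological opening.
    fused = cv2.bitwise_or(color_mask, depth_mask)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
```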

Real-time monitoring system with Kinect v2 using notifications on mobile devices (Kinect V2를 이용한 모바일 장치 실시간 알림 모니터링 시스템)

  • Eric, Niyonsaba;Jang, Jong Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.277-280 / 2016
  • A real-time remote monitoring system is valuable in many surveillance situations: it allows someone to be informed of what is happening at the monitored locations. Kinect v2 is a new kind of camera that gives computers eyes and can generate several kinds of data, such as color and depth images, audio input, and skeletal data. In this paper, using the Kinect v2 sensor and its depth image, we present a monitoring system for a space covered by the Kinect. Within the space covered by the Kinect camera, we define a target area to monitor using a depth range set by minimum and maximum distances. Using a computer vision library (Emgu CV), when an object is tracked in the target area, the Kinect camera captures the whole color image and stores it in a database, and the user simultaneously receives a notification on his mobile device wherever he has internet access.

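The paper uses Emgu CV (a C# wrapper for OpenCV); the sketch below expresses the same depth-range idea with OpenCV in Python and is only an assumption of how the detection step might look. The frame source, database layer, and push-notification service are hypothetical placeholders passed in as callables.

```python
import cv2
import numpy as np

MIN_DEPTH_MM = 800    # near boundary of the monitored volume (assumed value)
MAX_DEPTH_MM = 2500   # far boundary of the monitored volume (assumed value)
MIN_AREA_PX = 1500    # ignore blobs smaller than this

def object_in_target_area(depth_frame):
    """Return True if a sufficiently large object lies inside the depth range.

    depth_frame : HxW uint16 depth map in millimetres (e.g. from a Kinect v2 driver).
    """
    # Keep only pixels whose depth lies between the configured min/max distances.
    in_range = (depth_frame >= MIN_DEPTH_MM) & (depth_frame <= MAX_DEPTH_MM)
    mask = in_range.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= MIN_AREA_PX for c in contours)

def monitor(get_depth_frame, get_color_frame, save_to_database, notify_user):
    """Hypothetical main loop: the four callables stand in for the Kinect frame
    source, the database layer, and the mobile notification service."""
    while True:
        depth = get_depth_frame()
        if object_in_target_area(depth):
            color = get_color_frame()
            save_to_database(color)   # store the full color image
            notify_user("Object detected in monitored area")
```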

Motion correction captured by Kinect based on synchronized motion database (동기화된 동작 데이터베이스를 활용한 Kinect 포착 동작의 보정 기술)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.41-47 / 2017
  • In this paper, we present a method for data-driven correction of noisy motion data captured from a low-end RGB-D camera such as the Kinect. Our key idea is to construct a synchronized motion database captured simultaneously with the Kinect and an additional specialized motion capture device, so that the database contains a set of erroneous poses from the Kinect together with their corresponding correct poses from the mocap device. At runtime, given motion data captured with the Kinect, we search the database for the K most similar candidate Kinect poses and synthesize a new motion using only their corresponding poses from the mocap device. We show how to build such a motion database effectively and provide a method for querying and searching for a desired motion in the database. We also adapt the lazy learning framework to synthesize corrected poses from the query results.
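
The database lookup and synthesis step described in the abstract above can be sketched as a K-nearest-neighbour query followed by a distance-weighted blend of the corresponding mocap poses. This is only an illustrative reading of the abstract: poses are flattened to joint-coordinate vectors, and the paper's actual pose representation, distance metric, and blending scheme are not specified here.

```python
import numpy as np

def correct_pose(kinect_pose, db_kinect_poses, db_mocap_poses, k=5, eps=1e-6):
    """Data-driven correction of one noisy Kinect pose.

    kinect_pose     : (J*3,) flattened joint positions from the Kinect
    db_kinect_poses : (N, J*3) erroneous poses recorded with the Kinect
    db_mocap_poses  : (N, J*3) synchronized correct poses from the mocap device
    Returns an estimated corrected pose as a weighted blend of the K nearest
    mocap poses (a simple instance of lazy / instance-based learning).
    """
    # Distance of the query to every Kinect pose in the database.
    dists = np.linalg.norm(db_kinect_poses - kinect_pose, axis=1)

    # Indices of the K most similar candidate Kinect poses.
    nearest = np.argsort(dists)[:k]

    # Inverse-distance weights, then blend the *mocap* poses of those candidates.
    weights = 1.0 / (dists[nearest] + eps)
    weights /= weights.sum()
    return weights @ db_mocap_poses[nearest]
```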

Distance Measurement Using the Kinect Sensor with Neuro-image Processing

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / v.4 no.6 / pp.379-383 / 2015
  • This paper presents an approach to detect object distance using the recently developed low-cost Kinect sensor. The technique is based on Kinect color and depth-image processing and can be used to design various computer-vision applications, such as object recognition, video surveillance, and autonomous path finding. The proposed technique applies keypoint feature detection to the Kinect depth image and takes advantage of the depth pixels to obtain the feature distance directly from the depth image. This greatly reduces the computational overhead while yielding the pixel distance in the Kinect-captured images.
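
A minimal sketch of reading feature distances directly from the depth image is shown below. It uses ORB keypoints as a stand-in for whatever detector the paper employs (the abstract does not name one) and assumes a registered depth map in millimetres.

```python
import cv2
import numpy as np

def keypoint_distances(depth_mm):
    """Detect keypoints in a Kinect depth image and read their distances.

    depth_mm : HxW uint16 depth map in millimetres.
    Returns a list of ((x, y), distance_in_metres) tuples.
    """
    # Normalize the 16-bit depth to 8 bits so a standard detector can run on it.
    depth_8u = cv2.normalize(depth_mm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # ORB is used here only as an example keypoint detector.
    orb = cv2.ORB_create(nfeatures=200)
    keypoints = orb.detect(depth_8u, None)

    results = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        d = depth_mm[y, x]
        if d > 0:  # 0 means no valid depth measurement at this pixel
            results.append(((x, y), d / 1000.0))
    return results
```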

Accuracy Comparison of Spatiotemporal Gait Variables Measured by the Microsoft Kinect 2 Sensor Directed Toward and Oblique to the Movement Direction (정면과 측면에 위치시킨 마이크로 소프트 키넥트 2로 측정한 보행 시공간 변인 정확성 비교)

  • Hwang, Jisun;Kim, Eun-jin;Hwang, Seonhong
    • Physical Therapy Korea / v.26 no.1 / pp.1-7 / 2019
  • Background: The Microsoft Kinect, a low-cost gaming device, has been studied as a promising clinical gait analysis tool with satisfactory reliability and validity. However, its accuracy is guaranteed only when it is properly positioned in front of a subject. Objectives: The purpose of this study was to identify the error when the Kinect was positioned at a 45° angle to the longitudinal walking plane, compared with when the Kinect was positioned in front of a subject. Methods: Sixteen healthy adults performed two testing sessions consisting of walking toward the Kinect and walking at a 45° angle to it. Spatiotemporal outcome measures related to stride length, stride time, step length, step time, and walking speed were examined. To assess the error between the Kinect and a 3D motion analysis system, mean absolute errors (MAE) were determined and compared. Results: The MAE of stride length, stride time, step time, and walking speed when the Kinect was set in front of the subjects were .36, .04, .20, and .32, respectively. The MAE when the Kinect was placed obliquely were .67, .09, .37, and .58, respectively. There were significant differences in the spatiotemporal outcomes between the two conditions. Conclusion: Based on our experience, positioning the Kinect directly in front of the person walking toward it provides the optimal spatiotemporal data. Therefore, we conclude that the Kinect should be placed carefully and appropriately in clinical settings.
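
The error metric used in the comparison above is a plain mean absolute error between Kinect-derived and reference (3D motion analysis) gait variables. A minimal example of the computation is given below with made-up placeholder numbers, not the study's data.

```python
import numpy as np

def mean_absolute_error(kinect_values, reference_values):
    """MAE between Kinect measurements and the 3D motion analysis reference."""
    kinect_values = np.asarray(kinect_values, dtype=float)
    reference_values = np.asarray(reference_values, dtype=float)
    return np.mean(np.abs(kinect_values - reference_values))

# Hypothetical example: stride lengths (m) from both systems for a few strides.
kinect_stride = [1.21, 1.18, 1.25, 1.30]
reference_stride = [1.25, 1.22, 1.24, 1.27]
print(mean_absolute_error(kinect_stride, reference_stride))  # 0.03
```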

Control of Humanoid Robot Using Kinect Sensor (Kinect 센서를 사용한 휴머노이드 로봇의 제어)

  • Kim, Oh Sun;Han, Man Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.05a / pp.616-617 / 2013
  • This paper introduces a new method that controls a humanoid robot by detecting human motion using a Kinect sensor. By processing the output of the Kinect's depth sensor, we build a human stick model that represents each joint of the human body. We detect a specific motion by calculating the distances and angles between joints. We then send the control message to the robot via Bluetooth wireless communication.

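The joint-angle step described above can be illustrated as follows. This is only a sketch, assuming 3D joint positions are already available from the Kinect skeleton stream; the detection rule and the Bluetooth send are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` formed by the segments to `parent` and `child`.

    Each argument is a 3D joint position (x, y, z) from the Kinect skeleton.
    """
    v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def detect_right_arm_raised(shoulder, elbow, wrist):
    """Example rule: the arm counts as 'raised' if it is nearly straight and the
    wrist is above the shoulder (the Kinect skeleton's y axis points up)."""
    straight = joint_angle(shoulder, elbow, wrist) > 150.0
    above = wrist[1] > shoulder[1]
    return straight and above

# Hypothetical use: send_bluetooth() stands in for the actual Bluetooth layer.
# if detect_right_arm_raised(shoulder, elbow, wrist):
#     send_bluetooth(b"RAISE_ARM")
```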

Human motion recognition and application using Kinect sensor (Kinect 센서를 사용한 인체동작인식 및 활용)

  • Jeong, Jong-Hun;Han, Man-Su
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.625-626 / 2013
  • This paper introduces a new method that detects human motions using a Kinect sensor. It also describes a method to mimic the detected human motions. We first build a human stick model by processing the output of the Kinect sensor. We then detect a specific motion using the position of each joint of the human stick model and the angles between joints.

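A stick model like the one described above can be represented simply as a list of bones (pairs of joint names) over a dictionary of 3D joint positions. The short sketch below is an assumed illustration of that data structure and of a small angle-based pose descriptor, not the authors' implementation; the joint names follow common Kinect skeleton naming.

```python
import numpy as np

# The stick model: bones as (parent joint, child joint) name pairs (assumed subset).
BONES = [
    ("shoulder_right", "elbow_right"), ("elbow_right", "wrist_right"),
    ("shoulder_left", "elbow_left"), ("elbow_left", "wrist_left"),
]

def bone_vectors(joints):
    """joints: dict mapping joint name -> (x, y, z) from the Kinect sensor.
    Returns a dict mapping each bone to its direction vector."""
    return {
        (a, b): np.asarray(joints[b], dtype=float) - np.asarray(joints[a], dtype=float)
        for a, b in BONES
    }

def mimic_angles(joints):
    """Pose descriptor used for recognizing/mimicking a motion: the elbow angles,
    computed from the two bones that meet at each elbow."""
    vec = bone_vectors(joints)
    angles = {}
    for side in ("right", "left"):
        upper = -vec[(f"shoulder_{side}", f"elbow_{side}")]   # elbow -> shoulder
        lower = vec[(f"elbow_{side}", f"wrist_{side}")]       # elbow -> wrist
        cos_a = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
        angles[f"elbow_{side}"] = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return angles
```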

Marker-less Calibration of Multiple Kinect Devices for 3D Environment Reconstruction (3차원 환경 복원을 위한 다중 키넥트의 마커리스 캘리브레이션)

  • Lee, Suwon
    • Journal of Korea Multimedia Society / v.22 no.10 / pp.1142-1148 / 2019
  • Reconstruction of the three-dimensional (3D) environment is a key aspect of augmented reality and augmented virtuality, which utilize and incorporate a user's surroundings. Such reconstruction can be easily realized by employing a Kinect device. However, multiple Kinect devices are required for enhancing the reconstruction density and for spatial expansion. When employing multiple Kinect devices, they must be calibrated with respect to each other in advance, and a marker is often used for this purpose. However, a marker needs to be placed at each calibration, and the result of marker detection significantly affects the calibration accuracy. Therefore, a user-friendly, efficient, accurate, and marker-less method for calibrating multiple Kinect devices is proposed in this study. The proposed method includes a joint tracking algorithm for approximate calibration, and the obtained result is further refined by applying the iterative closest point algorithm. Experimental results indicate that the proposed method is a convenient alternative to conventional marker-based methods for calibrating multiple Kinect devices. Hence, the proposed method can be incorporated in various applications of augmented reality and augmented virtuality that require 3D environment reconstruction by employing multiple Kinect devices.
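
The approximate calibration step, aligning two Kinects from pairs of simultaneously tracked joint positions, amounts to estimating a rigid transform between corresponding 3D points. The sketch below uses the standard SVD-based (Kabsch) solution and is an assumed illustration; the paper's joint-tracking details and the subsequent ICP refinement are not reproduced here.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Estimate rotation R and translation t such that R @ a + t ≈ b.

    points_a, points_b : (N, 3) corresponding joint positions seen by Kinect A
    and Kinect B at the same time instants (N >= 3, not all collinear).
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)

    # Center both point sets on their centroids.
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    a0, b0 = a - ca, b - cb

    # SVD of the cross-covariance matrix (Kabsch algorithm).
    H = a0.T @ b0
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = cb - R @ ca
    return R, t
```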

A Design and Implementation of Yoga Exercise Program Using Azure Kinect

  • Park, Jong Hoon;Sim, Dae Han;Jun, Young Pyo;Lee, Hongrae
    • Journal of the Korea Society of Computer and Information / v.26 no.6 / pp.37-46 / 2021
  • In this paper, we designed and implemented a program that measures and judges the accuracy of yoga postures using the Azure Kinect. The program measures all of the user's joint positions through the Azure Kinect camera and sensors. The measured joint values are used to determine accuracy in two ways. First, joint angles are computed from the measured joint data using trigonometry and the Pythagorean theorem. Second, the measured joint values are converted into relative position values. The computed angles and relative position values are then compared with those of the desired posture to determine accuracy. The Azure Kinect camera view is shown on screen so that users can check their posture, and the program gives feedback on the user's posture accuracy to help them improve it.
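
The two accuracy checks described above can be sketched as follows: the joint angle is obtained from the three pairwise joint distances via the law of cosines (the trigonometric route the abstract alludes to, with each distance computed by the Pythagorean theorem), and relative positions are taken with respect to a reference joint. The joint names and tolerances are assumptions for illustration, not values from the paper.

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D joint positions (Pythagorean theorem)."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def joint_angle(parent, joint, child):
    """Angle at `joint` (degrees) from the three pairwise distances (law of cosines)."""
    a = distance(joint, parent)
    b = distance(joint, child)
    c = distance(parent, child)
    cos_angle = (a * a + b * b - c * c) / (2 * a * b)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def posture_matches(user_joints, target_joints, angle_triplets,
                    angle_tol_deg=15.0, position_tol=0.15):
    """Compare the user's pose with the target yoga pose.

    user_joints / target_joints : dicts mapping joint name -> (x, y, z)
    angle_triplets : list of (parent, joint, child) name triples to check
    """
    # Check 1: joint angles within tolerance of the target pose.
    for parent, joint, child in angle_triplets:
        ua = joint_angle(user_joints[parent], user_joints[joint], user_joints[child])
        ta = joint_angle(target_joints[parent], target_joints[joint], target_joints[child])
        if abs(ua - ta) > angle_tol_deg:
            return False

    # Check 2: joint positions relative to a reference joint (here the pelvis).
    uref, tref = user_joints["pelvis"], target_joints["pelvis"]
    for name in user_joints:
        rel_user = tuple(u - r for u, r in zip(user_joints[name], uref))
        rel_target = tuple(t - r for t, r in zip(target_joints[name], tref))
        if distance(rel_user, rel_target) > position_tol:
            return False
    return True
```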

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, the importance of real-time computer animation technology is growing. However, the computer animation process consists mostly of manual work or marker-based motion capture, which requires a very long time even for experienced professionals to obtain realistic results. To solve these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we study four methods of implementing natural human movement in deep learning model and Kinect camera-based animation production systems. Each method is chosen considering its environmental characteristics and accuracy. The first method uses a Kinect camera. The second method uses a Kinect camera and a calibration algorithm. The third method uses a deep learning model. The fourth method uses a deep learning model and the Kinect together. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, produced the best results compared with the other methods.
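
The fourth configuration, combining the deep-learning pose estimate with the Kinect skeleton, is not detailed in the abstract. A plausible minimal sketch is a per-joint confidence-weighted blend of the two estimates, shown below purely as an assumption of how such a fusion might look rather than as the paper's method.

```python
import numpy as np

def fuse_poses(kinect_joints, dl_joints, kinect_conf, dl_conf, eps=1e-6):
    """Blend two pose estimates of the same skeleton, joint by joint.

    kinect_joints, dl_joints : (J, 3) joint positions from the Kinect and from
    the deep learning model (assumed to share the same joint order and frame).
    kinect_conf, dl_conf     : (J,) per-joint confidence scores in [0, 1].
    Returns the (J, 3) fused joint positions.
    """
    kinect_joints = np.asarray(kinect_joints, dtype=float)
    dl_joints = np.asarray(dl_joints, dtype=float)
    wk = np.asarray(kinect_conf, dtype=float)[:, None]
    wd = np.asarray(dl_conf, dtype=float)[:, None]

    # Confidence-weighted average; eps avoids division by zero when both are 0.
    return (wk * kinect_joints + wd * dl_joints) / (wk + wd + eps)
```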