 Title & Authors
An Object Recognition Method Based on Depth Information for an Indoor Mobile Robot
Park, Jungkil; Park, Jaebyung;
 
 Abstract
In this paper, an object recognition method based on depth information from an RGB-D camera (Xtion) is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points. Next, the remaining point cloud is segmented into individual object point clouds by k-means clustering, and the normal vector at each point is estimated using a k-d tree search. The obtained normal vectors are classified into 18 classes by a trained multi-layer perceptron and used as features for object recognition. To distinguish one object from another, the similarity between them is measured using the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments are carried out with several similar boxes.
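The similarity-measurement step described in the abstract can be illustrated with a short example. Below is a minimal C++ sketch, assuming each detected object is represented as a sequence of the 18 surface-normal class labels output by the multi-layer perceptron; the function name and the example label sequences are hypothetical and not taken from the paper.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Levenshtein (edit) distance between two label sequences. Each element is
// one of the 18 surface-normal classes produced by the perceptron; a smaller
// distance means the two objects' normal-vector "strings" are more alike.
std::size_t levenshtein(const std::vector<int>& a, const std::vector<int>& b) {
    const std::size_t m = a.size(), n = b.size();
    std::vector<std::size_t> dp(n + 1);          // dp[j] = distance(a[0..i), b[0..j))
    for (std::size_t j = 0; j <= n; ++j) dp[j] = j;
    for (std::size_t i = 1; i <= m; ++i) {
        std::size_t prev = dp[0];                // value of dp[i-1][j-1]
        dp[0] = i;
        for (std::size_t j = 1; j <= n; ++j) {
            std::size_t cur = dp[j];             // value of dp[i-1][j]
            std::size_t cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            dp[j] = std::min({dp[j] + 1,         // deletion
                              dp[j - 1] + 1,     // insertion
                              prev + cost});     // substitution
            prev = cur;
        }
    }
    return dp[n];
}

int main() {
    // Hypothetical class-label sequences for two segmented objects.
    std::vector<int> object_a = {3, 3, 7, 7, 12, 12, 12, 5};
    std::vector<int> object_b = {3, 7, 7, 7, 12, 12, 5, 5};
    std::cout << "edit distance = " << levenshtein(object_a, object_b) << "\n";
    return 0;
}

The single-row dynamic-programming formulation keeps memory usage proportional to the shorter sequence, which is a common choice when many pairwise object comparisons are needed.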
 Keywords
object recognition;depth;point cloud;Levenshtein distance;multi-layer neural network;
 Language
Korean
 Cited by
1.
J. H. Kim and I. C. Kim, "Efficient 3D scene labeling using object detectors and location prior maps," Journal of Institute of Control, Robotics and Systems (in Korean), vol. 21, no. 11, pp. 996-1002, 2015.
 References
1.
C. S. Lee, E. S. Park, J. H. Lee, J. H. Kim, and H. K. Kim, "Pillar and vehicle classification using ultrasonic sensors and statistical regression method," Journal of Institute of Control, Robotics and Systems (in Korean), vol. 20, no. 4, pp. 428-436, 2014.

2.
D. G. Lowe, "Object recognition from local scale-invariant features," Proc. of the Seventh International Conference on Computer Vision, vol. 2, pp. 1150-1157, Sep. 1999.

3.
Y. S. Jeon, J. G. Choi, and J. O. Lee, "Development of a SLAM system for small UAVs in indoor environments using Gaussian processes," Journal of Institute of Control, Robotics and Systems (in Korean), vol. 20, no. 11, pp. 1098-1102, 2014.

4.
H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.

5.
K. S. Lee, D. H. Kim, S. M. Rho, and E. J. Hwang, "Improving matching performance of SURF using color and relative position," The Journal of Korea Navigation Institute (in Korean), vol. 16, no. 2, pp. 394-399, 2012.

6.
L. C. Caron, Y. Song, D. Filliat, and A. Gepperth, "Neural network based 2D/3D fusion for robotic object recognition," Proc. of European Symposium on Artificial Neural Networks, pp. 127-132, Apr. 2014.

7.
M. Blum, J. T. Springenberg, J. Wülfing, and M. Riedmiller, "A learned feature descriptor for object recognition in RGB-D data," Proc. of IEEE International Conference on Robotics and Automation, pp. 1298-1303, May 2012.

8.
H. Y. Lee, et al., "IR image segmentation using GrabCut," Journal of Korean Institute of Intelligent Systems (in Korean), vol. 21, no. 2, pp. 260-267, 2011.

9.
R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," Proc. of IEEE International Conference on Robotics and Automation, Shanghai, China, pp. 1-4, May 2011.

10.
M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.

11.
R. B. Rusu, "Semantic 3D object maps for everyday manipulation in human living environments," KI - Künstliche Intelligenz, vol. 24, no. 4, pp. 345-348, 2010.

12.
K. Klasing, D. Althoff, D. Wollherr, and M. Buss, "Comparison of surface normal estimation methods for range sensing applications," Proc. of IEEE International Conference on Robotics and Automation, pp. 3206-3211, May 2009.

13.
M. Caudill and C. Butler, "Understanding Neural Networks: Computer Explorations," MIT Press, 1992.

14.
R. O. Duda, P. E. Hart, and D. G. Stork, "Pattern classification," Wiley Interscience, 2000.

15.
H. Zhang and Z. Wang, "A comprehensive review of stability analysis of continuous-time recurrent neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 7, pp. 1229-1262, 2014.

16.
V. I. Levenshtein, "Binary codes capable of correcting deletions, insertions and reversals," Soviet Physics Doklady, vol. 10, no. 8, pp. 707-710, 1966.