 Title & Authors
An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS
Seung, Teak-Young; Kwon, Gi-Chang; Moon, Kwang-Seok; Lee, Suk-Hwan; Kwon, Ki-Ryong
 Abstract
ADAS (Advanced Driver Assistance Systems) and street-furniture data collection vehicles such as an MMS (Mobile Mapping System) require a method for estimating object locations in order to recover the spatial information of objects in road images. Conventional methods, however, require an additional hardware module to gather the spatial information of an object and have high computational complexity. In this paper, a scheme is proposed for estimating the position of an object in road images, namely the coordinate of a road sign captured by a single camera, using the relationship between the object's size in pixels and its size in the real world. After the equation relating the pixel size to the real size of the road sign is estimated, the scheme uses the vehicle's GPS coordinate and heading direction to obtain the coordinate of the road sign observed in the images. Experiments on a test video set confirm that the proposed method maps the estimated object coordinates onto a commercial map with high accuracy. The proposed method can therefore be used for MMS in commercial applications.
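As a rough sketch of the underlying idea only (the abstract does not give the paper's exact estimated equation), the following Python fragment assumes a pinhole camera model, a road sign of known real-world height, and a locally flat Earth around the GPS fix; every function name, parameter, and number in it is illustrative rather than taken from the paper.

import math

# Illustrative sketch, not the paper's method: estimate the GPS coordinate of a
# road sign of known real-world size from a single image, assuming a pinhole
# camera model and a locally flat Earth around the vehicle's GPS fix.

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def distance_to_sign_m(real_height_m, pixel_height, focal_length_px):
    # Pinhole model: apparent pixel height shrinks in proportion to distance.
    return focal_length_px * real_height_m / pixel_height

def bearing_to_sign_deg(heading_deg, pixel_x, image_center_x, focal_length_px):
    # Absolute bearing = vehicle heading + angular offset of the sign's
    # horizontal pixel position from the image center.
    offset_rad = math.atan2(pixel_x - image_center_x, focal_length_px)
    return heading_deg + math.degrees(offset_rad)

def offset_gps(lat_deg, lon_deg, distance_m, bearing_deg):
    # Shift a (lat, lon) fix by distance_m along bearing_deg (short-range approximation).
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon

# Example: a 0.9 m sign that appears 45 px tall and 80 px right of the image
# center, seen by a camera with a 1200 px focal length while heading due east.
dist = distance_to_sign_m(0.9, 45, 1200)
bearing = bearing_to_sign_deg(90.0, 720, 640, 1200)
print(offset_gps(35.1796, 129.0756, dist, bearing))

The abstract indicates that the paper estimates the pixel-size-to-real-size relationship empirically rather than assuming a fixed focal length as in the sketch above.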
 Keywords
Single Camera; GPS; Relative Distance; Coordinate Estimation
 Language
Korean