Speed-limit Sign Recognition Using Convolutional Neural Network Based on Random Forest
  • Journal title : Journal of Broadcast Engineering
  • Volume 20, Issue 6,  2015, pp.938-949
  • Publisher : The Korean Institute of Broadcast and Media Engineers
  • DOI : 10.5909/JBE.2015.20.6.938
 Title & Authors
Speed-limit Sign Recognition Using Convolutional Neural Network Based on Random Forest
Lee, EunJu; Nam, Jae-Yeal; Ko, ByoungChul;
 Abstract
In this paper, we propose a speed-limit sign recognition system that is robust to changes in sign appearance caused by exterior damage or by color-contrast variation due to lighting direction. For recognition, we apply a convolutional neural network (CNN), which has shown outstanding performance in the pattern recognition field. However, a conventional CNN extracts features through multiple hidden layers and classifies them with a fully connected multi-layer perceptron (MLP), so its major drawback is the long time required for training and testing. In this paper, we replace the fully connected classifier with a randomly connected one by feeding the output of two CNN layers into a random forest. Using eight speed-limit signs from the GTSRB (German Traffic Sign Recognition Benchmark), we show that the CNN with a random forest achieves better recognition performance than the CNN with an SVM (support vector machine) or MLP classifier.
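The pipeline described above (CNN layers as a feature extractor, random forest as the classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a single fixed random convolution filter with ReLU and max pooling in place of the paper's trained two-layer CNN, synthetic image patches in place of GTSRB signs, and scikit-learn's `RandomForestClassifier` as the forest.

```python
# Hedged sketch: CNN-style feature extraction feeding a random forest,
# standing in for the paper's CNN + random forest recognizer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def conv_pool_features(images, kernel, pool=2):
    """Valid 2-D convolution + ReLU + max pooling, flattened per image."""
    kh, kw = kernel.shape
    feats = []
    for img in images:
        h, w = img.shape
        conv = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(conv.shape[0]):
            for j in range(conv.shape[1]):
                conv[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        conv = np.maximum(conv, 0.0)  # ReLU activation
        ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
        pooled = (conv[:ph * pool, :pw * pool]
                  .reshape(ph, pool, pw, pool)
                  .max(axis=(1, 3)))   # non-overlapping max pooling
        feats.append(pooled.ravel())
    return np.array(feats)

rng = np.random.default_rng(0)
# Toy stand-in for speed-limit sign patches: two synthetic classes
# separated by mean intensity (real input would be GTSRB sign crops).
X0 = rng.normal(0.0, 1.0, (40, 16, 16))
X1 = rng.normal(1.5, 1.0, (40, 16, 16))
images = np.concatenate([X0, X1])
labels = np.array([0] * 40 + [1] * 40)

kernel = rng.normal(size=(3, 3))  # fixed random filter for this sketch
features = conv_pool_features(images, kernel)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))
```

The design point mirrored here is that the forest's randomly chosen feature subsets replace the fully connected MLP stage, so only the convolutional feature extractor and an ensemble of shallow trees need to be evaluated at test time.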
 Keywords
Convolutional Neural Network; random forest; speed-limit sign recognition; feature extraction; ADAS
 Language
Korean
 References
1.
L. Kwangyoung, K. Seunggyu and B. Hyeran, "Real-time Traffic Sign Detection using Color and Shape Feature," Korea Computer Congress 2012, Vol. 39, No. 1, pp. 504-506, June, 2012.

2.
G. J. L. Lawrence, B. J. Hardy, J. A. Carroll, W. M. S. Donaldson, C. Visvikis and D. A. Peel, “A study on the feasibility of measures relating to the protection of pedestrians and other vulnerable road users,” Final Tech. Report, TRL. Limited, pp. 206, June, 2004.

3.
N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” IEEE Conf. Computer Vision and Pattern Recognition, Vol. 1, pp. 886-893, 2005.

4.
N. Barnes, A. Zelinsky and L. Fletcher, “Real-time speed sign detection using the radial symmetry detector,” IEEE Trans. Intelligent Transportation Systems, Vol. 9, No. 2, pp. 322-332, 2008.

5.
S. Maldonado-Bascon, S. Lafuente-Arroyo, P. Gil-Jimenez, H. Gomez-Moreno and F. Lopez-Ferreras, “Road-Sign Detection and Recognition Based on Support Vector Machines,” IEEE Trans. Intelligent Transportation Systems, Vol. 8, No. 2, pp. 264-278, June, 2007.

6.
Y. Aoyagi and T. Asakura, "A Study on Traffic Sign Recognition in Scene Image Using Genetic Algorithms and Neural Networks," IEEE Int. Conf. Industrial Electronics, Control, and Instrumentation, Vol. 3, pp. 1838-1843, Aug. 1996.

7.
G. JaWon, H. MinCheol, K. Byoung Chul and N. Jae-Yeal, “Real-time Speed-Limit Sign Detection and Recognition using Spatial Pyramid Feature and Boosted Random Forest,” 12th International Conference on Image Analysis and Recognition, pp.437-445, July, 2015.

8.
M. Mathias, R. Timofte, R. Benenson and L.V. Gool, “Traffic sign recognition—How far are we from the solution?,” IEEE Int. Conf. Neural Networks, pp. 1-8, 2013.

9.
N. Barnes, A. Zelinsky and L.S. Fletcher, “Real-time speed sign detection using the radial symmetry detector,” IEEE Trans. Intelligent Transportation Systems, pp. 322-332, 2008.

10.
Y. Aoyagi and T. Asakura, “Detection and recognition of traffic sign in scene image using genetic algorithms and neural networks,” SICE Annual Conference, pp. 1343-1348, 1996.

11.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the ACM International Conference on Multimedia, pp. 675-678, November, 2014.

12.
Y. LeCun, C. Cortes and C.J. Burges, “The MNIST database of handwritten digits,” 1998.

13.
Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” in Proceedings of the IEEE, pp. 2278-2324, 1998.

14.
P. Sermanet and Y. LeCun, “Traffic sign recognition with multi-scale convolutional networks,” IEEE Int. Conf. Neural Networks, pp. 2809-2813, 2011.

15.
A. de la Escalera, J. Armingol, J. Pastor and F. Rodriguez, “Visual Sign Information Extraction and Identification by Deformable Models for Intelligent Vehicles,” IEEE Trans. Intelligent Transportation Systems, Vol. 5, No. 2, pp. 57-68, June, 2004.

16.
D.S. Kang, N.C. Griswold and N. Kehtarnavaz, “An invariant traffic sign recognition system based on sequential color processing and geometrical transformation,” Proc. of IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 88-93, 1994.

17.
X. Baro, S. Escalera, J. Vitria, O. Pujol and P. Radeva, “Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification,” IEEE Trans. Intelligent Transportation Systems, Vol. 10, No. 1, pp. 113-126, Mar, 2009.

18.
X. Glorot, A. Bordes and Y. Bengio, "Deep sparse rectifier networks", in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Vol. 15, pp. 315-323, 2011.

19.
K. Kyungmin, H. Jungwoo and Z. Byoungtak, “Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos,” Journal of Korea Intelligent Information System Society, Vol. 42, No. 4, pp. 451-458, 2015.

20.
L. Breiman, “Random Forests,” Machine Learning, Vol. 45, pp. 5-32, 2001.

21.
J. Stallkamp, M. Schlipsing, J. Salmen and C. Igel, “The German traffic sign recognition benchmark: a multi-class classification competition,” IEEE Conf. Neural Networks, pp. 1453-1460, July, 2011.