Speed-limit Sign Recognition Using Convolutional Neural Network Based on Random Forest


  • Received : 2015.09.12
  • Accepted : 2015.11.04
  • Published : 2015.11.30


In this paper, we propose a speed-limit sign recognition system that is robust to sign changes caused by exterior damage or by color contrast variation due to the direction of light. To recognize speed-limit signs, we apply a CNN (Convolutional Neural Network), which shows outstanding performance in the pattern recognition field. However, a conventional CNN extracts features through multiple hidden layers and classifies the result with a fully connected MLP (Multi-Layer Perceptron), so its major drawback is the long time required for training and testing. In this paper, we replace the fully connected classifier with a randomly connected one by combining a random forest with the output of two CNN layers. Using eight speed-limit signs from the GTSRB (German Traffic Sign Recognition Benchmark), we show that the CNN with a random forest outperforms the CNN with an SVM (Support Vector Machine) or an MLP classifier.
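The pipeline described above, convolutional feature extraction followed by a random forest in place of a fully connected MLP head, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fixed random kernels stand in for trained CNN filters, the toy two-class data stands in for cropped sign patches, and scikit-learn's `RandomForestClassifier` stands in for the paper's boosted random forest.

```python
# Hypothetical sketch of the abstract's pipeline: conv/ReLU/pool features
# feeding a random forest classifier instead of a fully connected MLP.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def conv_relu_pool(images, kernels):
    """Valid 2D convolution with each kernel, ReLU, then 2x2 max pooling."""
    n, h, w = images.shape
    kh, kw = kernels.shape[1:]
    feats = []
    for k in kernels:
        out = np.zeros((n, h - kh + 1, w - kw + 1))
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[:, i, j] = np.sum(images[:, i:i+kh, j:j+kw] * k, axis=(1, 2))
        out = np.maximum(out, 0.0)                       # ReLU nonlinearity
        ph, pw = out.shape[1] // 2, out.shape[2] // 2
        pooled = out[:, :ph*2, :pw*2].reshape(n, ph, 2, pw, 2).max(axis=(2, 4))
        feats.append(pooled.reshape(n, -1))
    return np.concatenate(feats, axis=1)

# Toy data standing in for cropped sign patches (two classes of 16x16 images).
n_per_class = 60
class0 = rng.normal(0.0, 1.0, (n_per_class, 16, 16))
class1 = rng.normal(0.0, 1.0, (n_per_class, 16, 16))
class1[:, 4:12, 4:12] += 2.0                             # class 1 has a bright blob
X_img = np.concatenate([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Fixed random kernels as a stand-in for the trained CNN's filters.
kernels = rng.normal(0.0, 0.3, (4, 3, 3))
X_feat = conv_relu_pool(X_img, kernels)

# Random forest classifier applied to the CNN-style feature vectors.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_feat[::2], y[::2])                             # train on even-indexed samples
acc = clf.score(X_feat[1::2], y[1::2])                   # evaluate on odd-indexed samples
print(f"held-out accuracy: {acc:.2f}")
```

In the paper's setting the features would come from trained convolutional layers and the data from GTSRB crops; the design point is the same, the forest consumes the pooled feature maps directly, avoiding the fully connected MLP's training cost.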


Convolutional Neural Network; Random Forest; speed-limit sign recognition; feature extraction; ADAS

