Deep Learning Architectures and Applications

  • Ahn, SungMahn (College of Business Administration, Kookmin University)
  • Received : 2016.05.17
  • Accepted : 2016.05.19
  • Published : 2016.06.30

Abstract

A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These architectures have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have produced compelling applications in the fields mentioned above.

Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation of "backward propagation of errors," is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. It calculates the gradient of an error function with respect to all the weights in the network; the gradient is then fed to the optimization method, which uses it to update the weights in an attempt to minimize the error function. (A minimal code sketch of this procedure appears below.)

Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train the deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or the previous layer's) neurons. Shared weights mean that the same weights and bias are used for every local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which usually follow immediately after convolutional layers and simplify the information in the convolutional layer's output. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training such networks took weeks a few years ago, but thanks to progress in GPUs and in algorithms, training time has been reduced to several hours.
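As an illustration of the training procedure described above, here is a minimal sketch, not taken from the paper: a tiny one-hidden-layer network fit by backpropagation and gradient descent on made-up data, with the layer sizes, learning rate, and squared-error objective all being illustrative choices.

    import numpy as np

    # Minimal sketch: backpropagation + gradient descent for a tiny
    # one-hidden-layer network on a toy task (all values illustrative).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

    W1 = rng.normal(scale=0.1, size=(3, 5)); b1 = np.zeros((1, 5))
    W2 = rng.normal(scale=0.1, size=(5, 1)); b2 = np.zeros((1, 1))
    lr = 0.5                                               # gradient-descent step size

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(1000):
        # Forward pass: compute the network's prediction.
        h = sigmoid(X @ W1 + b1)                 # hidden-layer activations
        p = sigmoid(h @ W2 + b2)                 # output prediction
        # Backward pass: gradient of the squared error w.r.t. every weight.
        dz2 = (p - y) * p * (1 - p) / len(X)     # error signal at the output
        dW2 = h.T @ dz2; db2 = dz2.sum(axis=0, keepdims=True)
        dz1 = (dz2 @ W2.T) * h * (1 - h)         # error propagated back to hidden layer
        dW1 = X.T @ dz1; db1 = dz1.sum(axis=0, keepdims=True)
        # Gradient-descent update: move every weight against its gradient.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print("final squared error:", float(np.mean((p - y) ** 2)))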
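The three convolutional ideas can be made concrete in the same hedged way; the 6x6 image, the single 3x3 filter, and the 2x2 pooling window below are invented for illustration, not drawn from any architecture discussed in the paper.

    import numpy as np

    # Sketch of the three CNN ideas: local receptive fields, shared
    # weights, and pooling (all values invented for illustration).
    image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
    kernel = np.array([[1., 0., -1.],
                       [1., 0., -1.],
                       [1., 0., -1.]])                 # one shared 3x3 weight set
    bias = 0.0

    # Convolution: the SAME kernel and bias are applied to every 3x3
    # local receptive field, so each output unit detects the same
    # feature at a different location in the image.
    feature_map = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            field = image[i:i+3, j:j+3]                # local receptive field
            feature_map[i, j] = np.sum(field * kernel) + bias

    # Pooling: simplify the feature map by keeping only the maximum
    # of each non-overlapping 2x2 block.
    pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
    print(feature_map.shape, pooled.shape)             # (4, 4) (2, 2)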
Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, that is, vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. Incorporating an idea known as long short-term memory units (LSTMs) into RNNs makes it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
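To make the LSTM remedy concrete, here is a hedged sketch of one step of a standard LSTM cell, in the spirit of Hochreiter and Schmidhuber's formulation; the layer sizes and random weights are illustrative. The point to notice is the additive cell-state update, which helps gradients survive being propagated backward through time.

    import numpy as np

    # Minimal sketch of a standard LSTM step (sizes illustrative).
    rng = np.random.default_rng(1)
    n_in, n_hid = 4, 8

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One weight matrix and bias per gate: input (i), forget (f),
    # output (o), and candidate memory (g).
    W = {k: rng.normal(scale=0.1, size=(n_in + n_hid, n_hid)) for k in "ifog"}
    b = {k: np.zeros(n_hid) for k in "ifog"}

    def lstm_step(x, h, c):
        z = np.concatenate([x, h])           # current input + previous hidden state
        i = sigmoid(z @ W["i"] + b["i"])     # input gate: admit new information
        f = sigmoid(z @ W["f"] + b["f"])     # forget gate: keep or erase old memory
        o = sigmoid(z @ W["o"] + b["o"])     # output gate: expose the memory
        g = np.tanh(z @ W["g"] + b["g"])     # candidate memory content
        c = f * c + i * g                    # additive update of the internal memory
        h = o * np.tanh(c)                   # new hidden state
        return h, c

    # Carry the internal state across an arbitrary-length input sequence.
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in rng.normal(size=(10, n_in)):    # a sequence of 10 inputs
        h, c = lstm_step(x, h, c)
    print(h.shape)                           # (8,)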

Deep learning is an advanced form of the artificial neural network, a model from the field of artificial intelligence: a layered network whose internal (hidden) layers are stacked in multiple stages. The three main deep learning models are the convolutional neural network, the recurrent neural network, and the deep belief network. Of these, the first two, both supervised learning models, currently draw the most attention, as many interesting studies based on them have been published. This paper therefore first reviews the backpropagation algorithm, the basic method for optimizing the weights of supervised learning models, and then examines the structures and application cases of convolutional and recurrent neural networks. The deep belief network, which is not covered in the body of this paper, has so far received relatively less attention than convolutional or recurrent neural networks. However, unlike CNNs and RNNs, the deep belief network is an unsupervised learning model, and since humans and animals learn on their own through observation, unsupervised learning models will ultimately become a subject that calls for much more research.
