Deep Learning Architectures and Applications
 Title & Authors
Deep Learning Architectures and Applications
Ahn, SungMahn
 Abstract
A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have produced state-of-the-art results on a variety of tasks. Among these architectures, convolutional neural networks and recurrent neural networks are supervised learning models, and in recent years they have gained more popularity than unsupervised models such as deep belief networks, because they have produced the most prominent applications in the fields mentioned above.

Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors," is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights so as to minimize the error function.

Convolutional neural networks use a special architecture that is particularly well suited to classifying images. This architecture makes convolutional networks fast to train, which in turn makes it practical to train deep, multi-layer networks that are very good at classifying images; today, deep convolutional networks are used in most neural networks for image recognition. Convolutional networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for every local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers, which simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. A few years ago, training such networks took weeks; thanks to progress in GPUs and algorithmic improvements, training time has been reduced to several hours.
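To make the backpropagation-plus-gradient-descent procedure summarized above concrete, the following is a minimal NumPy sketch, not code from the paper: a one-hidden-layer network is trained on the XOR toy problem, with the layer sizes, learning rate, and number of steps chosen arbitrarily for illustration.

```python
import numpy as np

# Toy data: learn XOR with a tiny one-hidden-layer network
# (sizes, learning rate, and step count are arbitrary choices for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    error = out - y                   # derivative of 0.5*||out - y||^2 w.r.t. out

    # Backward pass: gradient of the error function w.r.t. every weight
    delta_out = error * out * (1 - out)            # error signal at the output layer
    delta_h = (delta_out @ W2.T) * h * (1 - h)     # propagated back to the hidden layer
    grad_W2, grad_b2 = h.T @ delta_out, delta_out.sum(axis=0)
    grad_W1, grad_b1 = X.T @ delta_h, delta_h.sum(axis=0)

    # Gradient descent: the optimizer uses the gradient to update the weights
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```

In practice, deep learning frameworks compute these gradients automatically, but the update rule is the same: gradients flow backward through the layers, and the optimizer moves each weight against its gradient.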
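The three basic ideas behind convolutional networks can be illustrated just as directly. The sketch below, again an illustrative assumption rather than the paper's implementation, slides one shared 3x3 kernel (shared weights) over every 3x3 local receptive field of a toy 8x8 image and then applies 2x2 max pooling to simplify the resulting feature map; the image and kernel values are random and carry no meaning.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))        # toy 8x8 "image" (values made up)
kernel = rng.normal(size=(3, 3))  # one shared set of weights for a 3x3 receptive field
bias = 0.0

# Convolution: every hidden neuron looks at a 3x3 local receptive field and
# uses the *same* kernel and bias, so each neuron detects the same feature
# at a different location in the image.
feature_map = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        patch = image[i:i + 3, j:j + 3]              # local receptive field
        feature_map[i, j] = np.sum(patch * kernel) + bias

# Max pooling: summarize each 2x2 block of the feature map by its maximum,
# simplifying the information coming out of the convolutional layer.
pooled = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        pooled[i, j] = feature_map[2 * i:2 * i + 2, 2 * j:2 * j + 2].max()

print(feature_map.shape, pooled.shape)   # (6, 6) (3, 3)
```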
Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in the early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for many time steps, the gradient can become extremely unstable and hard to learn from. Incorporating long short-term memory units (LSTMs) into RNNs makes it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
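To show how the recurrence and the internal state work in practice, here is a minimal sketch of an LSTM-style cell in NumPy; the layer sizes, weight initialization, and the ten-step input sequence are arbitrary assumptions made only for this example. The forget, input, and output gates regulate a cell state that is carried from one time step to the next, which is the mechanism that makes it easier for useful gradient information to survive backpropagation through time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_hid = 3, 5                       # toy sizes, chosen arbitrarily

# One shared set of LSTM weights, reused at every time step.
Wf, Wi, Wo, Wc = (rng.normal(scale=0.1, size=(n_in + n_hid, n_hid)) for _ in range(4))
bf, bi, bo, bc = (np.zeros(n_hid) for _ in range(4))

def lstm_step(x, h, c):
    """One time step: gates decide what to forget, what to store, and what to output."""
    z = np.concatenate([x, h])                 # current input plus previous hidden state
    f = sigmoid(z @ Wf + bf)                   # forget gate
    i = sigmoid(z @ Wi + bi)                   # input gate
    o = sigmoid(z @ Wo + bo)                   # output gate
    c_new = f * c + i * np.tanh(z @ Wc + bc)   # updated cell (long-term) memory
    h_new = o * np.tanh(c_new)                 # new hidden state
    return h_new, c_new

# The internal state (h, c) is carried across the sequence, which is what
# lets the network use earlier inputs when processing later ones.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.random((10, n_in)):               # an arbitrary length-10 input sequence
    h, c = lstm_step(x, h, c)
print(h)
```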
 Keywords
Deep learning; Convolutional neural networks; Recurrent neural networks; Error backpropagation algorithm
 Language
Korean
 Cited by
1.
윤유동, 양영욱, 임희석, "A SNS Data-driven Comparative Analysis on Changes of Attitudes toward Artificial Intelligence," Journal of Digital Convergence, Vol.14, No.12 (2016), 173-182.
2.
홍택은, 신주현, "SNS Following Recommendation Method Based on Category Classification of Image and Text Information," Smart Media Journal, Vol.5, No.3 (2016), 54-61.
3.
신동하, 김창복, "A Study on Deep Learning Input Patterns for Summer Power Demand Forecasting," Journal of Korean Institute of Information Technology, Vol.14, No.11 (2016), 127-134.