Classification algorithm using characteristics of EBP and OVSSA

  • Jong-Chan Lee (Dept. of Internet, Chungwoon University)
  • Received : 2017.12.13
  • Reviewed : 2018.02.20
  • Published : 2018.02.28

Abstract

This paper is based on the simple observation that the most efficient way to train a multi-layer network is, in the end, a search for the optimal set of weight vectors. To overcome the shortcomings of conventional learning, the proposed model combines the characteristics of EBP (error back-propagation) and OVSSA, taking only the strengths of each algorithm to build a single model: the stochastic mechanism of OVSSA is used to escape the local minima into which EBP can fall. In the proposed algorithm, the error measure that EBP seeks to reduce serves as an energy function, and this energy is minimized with OVSSA. A simple experiment confirms that these two algorithms with different properties can indeed be combined.
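The combination described in the abstract (EBP's error measure used as an energy function, minimized by a stochastic annealing search that can escape local minima) can be sketched as follows. The paper does not spell out OVSSA's details here, so the loop below is a generic Metropolis-style simulated-annealing stand-in, and the tiny fixed-topology network, its weight layout, and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
import math
import random

def mlp_energy(weights, samples):
    """EBP-style energy: summed squared output error of a small 2-2-1 network.

    `weights` packs the hidden and output parameters of a fixed-topology
    network; this layout is an assumption for illustration only.
    """
    w = weights
    e = 0.0
    for x1, x2, t in samples:
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        y = math.tanh(w[6] * h1 + w[7] * h2 + w[8])
        e += (t - y) ** 2
    return e

def anneal(samples, steps=20000, t0=1.0, cooling=0.999, seed=0):
    """Minimize the EBP energy with a Metropolis-style annealing loop."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(9)]
    e = mlp_energy(w, samples)
    best_w, best_e = list(w), e
    temp = t0
    for _ in range(steps):
        # Perturb one randomly chosen weight (the candidate "move").
        cand = list(w)
        cand[rng.randrange(len(w))] += rng.gauss(0.0, 0.5)
        e_cand = mlp_energy(cand, samples)
        # Always accept downhill moves; accept uphill moves with Boltzmann
        # probability, which is what lets the search leave a local minimum
        # of the error surface, unlike plain gradient-descent EBP.
        if e_cand < e or rng.random() < math.exp((e - e_cand) / temp):
            w, e = cand, e_cand
            if e < best_e:
                best_w, best_e = list(w), e
        temp *= cooling  # geometric cooling schedule
    return best_w, best_e

# Toy task: XOR, a classic case where a single-layer model fails.
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

At high temperature the search moves almost freely over the weight space; as the temperature decays it behaves more and more like the greedy error reduction of EBP, which mirrors the division of labor the abstract describes.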

Keywords

References

  1. D. E. Rumelhart, G. E. Hinton & R. J. Williams. (1986). Learning internal representations by error propagation. Parallel Distributed Processing, I, 318-362.
  2. P. D. Wasserman. (1990). A combined back-propagation/Cauchy machine network. Journal of Neural Network Computing, 34-40.
  3. Y. LeCun, Y. Bengio & G. Hinton. (2015, May). Deep learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539
  4. G. Hinton & R. Salakhutdinov. (2006, July). Reducing the dimensionality of data with neural networks. Science, 313.
  5. V. Nair & G. E. Hinton. (2010). Rectified linear units improve restricted Boltzmann machines. International Conference on Machine Learning.
  6. M. Ranzato & M. Szummer. (2008). Semi-supervised learning of compact document representations with deep networks. International Conference on Machine Learning, 792-799.
  7. J. Schmidhuber. (2015). Deep learning in neural networks: An overview. Neural Networks, 1-88.
  8. J. C. Lee & W. D. Lee. (1994). Pattern classification model based on an optimization tool. International Conference on Neural Information Processing, 1744-1748.
  9. H. Jeong. (1988, Oct). Learning scheme for neural networks by simulated annealing with back-propagation. Workshop for Information Science Society, Korean Federation of Science and Technology Societies, 15-20.
  10. N. Baba & M. Kozaki. (1992, June). An intelligent forecasting system of stock price using neural networks. IJCNN, I, 371-377.
  11. K. Lee, K. Cho, W. Lee & S. Lee. (1992, June). Mean field annealing with continuous variables and its application to the quantification analysis problem. IJCNN, II, 431-435.
  12. M. Kim, H. Choi & W. D. Lee. (1992, June). Fuzzy clustering using extended MFA for continuous valued state space. IJCNN, II, 733-738.
  13. G. Wang & N. Ansari. (1997). Optimal broadcast scheduling in packet radio networks using mean field annealing. IEEE Journal on Selected Areas in Communications, 15(2).
  14. G. D. Kim & Y. H. Kim. (2017). A survey on oil spill and weather forecast using machine learning based on neural networks and statistical methods. Journal of the Korea Convergence Society, 8(10), 1-8. https://doi.org/10.15207/JKCS.2017.8.10.001
  15. Y. D. Yun, Y. W. Yang, H. S. Ji & H. S. Lim. (2017). Development of smart senior classification model based on activity profile using machine learning method. Journal of the Korea Convergence Society, 8(1), 25-34. https://doi.org/10.15207/JKCS.2017.8.1.025