On Robust Principal Component Analysis using Neural Networks

(A Study on Robust Principal Component Analysis Using Neural Networks)

  • Kim, Sang-Min (Dept. of Computer Information Processing, Kimchun Junior College) ;
  • Oh, Kwang-Sik (Dept. of Statistics, Catholic University of Taegu-Hyosung) ;
  • Park, Hee-Joo (Dept. of Computer Science, Kyungbook Sanup University)
  • Published : 1996.05.30


Principal component analysis (PCA) is an essential technique for data compression and feature extraction, and has been widely used in statistical data analysis, communication theory, pattern recognition, and image processing. Oja (1992) showed that a linear neuron with a constrained Hebbian learning rule can extract the principal component by the stochastic gradient ascent method. In practice, real data often contain outliers, which significantly deteriorate the performance of PCA algorithms. To make PCA robust, Xu & Yuille (1995) applied statistical physics to the problem of robust principal component analysis (RPCA). Devlin et al. (1981) obtained principal components using techniques such as M-estimation. The purpose of this paper is to investigate, from a statistical point of view, how Xu & Yuille's (1995) RPCA works under the same simulation conditions as in Devlin et al. (1981).
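As background for the learning rule the abstract refers to, the following is a minimal sketch of Oja's constrained Hebbian rule for a single linear neuron on synthetic Gaussian data. The covariance matrix, sample size, and learning rate here are assumptions for illustration only, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with a dominant principal direction (assumed example).
n = 5000
cov = np.array([[3.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Oja's rule: w <- w + eta * y * (x - y * w), with neuron output y = w.x.
# The subtractive term -eta * y^2 * w implements the norm constraint,
# keeping ||w|| near 1 so the weight vector converges to the first
# principal eigenvector rather than diverging under pure Hebbian growth.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01  # learning rate (assumed value for this sketch)
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# Compare with the leading eigenvector of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
v = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
cos = abs(w @ v) / np.linalg.norm(w)
print(cos)  # near 1 when the neuron has converged to the principal axis
```

Because each update weights the residual by the raw output y, a single large outlier can move w substantially, which is the sensitivity that motivates the robust variants (M-estimation, Xu & Yuille's statistical-physics formulation) compared in the paper.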