• Title/Abstract/Keyword: discriminant function

Search results: 332

세 집단 판별분석 상황에서의 영향함수 유도 및 그 응용 (Derivation and Application of Influence Function in Discriminant Analysis for Three Groups)

  • 이혜정;김홍기
    • 응용통계연구, Vol. 24 No. 5, pp.941-949, 2011
  • This paper aims to detect outliers that affect the misclassification probability computed when discriminant analysis is performed on exactly three groups, and presents a simple, easily applicable influence-function formula. Using the proposed formula, the three Sasang constitution types were classified from facial data, and the influence function for the misclassification probability was computed for each observation. We confirmed that using the influence function for the misclassification probability is an efficient way to identify outliers to remove before repeating the discriminant analysis. (A numerical sketch of this idea follows below.)
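
The paper's closed-form influence expressions are not reproduced in this abstract. As a rough, hedged illustration of the underlying idea, the sketch below estimates each observation's empirical influence on the apparent misclassification rate of a three-group linear discriminant analysis by case deletion; the synthetic groups and scikit-learn's LinearDiscriminantAnalysis are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch: empirical (case-deletion) influence of each observation on the
# apparent misclassification rate of a three-group LDA. A numerical stand-in for
# the influence-function idea, NOT the paper's closed-form result.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Three synthetic groups (stand-ins for the three Sasang constitution types).
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(30, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)

def apparent_error(X, y):
    lda = LinearDiscriminantAnalysis().fit(X, y)
    return np.mean(lda.predict(X) != y)

base_err = apparent_error(X, y)
n = len(y)
# Finite-sample influence: scaled change in error when observation i is deleted.
influence = np.array([
    n * (apparent_error(np.delete(X, i, axis=0), np.delete(y, i)) - base_err)
    for i in range(n)
])
print("baseline apparent error:", base_err)
# Large negative values flag cases whose removal most reduces the error, i.e. outlier candidates.
print("outlier candidates:", np.argsort(influence)[:5])
```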

한국 남성의 얼굴 피부색 판별을 위한 색채 변수에 관한 연구 (A Study on the Discriminant Variables of Face Skin Colors for the Korean Males)

  • 김구자
    • 한국의류학회지, Vol. 29 No. 7, pp.959-967, 2005
  • The colors of apparel interact with the face skin color of the wearer. This study was carried out to classify the face skin colors of Korean males into groups of similar skin colors, in order to extract favorable colors that flatter each skin color type and to decide a criterion for selecting new subjects belonging to the classified groups. Face skin colors were measured quantitatively with a JX-777 color spectrometer and classified into three clusters of similar hue, value, and chroma in the Munsell Color System. The sample comprised 418 Korean males plus 15 additional new male subjects. Data were analyzed by K-means cluster analysis, ANOVA, Duncan's multiple range test, and stepwise discriminant analysis using SPSS Win 12. Findings were as follows: 1. The 418 subjects, all with YR skin colors, were clustered into three face skin color groups. 2. Four variables discriminated the face skin colors: L value of forehead, v value of cheek, c value of forehead, and b value of cheek for standardized canonical discriminant function coefficient 1, and c value of forehead, L value of forehead, b value of cheek, and L value of cheek for standardized canonical discriminant function coefficient 2. 3. With the canonical discriminant function of these four variables, the hit ratio was 92.3% for type 1, 96.5% for type 2, and 92.6% for type 3. 4. Canonical discriminant function equations 1 and 2 were calculated from the unstandardized canonical discriminant function coefficients and constants, and the cutting scores and score ranges were computed. 5. The criterion for selecting new subjects belonging to the classified face skin color types was thereby decided. (A minimal sketch of this cluster-then-discriminate workflow follows below.)
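
As a hedged illustration of the workflow described above (cluster the color measurements, then build a canonical discriminant function and report per-type hit ratios), the sketch below uses synthetic color-like measurements; the column names are placeholders and scikit-learn's LDA stands in for the SPSS stepwise procedure, so this is not the paper's analysis.

```python
# Hedged sketch of the cluster-then-discriminate workflow: K-means clustering of
# color measurements, then a linear (canonical) discriminant function and per-type
# hit ratios. Synthetic data; not the paper's measurements or software.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Columns: forehead_L, forehead_c, cheek_v, cheek_b (illustrative placeholders).
X = rng.normal(loc=[65, 20, 55, 18], scale=[6, 3, 5, 2], size=(418, 4))

types = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, types)   # two canonical functions
pred = lda.predict(X)

for k in range(3):
    mask = types == k
    print(f"type {k + 1}: hit ratio = {np.mean(pred[mask] == k):.1%}")
```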

한국 여성의 얼굴 피부색 판별을 위한 색채 변수에 관한 연구 (A Study on the Discriminant Variables of Face Skin Colors for the Korean Females)

  • 김구자;정혜원
    • 한국의류학회지, Vol. 29 No. 7, pp.978-986, 2005
  • The colors of apparel products are closely related to the face skin colors of consumers. In order to extract favorable colors that flatter a consumer's face skin color, this study was carried out to classify the face skin colors of Korean females and to decide criteria for selecting new subjects belonging to the classified groups. Face skin colors were measured with a JX-777 color spectrometer and classified into three clusters of similar hue, value, and chroma in the Munsell Color System. The sample comprised 324 Korean females plus 10 additional new college students. Data were analyzed by K-means cluster analysis, ANOVA, Duncan's multiple range test, and stepwise discriminant analysis using SPSS Win 12. Findings were as follows: 1. The 324 subjects, all with YR skin colors, were clustered into three face skin color groups. 2. Five variables discriminated the face skin colors: b value of cheek, V value of forehead, L value of cheek, C value of forehead, and H value of cheek, by standardized canonical discriminant function coefficient 1. 3. With the canonical discriminant function of these five variables, the hit ratio was 96.8% for type 1, 94.9% for type 2, and 100.0% for type 3, for a mean hit ratio of 96.9%. 4. Canonical discriminant function equations 1 and 2 were calculated from the unstandardized canonical discriminant function coefficients and constants, and the cutting scores and score ranges of the classified types were computed, which decided the criteria for selecting new subjects. (A sketch of classifying a new subject with a cutting score follows below.)
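
The actual coefficients and cutting scores are not given in this abstract. As a hedged sketch of the final step only (scoring a new subject with an unstandardized canonical discriminant function and assigning a type by cutting scores), the fragment below uses invented coefficients, constant, and thresholds purely for illustration.

```python
# Hedged sketch: assigning a new subject to a skin color type from a canonical
# discriminant score and cutting scores. All numbers are invented placeholders,
# not the values reported in the paper.
import numpy as np

# Unstandardized canonical discriminant function 1 (illustrative values only);
# variables: cheek_b, forehead_V, cheek_L, forehead_C, cheek_H.
coef = np.array([0.12, -0.35, 0.08, 0.27, 0.05])
constant = -4.1
cut_1_2, cut_2_3 = -0.6, 0.9          # cutting scores between adjacent types

def classify(measurements):
    """Return (type, score) for a new subject's raw color measurements."""
    score = float(coef @ np.asarray(measurements) + constant)
    if score < cut_1_2:
        return 1, score
    if score < cut_2_3:
        return 2, score
    return 3, score

print(classify([17.5, 6.2, 61.0, 19.8, 4.0]))
```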

Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui;Tsenkova, Roumiana;Yu, Ru-Qin;Ozaki, Yukihiro
    • 한국근적외분광분석학회 학술대회논문집, NIR-2001, pp.1244-1244, 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes. The separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables. One can optimize the stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, exposing it to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability such that the prediction variance for unclassified objects can be as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed. Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications. (A hedged numerical sketch of the separability/stability trade-off follows this entry.)

  • PDF
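
The three PDV formulations themselves are not given in this abstract. As a hedged numerical sketch of the trade-off it describes, the code below compares a Fisher discriminant fit on raw, highly collinear "spectra" with one fit inside a principal-variation subspace (a plain PCA-then-LDA pipeline); this pipeline only illustrates the separability/stability balance and is not the authors' PDV method.

```python
# Hedged sketch of the separability/stability trade-off: LDA on raw multicollinear
# data versus LDA restricted to a principal-variation subspace (PCA + LDA).
# Illustrative only; NOT the paper's PDV formulations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, p = 80, 200                                   # few samples, many collinear channels
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = (latent[:, 0] > 0).astype(int)               # "mastitic" vs "healthy" stand-in labels

raw_lda = LinearDiscriminantAnalysis()                               # unstable when p >> n
pca_lda = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())

for name, model in [("raw LDA", raw_lda), ("PCA + LDA", pca_lda)]:
    acc = cross_val_score(model, X, y, cv=5)
    print(f"{name}: CV accuracy {acc.mean():.2f} +/- {acc.std():.2f}")
```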

PRINCIPAL DISCRIMINANT VARIATE (PDV) METHOD FOR CLASSIFICATION OF MULTICOLLINEAR DATA WITH APPLICATION TO NEAR-INFRARED SPECTRA OF COW PLASMA SAMPLES

  • Jiang, Jian-Hui;Yuqing Wu;Yu, Ru-Qin;Yukihiro Ozaki
    • 한국근적외분광분석학회 학술대회논문집, NIR-2001, pp.1042-1042, 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes. The separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables. One can optimize the stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, exposing it to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability such that the prediction variance for unclassified objects can be as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed. Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from the two cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences.

  • PDF

관능특성 및 판별함수를 이용한 한우고기 맛 등급 분석 (Palatability Grading Analysis of Hanwoo Beef using Sensory Properties and Discriminant Analysis)

  • 조수현;서그러운달님;김동훈;김재희
    • 한국축산식품학회지, Vol. 29 No. 1, pp.132-139, 2009
  • This study compared discriminant analysis methods for classifying beef palatability grades, using Hanwoo beef data evaluated by 1,300 consumers through actual tasting. Canonical discriminant analysis was applied using the three key sensory variables of tenderness, juiciness, and flavor, while linear discriminant analysis and nonparametric discriminant analysis were applied using only overall acceptability, generally regarded as the representative palatability variable. When only a single variable such as overall acceptability was used, both methods gave similar classification rates; the linear discriminant function had advantages in interpretability and ease of use, whereas the nonparametric method required inconvenient choices of kernel function and bandwidth but, with good choices, could yield higher classification accuracy. However, discriminant analysis based on a single variable fails to exploit the information in other important variables that affect discrimination. In contrast, the misclassification rate of the canonical discriminant function was not much worse than those of the univariate linear and nonparametric discriminant functions, it required no particular distributional assumptions, and it used all three variables important to palatability (tenderness, juiciness, and flavor), thus making maximal use of the taste information. Therefore, multivariate canonical discriminant analysis incorporating all three variables was judged to be the most appropriate method for classifying palatability grades. (A hedged sketch comparing these approaches follows below.)
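
As a hedged sketch of the comparison summarized above, the code below contrasts a canonical (linear) discriminant built on three sensory variables with single-variable linear and kernel-based rules on overall acceptability; the synthetic scores, grade structure, and kernel bandwidth are assumptions, not the consumer data of the study.

```python
# Hedged sketch: multivariate canonical LDA on tenderness/juiciness/flavor versus
# univariate linear and kernel-density rules on overall acceptability. Synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
grades = np.repeat([0, 1, 2], 200)                     # low / medium / high palatability
sensory = np.vstack([rng.normal(loc=3 + g, scale=1.0, size=(200, 3)) for g in range(3)])
overall = sensory.mean(axis=1, keepdims=True) + rng.normal(scale=0.7, size=(len(grades), 1))

print("canonical LDA (3 variables):",
      round(cross_val_score(LinearDiscriminantAnalysis(), sensory, grades, cv=5).mean(), 2))
print("univariate LDA (overall only):",
      round(cross_val_score(LinearDiscriminantAnalysis(), overall, grades, cv=5).mean(), 2))

# Nonparametric rule on the single variable: per-grade kernel density estimates,
# classify to the grade with the highest density (bandwidth chosen by hand here).
kdes = [KernelDensity(bandwidth=0.4).fit(overall[grades == g]) for g in range(3)]
log_dens = np.column_stack([k.score_samples(overall) for k in kdes])
print("kernel rule (overall only, apparent accuracy):",
      round(float(np.mean(log_dens.argmax(axis=1) == grades)), 2))
```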

On Testing Fisher's Linear Discriminant Function When Covariance Matrices Are Unequal

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society, Vol. 22 No. 2, pp.325-337, 1993
  • This paper proposes two test statistics that enable variable selection in Fisher's linear discriminant function for the case of heterogeneous discrimination with equal training sample sizes. Simultaneous confidence intervals associated with the tests are also given. The results are exact and approximate, respectively; the latter is based upon an approximation of a linear sum of Wishart distributions with unequal scale matrices. Using simulated sampling experiments, the powers of the two tests have been tabulated and compared. (A sketch of the Fisher linear discriminant function studied here follows this entry.)

  • PDF
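
The paper's test statistics for the heteroscedastic case are not reproduced in this abstract. As a brief, hedged sketch of the object under study, the code below computes the classical two-group Fisher linear discriminant function w = S_pooled^{-1}(x̄1 - x̄2) with the midpoint classification rule, on simulated data with equal training sample sizes.

```python
# Hedged sketch of the classical Fisher linear discriminant function and midpoint rule.
# The paper's variable-selection tests under unequal covariances are not shown here.
import numpy as np

rng = np.random.default_rng(4)
n1 = n2 = 50                                         # equal training sample sizes
X1 = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=n1)
X2 = rng.multivariate_normal([1.0, 0.5, 0.0], np.eye(3), size=n2)

xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False) +
            (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)

w = np.linalg.solve(S_pooled, xbar1 - xbar2)         # discriminant coefficients
cutoff = w @ (xbar1 + xbar2) / 2                     # midpoint rule (equal priors)

def classify(x):
    """Assign x to group 1 if its discriminant score exceeds the midpoint cutoff."""
    return 1 if w @ x > cutoff else 2

print("coefficients:", np.round(w, 3), "cutoff:", round(float(cutoff), 3))
print("new case ->", classify(rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3))))
```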

기계시각을 이용한 박피 마늘 선별 알고리즘 개발 (I) - 베이즈 판별함수와 신경회로망에 의한 선별 정확도 비교 - (Development of Algorithms for Sorting Peeled Garlic Using Machine Vision (I) - Comparison of sorting accuracy between Bayes discriminant function and neural network -)

  • 이상엽;이수희;노상하;배영환
    • Journal of Biosystems Engineering, Vol. 24 No. 4, pp.325-334, 1999
  • The aim of this study was to lay the groundwork for developing a machine-vision system for sorting peeled garlic. Images of various garlic samples (sound, partially defective, discolored, rotten, and un-peeled) were obtained with a B/W machine vision system. Sorting factors based on normalized histograms and statistical analysis (the STEPDISC method) showed good separability for the various garlic samples. Bayes discriminant function and neural network sorting algorithms were developed from the sample images and tested on various garlic samples. The garlic samples were classified with average sorting accuracies of 88.4% by the Bayes discriminant function and 93.2% by the neural network. (A hedged sketch of the two classifiers follows this entry.)

  • PDF
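
As a hedged sketch of the comparison reported above, the code below pits a Gaussian Bayes discriminant rule (quadratic discriminant analysis) against a small neural network on synthetic, histogram-like image features; the five class names and the feature construction are illustrative assumptions, not the paper's machine-vision measurements.

```python
# Hedged sketch: Gaussian Bayes discriminant rule versus a small neural network on
# synthetic image-derived features for five garlic classes. Illustrative data only.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
classes = ["sound", "defective", "discolored", "rotten", "un-peeled"]
# Six synthetic grayscale-histogram features per sample, shifted per class.
X = np.vstack([rng.normal(loc=i, scale=1.2, size=(100, 6)) for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

bayes = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)     # Gaussian Bayes rule
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=5).fit(X_tr, y_tr)

print("Bayes discriminant accuracy:", round(bayes.score(X_te, y_te), 3))
print("neural network accuracy:   ", round(mlp.score(X_te, y_te), 3))
```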

기술금융을 위한 부실 가능성 예측 최적 판별모형에 대한 연구 (A Study on the Optimal Discriminant Model Predicting the likelihood of Insolvency for Technology Financing)

  • 성웅현
    • 기술혁신학회지, Vol. 10 No. 2, pp.183-205, 2007
  • This study developed and proposed an optimal discriminant model that can predict in advance the likelihood of insolvency of small and medium-sized enterprises on the basis of technology evaluation. The explanatory variables included in the discriminant model were selected by factor analysis and by stepwise selection within the discriminant model. The analysis showed that the linear discriminant model was more appropriate than the logistic discriminant model in terms of the critical probability. The optimal linear discriminant model had a classification accuracy of 70.4% and a predictive classification accuracy of 67.5%. To increase the usefulness of the optimal linear discriminant model, boundary values were set to separate the definite categories from a reserved (indeterminate) category. Technology financing institutions are expected to find the results useful for assessing insolvency risk and for ranking firms applying for technology financing. (A sketch of a discriminant score with a reserved band follows this entry.)

  • PDF
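
The fitted model itself is not given in this abstract. As a hedged sketch of the scoring scheme it describes (a linear discriminant score with boundary values separating the definite categories from a reserved category), the fragment below uses invented coefficients and cutoffs purely for illustration.

```python
# Hedged sketch: linear discriminant score with a "reserved" band between two cutoffs.
# Coefficients, intercept, and cutoffs are invented placeholders, not the fitted model.
import numpy as np

coef = np.array([0.8, -0.5, 0.3])        # illustrative weights for technology-evaluation factors
intercept = -0.2
lower_cut, upper_cut = -0.4, 0.4         # scores in between are deferred for review

def screen(factors):
    """Return 'sound', 'insolvency risk', or 'reserved' plus the discriminant score."""
    score = float(coef @ np.asarray(factors) + intercept)
    if score >= upper_cut:
        return "sound", score
    if score <= lower_cut:
        return "insolvency risk", score
    return "reserved", score

for applicant in ([1.2, 0.3, 0.5], [0.1, 0.9, 0.2], [0.5, 0.4, 0.3]):
    print(applicant, "->", screen(applicant))
```

Ranking applicants by the score itself gives the ordering of applicant firms mentioned in the abstract.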

Local Influence Assessment of the Misclassification Probability in Multiple Discriminant Analysis

  • Jung, Kang-Mo
    • Journal of the Korean Statistical Society, Vol. 27 No. 4, pp.471-483, 1998
  • The influence of observations on the misclassification probability in multiple discriminant analysis under the equal-covariance assumption is investigated by the local influence method. Under an appropriate perturbation, information about influential observations and outliers can be obtained by studying the curvatures and the associated direction vectors of the surface of the misclassification probability formed by the perturbation. We show that the influence function method gives essentially the same information as the direction vector of the maximum slope. An illustrative example demonstrates the effectiveness of the local influence method. (A numerical perturbation sketch follows this entry.)

  • PDF
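
As a rough, hedged numerical companion to this abstract, the sketch below perturbs individual case weights and takes finite-difference slopes of a normal-theory estimate of the misclassification probability; this is only a numerical stand-in for the paper's analytic curvature and direction-vector results.

```python
# Hedged sketch: finite-difference slope of an estimated misclassification probability
# Phi(-Delta/2) with respect to each case weight, a numerical analogue of local influence.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
X1 = rng.normal(loc=0.0, size=(40, 3))
X2 = rng.normal(loc=1.0, size=(40, 3))

def misclass_prob(w1, w2):
    """Normal-theory error estimate from case-weighted means and an averaged weighted covariance."""
    m1 = np.average(X1, axis=0, weights=w1)
    m2 = np.average(X2, axis=0, weights=w2)
    S = (np.cov(X1, rowvar=False, aweights=w1) + np.cov(X2, rowvar=False, aweights=w2)) / 2
    delta = np.sqrt((m1 - m2) @ np.linalg.solve(S, m1 - m2))
    return norm.cdf(-delta / 2)

w1, w2 = np.ones(len(X1)), np.ones(len(X2))
base, eps = misclass_prob(w1, w2), 1e-4
slopes = []
for i in range(len(X1)):                  # perturb each group-1 case weight in turn
    w = w1.copy()
    w[i] += eps
    slopes.append((misclass_prob(w, w2) - base) / eps)

print("baseline error estimate:", round(float(base), 4))
print("most locally influential group-1 cases:", np.argsort(-np.abs(np.array(slopes)))[:5])
```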