 Title & Authors
Optimal Criterion of Classification Accuracy Measures for Normal Mixture
Yoo, Hyun-Sang; Hong, Chong-Sun;
 Abstract
For data assumed to follow a mixture distribution, it is important to find an appropriate threshold and to evaluate its performance. We establish the relationships among nine well-known classification accuracy measures: MVD, Youden's index, the closest-to-(0, 1) criterion, the amended closest-to-(0, 1) criterion, SSS, the symmetry point, the accuracy area, TA, and TR. Conditions on these measures are then categorized into seven groups. Under the normal mixture assumption, we calculate the threshold corresponding to each measure and obtain the resulting type I and type II errors. By exploring which classification measure attains the smallest type I and type II errors for an estimated mixture distribution, we can understand the strengths and weaknesses of these measures.
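Under the normal mixture assumption, thresholds such as Youden's index and the closest-to-(0, 1) criterion can be computed directly from the component CDFs. A minimal Python sketch, assuming a hypothetical equal-variance mixture (non-diseased N(0, 1), diseased N(2, 1) are illustrative values, not from the paper):

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def sens_spec(c, mu0, s0, mu1, s1):
    """Sensitivity and specificity at threshold c for two normal components."""
    spec = norm_cdf(c, mu0, s0)        # P(X0 <= c): non-diseased classified negative
    sens = 1.0 - norm_cdf(c, mu1, s1)  # P(X1 > c): diseased classified positive
    return sens, spec

def youden_j(c, *params):
    sens, spec = sens_spec(c, *params)
    return sens + spec - 1.0           # vertical distance above the ROC chance line

def dist_to_01(c, *params):
    sens, spec = sens_spec(c, *params)
    return math.hypot(1.0 - sens, 1.0 - spec)  # distance from ROC point to (0, 1)

def grid_opt(f, lo, hi, n=20001, maximize=True):
    """Crude grid search for the optimal threshold on [lo, hi]."""
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(grid, key=f) if maximize else min(grid, key=f)

# Hypothetical mixture: non-diseased ~ N(0, 1), diseased ~ N(2, 1)
params = (0.0, 1.0, 2.0, 1.0)
c_youden = grid_opt(lambda c: youden_j(c, *params), -4.0, 6.0)
c_closest = grid_opt(lambda c: dist_to_01(c, *params), -4.0, 6.0, maximize=False)

# Type I error (alpha) and type II error (beta) at the chosen threshold
sens, spec = sens_spec(c_youden, *params)
alpha, beta = 1.0 - spec, 1.0 - sens
```

With equal variances both criteria select the midpoint (mu0 + mu1)/2; when the variances differ they generally disagree, which is one source of the inconsistency of cutpoint criteria discussed in the Perkins and Schisterman (2006) reference below.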
 Keywords
Accuracy; classification; discrimination; error; sensitivity; specificity
 Language
Korean
 Cited by
1.
Optimal thresholds criteria for ROC surfaces, Journal of the Korean Data and Information Science Society, 2013, 24(6), 1489-1496.

2.
Alternative accuracy for multiple ROC analysis, Journal of the Korean Data and Information Science Society, 2014, 25(6), 1521-1530.

3.
Hong, C. S., Kim, H. M. and Kim, D. K., Alternative optimal threshold criteria: MFR, Korean Journal of Applied Statistics, 2014, 27(5), 773-786.
 References
1.
Hong, C. S., Joo, J. S. and Choi, J. S. (2010). Optimal thresholds from mixture distributions, Korean Journal of Applied Statistics, 23, 13-28.

2.
Hong, C. S. and Choi, J. S. (2009). Optimal thresholds from ROC and CAP curves, Korean Journal of Applied Statistics, 22, 911-921.

3.
Brasil, P. (2010). Diagnostic test accuracy evaluation for medical professionals, package DiagnosisMed in R.

4.
Cantor, S. B. and Kattan, M. W. (2000). Determining the area under the ROC curve for a binary diagnostic test, Medical Decision Making, 20, 468-470.

5.
Cantor, S. B., Sun, C. C., Tortolero-Luna, G., Richards-Kortum, R. and Follen, M. (1999). A comparison of C/B ratios from studies using receiver operating characteristic curve analysis, Journal of Clinical Epidemiology, 52, 885-892.

6.
Connell, F. A. and Koepsell, T. D. (1985). Measures of gain in certainty from a diagnostic test, American Journal of Epidemiology, 121, 744-753.

7.
Engelmann, B., Hayden, E. and Tasche, D. (2003). Measuring the discriminative power of rating systems, Deutsche Bundesbank Discussion Paper, Series 2: Banking and Financial Supervision, No. 01.

8.
Fawcett, T. (2003). ROC graphs: Notes and practical considerations for data mining researchers, Technical report, HP Laboratories, Palo Alto, CA.

9.
Feinstein, A. R. (2002). Principles of Medical Statistics, Chapman & Hall/CRC, Boca Raton, FL.

10.
Finley, J. P. (1884). Tornado predictions, American Meteorological Journal, 1, 85-88.

11.
Freeman, E. A. and Moisen, G. G. (2008). A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and kappa, Ecological Modelling, 217, 48-58.

12.
Greiner, M. and Gardner, I. A. (2000). Epidemiologic issues in the validation of veterinary diagnostic tests, Preventive Veterinary Medicine, 45, 3-22.

13.
Krzanowski, W. J. and Hand, D. J. (2009). ROC Curves for Continuous Data, Chapman & Hall/CRC, Boca Raton, FL.

14.
Lambert, J. and Lipkovich, I. (2008). A macro for getting more out of your ROC curve, SAS Global Forum, 231.

15.
Liu, C., White, M. and Newell, G. (2009). Measuring the accuracy of species distribution models: a review, 18th World IMACS/MODSIM Congress, http://mssanz.org.au/modsim09.

16.
Moses, L. E., Shapiro, D. and Littenberg, B. (1993). Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations, Statistics in Medicine, 12, 1293-1316.

17.
Pepe, M. S. (2003). The Statistical Evaluation of Medical Tests for Classification and Prediction, Oxford University Press, Oxford.

18.
Perkins, N. J. and Schisterman, E. F. (2006). The inconsistency of "optimal" cutpoints obtained using two criteria based on the receiver operating characteristic curve, American Journal of Epidemiology, 163, 670-675.

19.
Provost, F. and Fawcett, T. (2001). Robust classification for imprecise environments, Machine Learning, 42, 203-231.

20.
Sobehart, J. R. and Keenan, S. C. (2001). Measuring default accurately, Credit Risk Special Report, Risk, 14, 31-33.

21.
Tasche, D. (2006). Validation of internal rating systems and PD estimates, arXiv.org, eprint arXiv:physics/0606071.

22.
Velez, D. R., White, B. C., Motsinger, A. A., Bush, W. S., Ritchie, M. D., Williams, S. M. and Moore, J. H. (2007). A balanced accuracy function for epistasis modeling in imbalanced datasets using multifactor dimensionality reduction, Genetic Epidemiology, 31, 306-315.

23.
Ward, C. D. (1986). The differential positive rate, a derivative of receiver operating characteristic curves useful in comparing tests and determining decision levels, Clinical Chemistry, 32, 1428-1429.

24.
Youden, W. J. (1950). Index for rating diagnostic tests, Cancer, 3, 32-35.

25.
Zhou, X. H., Obuchowski, N. A. and McClish, D. K. (2002). Statistical Methods in Diagnostic Medicine, Wiley, New York.