REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Korean Journal of Applied Statistics
Journal Basic Information
Publisher : The Korean Statistical Society
Volume & Issues
Volume 24, Issue 6 - Dec 2011
Volume 24, Issue 5 - Oct 2011
Volume 24, Issue 4 - Aug 2011
Volume 24, Issue 3 - Jun 2011
Volume 24, Issue 2 - Apr 2011
Volume 24, Issue 1 - Feb 2011
Spectral Analysis Accompanied with Seasonal Linear Model as Applied to Intra-Day Call Prediction
Shin, Taek-Soo ; Kim, Myung-Suk ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 217~225
DOI : 10.5351/KJAS.2011.24.2.217
In this paper, a seasonal variable selection method using spectral analysis accompanied with a seasonal linear model is suggested. The suggested method is applied to the prediction of intra-day call arrivals at a large North American commercial bank call center, and a significant intra-month seasonal variable is detected. This newly detected seasonal factor is included in the seasonal linear model, which is compared with seasonal linear models without this variable to see whether the new variable helps to improve the forecasting performance. The seasonal linear model with the new variable outperformed the models without it in one-day-ahead forecasting.
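The detection step described above, scanning the periodogram for a dominant cycle, can be sketched as follows. The synthetic "call volume" series, the period-7 cycle, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import cmath
import math
import random

def periodogram(x):
    """Raw periodogram I(f_j) = |DFT(x)_j|^2 / n at the Fourier frequencies j/n."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]  # remove the mean so f = 0 does not dominate
    out = []
    for j in range(1, n // 2 + 1):
        s = sum(xc[t] * cmath.exp(-2j * math.pi * j * t / n) for t in range(n))
        out.append((j / n, abs(s) ** 2 / n))
    return out

# Hypothetical daily call counts with a weekly (period-7) cycle plus noise.
random.seed(1)
n = 140
calls = [100 + 10 * math.sin(2 * math.pi * t / 7) + random.gauss(0, 1) for t in range(n)]

peak_freq = max(periodogram(calls), key=lambda p: p[1])[0]
print(round(1 / peak_freq, 1))  # dominant period in days -> 7.0
```

A frequency whose ordinate stands far above the noise floor flags a candidate seasonal variable to add to the linear model, which is the spirit of the selection procedure.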
GMM Estimation for Seasonal Cointegration
Park, Suk-Kyung ; Cho, Sin-Sup ; Seon, Byeong-Chan ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 227~237
DOI : 10.5351/KJAS.2011.24.2.227
This paper considers a generalized method of moments (GMM) estimation for seasonal cointegration as an extension of Kleibergen (1999). We propose two iterative methods for the estimation, according to whether the parameters in the model are estimated simultaneously or not. It is shown that the GMM estimator coincides in form with a maximum likelihood estimator or a feasible two-step estimator. In addition, we derive its asymptotic distribution, which takes the same form as that in Ahn and Reinsel (1994).
Test and Analysis for Comovement-Locomotive Hypothesis
Kim, Tae-Ho ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 239~251
DOI : 10.5351/KJAS.2011.24.2.239
The need for statistical analysis to discern the existence and the type of international business comovement has increased, as business and economic variations in one country are directly transmitted to business and financial market conditions in another without a long lag. This study performs statistical tests of the locomotive hypothesis to understand the structural character of the long-run mechanism among Korean and US current and future business movements and the domestic stock market. The US future business prospect, rather than the US current and the domestic current and future business conditions, appears to significantly affect the domestic stock market movement.
Empirical Analysis on the Stress Test Using Credit Migration Matrix
Kim, Woo-Hwan ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 253~268
DOI : 10.5351/KJAS.2011.24.2.253
In this paper, we estimate systematic risk from credit migration (or transition) matrices under the "Asymptotic Single Risk Factor" (ASRF) model. We analyzed transition matrices issued by KR (Korea Ratings) and concluded that the systematic risk implied by credit migration roughly coincides with the real economic cycle. In particular, we found that the systematic risk implied by credit migration is a better indicator than that implied by the default rate. We also show how to conduct a stress test using the systematic risk extracted from the transition matrices. We argue that the method proposed in this paper is better than the usual method that considers only the conditional probability of default (PD). We found that the expected loss increases critically when we explicitly consider the change of credit quality in a given portfolio, compared to the method considering only the PD.
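The ASRF machinery behind such a stress test can be sketched with the standard Vasicek conditional-PD formula. The unconditional PD, the asset correlation rho, the chosen stress level z, and the sign convention (negative z = downturn) are all illustrative assumptions, not the paper's calibration.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (accurate enough for a sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def stressed_pd(pd, rho, z):
    """Conditional PD under the ASRF model given systematic factor Z = z:
    PD(z) = Phi((Phi^-1(pd) - sqrt(rho) * z) / sqrt(1 - rho))."""
    return Phi((Phi_inv(pd) - math.sqrt(rho) * z) / math.sqrt(1.0 - rho))

pd, rho = 0.02, 0.12          # hypothetical grade-level PD and asset correlation
print(stressed_pd(pd, rho, 0.0))    # neutral state of the economy
print(stressed_pd(pd, rho, -2.33))  # stressed state (z near the 1st percentile)
```

Applying the same transformation to every cell of a migration matrix, rather than only to the default column, is what captures the downgrade effect the abstract emphasizes.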
ROC Curve Fitting with Normal Mixtures
Hong, Chong-Sun ; Lee, Won-Yong ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 269~278
DOI : 10.5351/KJAS.2011.24.2.269
Many studies have considered the distribution functions and appropriate covariates corresponding to the scores in order to improve the accuracy of a diagnostic test, including the ROC curve, which represents the relation between the sensitivity and the specificity. ROC analysis has typically used a regression model including some covariates, under the assumption that the distribution function is known or estimable. In this work, we consider the general situation in which both the distribution function and the effects of the covariates are unknown. For the ROC analysis, mixtures of normal distributions are used to estimate the distribution function, fitted to credit evaluation data that consist of a score random variable and two sub-populations. The AUC measure is explored for comparison with the nonparametric and empirical ROC curves. We conclude that the method using normal mixtures fits the classical empirical ROC curve better than the other methods.
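The binormal building block of such a fit can be sketched as follows: when each sub-population's scores are normal, the AUC has a closed form, which can be checked against the empirical (Mann-Whitney) AUC. The two score distributions below are hypothetical, not the paper's credit data.

```python
import math
import random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binormal_auc(mu0, sd0, mu1, sd1):
    """Closed-form AUC when scores are N(mu0, sd0^2) for one group
    and N(mu1, sd1^2) for the other: Phi((mu1-mu0)/sqrt(sd0^2+sd1^2))."""
    return Phi((mu1 - mu0) / math.sqrt(sd0 ** 2 + sd1 ** 2))

def empirical_auc(neg, pos):
    """Mann-Whitney estimate: P(pos score > neg score), ties counted half."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
neg = [random.gauss(0.0, 1.0) for _ in range(800)]  # hypothetical "bad" scores
pos = [random.gauss(1.5, 1.0) for _ in range(800)]  # hypothetical "good" scores

theo = binormal_auc(0.0, 1.0, 1.5, 1.0)
emp = empirical_auc(neg, pos)
print(round(theo, 3))  # theoretical AUC
print(round(emp, 3))   # empirical AUC, close to the theoretical value
```

A mixture fit replaces each single normal with a weighted sum of normals; the AUC is then evaluated from the fitted mixture CDFs rather than from this one-component shortcut.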
Estimating the Automobile Insurance Premium Based on Credibilities
Kim, Yeong-Hwa ; Kim, Mi-Jung ; Kim, Myung-Joon ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 279~292
DOI : 10.5351/KJAS.2011.24.2.279
Credibility theory is one of the most important theories in actuarial science for calculating the proper insurance premium. In this paper, the rule of relative exposure volume, the square root rule, the Bühlmann credibility and the Bühlmann-Straub credibility are introduced along with the basic concept of credibility. We also estimate new premiums based on these methods for real data. As a result, the rule of relative exposure volume provides the highest accuracy.
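As a sketch of the Bühlmann step, the credibility weight Z = n/(n+k) and the resulting premiums can be computed from grouped claim data. The three-risk, four-year data set below is hypothetical, not the paper's data.

```python
from statistics import mean, variance

def buhlmann_premiums(claims_by_risk):
    """Bühlmann credibility premiums: Z = n / (n + k),
    k = (expected process variance) / (variance of hypothetical means)."""
    n = len(claims_by_risk[0])                  # years observed per risk
    risk_means = [mean(c) for c in claims_by_risk]
    overall = mean(risk_means)
    epv = mean(variance(c) for c in claims_by_risk)   # expected process variance
    vhm = max(variance(risk_means) - epv / n, 0.0)    # variance of hypothetical means
    k = epv / vhm if vhm > 0 else float("inf")
    z = n / (n + k)
    # credibility-weighted blend of each risk's own mean and the overall mean
    return [z * m + (1 - z) * overall for m in risk_means]

# Hypothetical claim amounts: three risks observed for 4 years each.
claims = [[80, 90, 85, 95], [120, 130, 125, 115], [100, 105, 95, 110]]
print([round(p, 1) for p in buhlmann_premiums(claims)])
```

Each premium lies between the risk's own sample mean and the portfolio mean, pulled toward the latter when the data are noisy; Bühlmann-Straub generalizes this by weighting years with unequal exposures.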
A Study for Forecasting Methods of ARMA-GARCH Model Using MCMC Approach
Chae, Wha-Yeon ; Choi, Bo-Seung ; Kim, Kee-Whan ; Park, You-Sung ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 293~305
DOI : 10.5351/KJAS.2011.24.2.293
Volatility is one of the most important parameters in pricing financial derivatives and measuring risks arising from sudden changes in economic circumstances. We propose a Bayesian approach to estimate the time-varying volatility under a linear model with ARMA(p, q)-GARCH(r, s) errors. This Bayesian estimate of the volatility is compared with the ML estimate. We also present the probability of the existence of a unit root in the GARCH model.
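The volatility recursion the model is built on can be sketched for the GARCH(1,1) case; this is the data-generating side only, not the authors' MCMC scheme, and the parameter values are illustrative assumptions.

```python
import math
import random

def garch11_path(omega, alpha, beta, n, seed=0):
    """Simulate r_t = sigma_t * e_t, e_t ~ N(0,1), with the GARCH(1,1) recursion
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    random.seed(seed)
    var = omega / (1.0 - alpha - beta)   # start at the stationary variance
    returns, sig2 = [], []
    for _ in range(n):
        sig2.append(var)
        r = math.sqrt(var) * random.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r ** 2 + beta * var
    return returns, sig2

r, sig2 = garch11_path(omega=0.05, alpha=0.1, beta=0.85, n=5000)
long_run = 0.05 / (1 - 0.1 - 0.85)   # implied unconditional variance
print(long_run)
print(sum(x * x for x in r) / len(r))  # sample second moment, near long_run
```

The unit-root question in the abstract concerns whether alpha + beta reaches 1, in which case the long-run variance above no longer exists; a Bayesian analysis reports the posterior probability of that event.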
Performance Analysis of Internet Traffic Forecasting Model
Kim, S. ; Ha, M.H. ; Jung, J.Y. ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 307~313
DOI : 10.5351/KJAS.2011.24.2.307
In this paper, we compare the performance of three models, the Holt-Winters, FARIMA and ARGARCH models, used in predicting internet traffic data for the analysis of traffic characteristics. We first introduce the time series models and apply them to real traffic data for forecasting. Finally, we examine which model is the most suitable for explaining long memory, a characteristic of traffic data, and compare the prediction performance of the models.
Verification of Improving a Clustering Algorithm for Microarray Data with Missing Values
Kim, Su-Young ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 315~321
DOI : 10.5351/KJAS.2011.24.2.315
Gene expression microarray data often include multiple missing values. Most gene expression analyses (including gene clustering analysis), however, require a complete data matrix as input. In ordinary clustering methods, a single missing value forces one to abandon the whole data of a gene even if the rest of the data for that gene is intact, and the quality of the analysis may decrease seriously as the missing rate increases. Conversely, the imputation of missing values may introduce an artifact that reduces the reliability of the analysis. To clarify this contradiction in microarray clustering analysis, this paper compared the accuracy of clustering with and without imputation over several microarray data sets having different missing rates. This paper also tested the clustering efficiency of several imputation methods, including our proposed algorithm. The results showed that it is worthwhile to check the clustering result in this alternative way, without any imputed data, for imperfect microarray data.
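One common way to cluster without imputing, not necessarily the paper's exact algorithm, is to compute distances over only the coordinates observed in both profiles (pairwise deletion), rescaled to the full dimension. The tiny expression profiles below are hypothetical.

```python
import math

def dist_missing(a, b):
    """Euclidean distance over coordinates observed in both vectors
    (None marks a missing value), rescaled to the full dimension."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not pairs:
        return float("inf")   # no overlap: distance is undefined
    d2 = sum((x - y) ** 2 for x, y in pairs)
    return math.sqrt(d2 * len(a) / len(pairs))

g1 = [1.0, None, 1.2, 0.9]   # gene with one missing expression value
g2 = [1.1, 0.8, None, 1.0]   # similar profile, different missing position
g3 = [5.0, 5.2, 4.9, None]   # clearly dissimilar profile

print(dist_missing(g1, g2))  # small: similar profiles despite the gaps
print(dist_missing(g1, g3))  # large: dissimilar profiles
```

Feeding such a distance matrix to a hierarchical or k-medoids clusterer lets every gene participate without fabricating values, which is the alternative the abstract advocates checking.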
Network Identification of Major Risk Factor Associated with Delirium by Bayesian Network
Lee, Jea-Young ; Choi, Young-Jin ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 323~333
DOI : 10.5351/KJAS.2011.24.2.323
We analyzed the data using logistic regression to find factors associated with a mental disorder, because logistic regression is one of the most efficient ways to assess risk factors. In this paper, we applied several data mining techniques, logistic regression, neural networks, C5.0, CART and Bayesian networks, to delirium data. The Bayesian network method was chosen as the best model. When the delirium data were applied to the Bayesian network, we determined the risk factors associated with delirium and identified the network between the risk factors.
Image Segmentation Using Level Set Method with New Speed Function
Kim, Sun-Worl ; Cho, Wan-Hyun ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 335~345
DOI : 10.5351/KJAS.2011.24.2.335
In this paper, we propose a new hybrid speed function for image segmentation using the level set method. The proposed speed function uses the region and boundary information of an image object for an exact segmentation result. The region information is defined by the probability of the pixel intensity within a ROI (region of interest), and the boundary information is defined by the gradient vector flow obtained from the gradient of the image. We show experimental results for various artificial images and real medical images to verify the accuracy of the segmentation using the proposed method.
An Approximation to the Overshoot in M/Ek/1 Queues
Bae, Jong-Ho ; Jeong, Ah-Reum ; Kim, Sung-Gon ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 347~357
DOI : 10.5351/KJAS.2011.24.2.347
In this paper, we propose an approximation to the overshoot in M/Ek/1 queues. Overshoot means the size of the excess over the threshold when the workload process of an M/Ek/1 queue exceeds a prespecified threshold. The distribution and moments of the overshoot play an important role in solving certain optimization problems. For the approximation to the overshoot, we propose a formula that is a convex sum of the service time distribution and an exponential distribution. We also perform a numerical study to check how exactly the proposed formula approximates the overshoot.
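The shape of such a convex-sum approximation can be sketched as follows. The mixing weight p, the exponential mean, and the Erlang-2 service distribution are placeholder assumptions; the paper derives its own weight, which is not reproduced here.

```python
import math

def erlang2_cdf(x, lam=2.0):
    """CDF of an Erlang-2 service time: 1 - exp(-lam*x) * (1 + lam*x)."""
    return 1.0 - math.exp(-lam * x) * (1.0 + lam * x)

def overshoot_cdf(x, p, service_cdf, exp_mean):
    """Convex-sum approximation to the overshoot distribution:
    F(x) = p * F_S(x) + (1 - p) * (1 - exp(-x / exp_mean)).
    The weight p and exp_mean here are illustrative, not the paper's values."""
    return p * service_cdf(x) + (1.0 - p) * (1.0 - math.exp(-x / exp_mean))

for x in (0.5, 1.0, 2.0, 5.0):
    print(overshoot_cdf(x, p=0.4, service_cdf=erlang2_cdf, exp_mean=0.75))
```

Any choice of p in [0, 1] yields a proper CDF, since a convex combination of CDFs is again a CDF; the numerical study in the paper is about picking the combination that tracks the true overshoot law.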
Wavelet-Based Edge Detection Using Local Histogram Analysis in Images
Park, Min-Joon ; Kwon, Min-Jun ; Kim, Gi-Hun ; Shim, Han-Seul ; Kim, Dong-Wook ; Lim, Dong-Hoon ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 359~371
DOI : 10.5351/KJAS.2011.24.2.359
Edge detection is an important preprocessing step in image segmentation and object recognition. This paper presents a new edge detection method using local histogram analysis based on the wavelet transform. In this work, the wavelet transform uses three components (horizontal, vertical and diagonal) to find the magnitude of the gradient vector, instead of the conventional approach in which two components are used. We compare the magnitude of the gradient vector with a threshold obtained from local histogram analysis to decide whether an edge is present or not. Experimental results comparing our edge detector with the Sobel, Canny, Scale Multiplication, and Mallat edge detectors on sample images are given, and the performances of these detectors are compared in terms of quantitative and qualitative measures. Our detector performs better than the other wavelet-based detectors such as the Scale Multiplication and Mallat detectors. Our edge detector also maintains good performance on highly corrupted images, for which the performance of the Sobel and Canny detectors drops sharply.
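The three-component idea can be sketched with a single Haar step; the toy image and the sub-band sign conventions below are assumptions for illustration, not the paper's wavelet or data. On a diagonal edge the diagonal band carries energy that the conventional two-component magnitude ignores.

```python
import math

def haar2d_step(img):
    """One level of the 2-D Haar transform over non-overlapping 2x2 blocks.
    Returns the three half-resolution detail bands (horizontal, vertical, diagonal)."""
    H, V, D = [], [], []
    for i in range(0, len(img), 2):
        h_row, v_row, d_row = [], [], []
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            h_row.append((a - b + c - d) / 4.0)   # horizontal detail
            v_row.append((a + b - c - d) / 4.0)   # vertical detail
            d_row.append((a - b - c + d) / 4.0)   # diagonal detail
        H.append(h_row); V.append(v_row); D.append(d_row)
    return H, V, D

def mag3(h, v, d):
    return math.sqrt(h * h + v * v + d * d)   # three-component gradient magnitude

def mag2(h, v):
    return math.sqrt(h * h + v * v)           # conventional two-component magnitude

# A diagonal step edge: intensity 10 above the main diagonal, 0 elsewhere.
img = [[10.0 if j > i else 0.0 for j in range(4)] for i in range(4)]
H, V, D = haar2d_step(img)
print(mag3(H[0][0], V[0][0], D[0][0]) > mag2(H[0][0], V[0][0]))  # True on a diagonal edge
```

Thresholding this magnitude, with the threshold chosen per neighborhood from a local histogram rather than globally, is the decision rule the abstract describes.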
Type I Analysis by Projections
Choi, Jae-Sung ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 373~381
DOI : 10.5351/KJAS.2011.24.2.373
This paper discusses how to obtain the sums of squares due to treatment factors when Type I analysis by projections is used for data under a two-way ANOVA model. The suggested method does not need the residual sums of squares for the calculation, so the computation is easier and faster than with classical ANOVA methods. It also discusses how the eigenvectors and eigenvalues of the projection matrices can be used in the calculation of the sums of squares. An example is given to illustrate the calculation procedure by projections for unbalanced data.
A Modified Entropy-Based Goodness-of-Fit Test for Inverse Gaussian Distribution
Choi, Byung-Jin ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 383~391
DOI : 10.5351/KJAS.2011.24.2.383
This paper presents a modified entropy-based test of fit for the inverse Gaussian distribution. The test is based on the entropy difference between the unknown data-generating distribution and the inverse Gaussian distribution. The entropy difference estimator used as the test statistic is obtained by employing Vasicek's sample entropy as an entropy estimator for the data-generating distribution and the uniformly minimum variance unbiased estimator as an entropy estimator for the inverse Gaussian distribution. The empirically determined critical values of the test statistic are provided in tabular form. Monte Carlo simulations are performed to compare the power of the proposed test with that of the previous entropy-based test.
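Vasicek's spacings estimator, the first ingredient named above, can be sketched as follows; the sample, the window size m, and the end-point clipping are standard choices, not the paper's specific settings.

```python
import math
import random

def vasicek_entropy(x, m):
    """Vasicek's sample entropy based on m-spacings of the order statistics:
    H = (1/n) * sum_i log( n * (x_(i+m) - x_(i-m)) / (2m) ),
    with indices clipped at the ends of the sample."""
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        hi = s[min(i + m, n - 1)]
        lo = s[max(i - m, 0)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
true_h = 0.5 * math.log(2 * math.pi * math.e)   # entropy of N(0,1), about 1.419
print(vasicek_entropy(x, m=15))                  # close to true_h (slight negative bias)
```

The test statistic in the paper contrasts this nonparametric estimate with a parametric entropy estimate under the inverse Gaussian family; a large gap signals lack of fit.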
A Study of Sample Size for Two-Stage Cluster Sampling
Song, Jong-Ho ; Jea, Hea-Sung ; Park, Min-Gue ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 393~400
DOI : 10.5351/KJAS.2011.24.2.393
In a large-scale survey, a cluster sampling design, in which sets of observation units called clusters are selected, is often used to satisfy practical restrictions on time and cost. In particular, a two-stage cluster sampling design is preferred when a strong intra-class correlation exists among observation units. The sample Primary Sampling Unit (PSU) and Secondary Sampling Unit (SSU) sizes for a two-stage cluster sample are determined by the survey cost and the precision of the estimator. In this study, we derive the optimal sample PSU and SSU sizes when the population SSU sizes differ across PSUs, extending the result obtained under the assumption that all PSUs have the same number of SSUs. The results on the sample size are then applied to the Korea Hospital Discharge survey results and compared to the conventional method. We also propose the optimal sample SSU (discharged patients) size for the Korea Hospital Discharge Survey.
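The classical equal-size baseline that the paper extends can be sketched as follows; the unit costs and the intra-class correlation are hypothetical, chosen only to illustrate the cost-precision trade-off.

```python
import math

def optimal_ssu_size(c_psu, c_ssu, rho):
    """Classical optimal number of SSUs per sampled PSU (equal-sized PSUs):
    m* = sqrt( (c_psu / c_ssu) * (1 - rho) / rho ),
    where c_psu is the cost of adding a PSU, c_ssu the cost per SSU,
    and rho the intra-class correlation."""
    return math.sqrt((c_psu / c_ssu) * (1.0 - rho) / rho)

# Hypothetical costs: visiting a hospital (PSU) costs 50 units,
# abstracting one discharge record (SSU) costs 2 units; rho = 0.05.
m = optimal_ssu_size(50.0, 2.0, 0.05)
print(m)  # around 22 records per sampled hospital
```

Note how a larger rho pushes m* down (more, smaller clusters), while expensive PSU visits push it up; the paper's contribution is the analogous optimum when PSUs carry unequal numbers of SSUs.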
Monte-Carlo Methods for Social Network Analysis
Huh, Myung-Hoe ; Lee, Yong-Goo ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 401~409
DOI : 10.5351/KJAS.2011.24.2.401
From a social network of n nodes connected by l lines, one may compute centrality measures such as closeness, betweenness and so on. In the past, the magnitude of n was around 1,000, or 10,000 at most. Nowadays, some networks have 10,000, 100,000 or even more nodes than that, so the scalability issue needs the attention of researchers. In this short paper, we explore random networks of size around n = 100,000 by the Monte-Carlo method and propose Monte-Carlo algorithms for computing closeness and betweenness centrality measures to study the small-world properties of social networks.
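The Monte-Carlo idea for closeness can be sketched as follows: instead of averaging shortest-path distances to all n-1 other nodes, average over a random sample of k targets. The tiny path graph and the sampling scheme are illustrative assumptions, not the paper's algorithm in full.

```python
import random
from collections import deque

def bfs_dists(adj, s):
    """Breadth-first search distances from s in an unweighted graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def approx_closeness(adj, node, k, seed=0):
    """Closeness of `node` estimated from k sampled targets (connected graph assumed)."""
    random.seed(seed)
    others = [v for v in adj if v != node]
    sample = random.sample(others, min(k, len(others)))
    d = bfs_dists(adj, node)
    avg = sum(d[t] for t in sample) / len(sample)
    return 1.0 / avg

# Tiny path graph 0-1-2-3-4: node 2 sits in the middle and is most central.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(approx_closeness(adj, 2, k=4) > approx_closeness(adj, 0, k=4))  # True
```

For n around 100,000, a full computation needs n BFS runs, while sampling k targets (or k source nodes for betweenness) cuts the cost by a factor of roughly n/k at the price of Monte-Carlo error.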
Testing the Relationship between Person-Organizational Value Fit and Performance
Park, Yang-Kyu ; Yeo, Sung-Chil ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 411~424
DOI : 10.5351/KJAS.2011.24.2.411
The studies of congruence in organizational research have explored concepts such as person-job fit, person-organization fit, or person-environment fit. The relevant studies dealt with the fit level as an important factor influencing performance. In particular, researchers have agreed that employees can be motivated by a high level of person-organization fit. However, little research developing an alternative methodological approach has been done. For the purpose mentioned above, difference statistics such as D and |D|, and Q values such as Q (the correlation between two sets of interval measures) or its rank-based counterpart (the correlation between two rankings), have been conventionally adopted in spite of numerous methodological problems. In general, these traditional indices, such as difference scores or Q values, are nondirectional and add extra weight to differences of larger magnitude. Therefore, Edwards (1993) introduced polynomial regression and response surface analysis to overcome the flaws of the conventional approaches. However, those methodological approaches did not reflect the profile characteristics of person-organizational value fit and would not be a proper solution for finding the level of person-organization value fit that maximizes performance. Hence, this paper investigates alternative methodological approaches, the multivariate polynomial regression and the multiple response surface analysis, to avoid the problems of the conventional methods.
Reject Inference of Incomplete Data Using a Normal Mixture Model
Song, Ju-Won ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 425~433
DOI : 10.5351/KJAS.2011.24.2.425
Reject inference in credit scoring is a statistical approach to adjust for the nonrandom sample bias due to rejected applicants. Function estimation approaches are based on the assumption that rejected applicants need not be included in the estimation when the missing data mechanism is missing at random. On the other hand, the density estimation approach using mixture models indicates that reject inference should include rejected applicants in the model. When mixture models are chosen for reject inference, it is often assumed that the data follow a normal distribution. If the data include missing values, applying the normal mixture model only to fully observed cases may cause another sample bias due to the missing values. We extend reject inference by a multivariate normal mixture model to handle incomplete characteristic variables. A simulation study shows that the inclusion of incomplete characteristic variables outperforms the function estimation approaches.
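The normal-mixture machinery underlying such an approach can be sketched with a one-dimensional, complete-case EM fit; the paper's contribution, extending the E-step to incomplete multivariate cases, is not reproduced here, and the two-component data below are simulated.

```python
import math
import random

def em_normal_mixture(x, iters=200):
    """EM for a two-component 1-D normal mixture (complete-case sketch)."""
    x = sorted(x)
    n = len(x)
    mu = [x[n // 4], x[3 * n // 4]]   # crude quartile initialization
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibilities of each component for each point
        resp = []
        for xi in x:
            dens = [w[j] / (sd[j] * math.sqrt(2 * math.pi)) *
                    math.exp(-0.5 * ((xi - mu[j]) / sd[j]) ** 2) for j in range(2)]
            s = dens[0] + dens[1]
            resp.append([dens[0] / s, dens[1] / s])
        # M-step: re-estimate weights, means and standard deviations
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            sd[j] = max(math.sqrt(sum(r[j] * (xi - mu[j]) ** 2
                                      for r, xi in zip(resp, x)) / nj), 1e-6)
    return w, mu, sd

random.seed(7)
scores = [random.gauss(-2, 1) for _ in range(300)] + [random.gauss(3, 1) for _ in range(300)]
w, mu, sd = em_normal_mixture(scores)
print(sorted(round(m) for m in mu))  # component means near -2 and 3
```

In the reject-inference setting, the two components play the roles of "good" and "bad" sub-populations; handling missing characteristics amounts to taking conditional expectations over the unobserved coordinates inside the same E-step.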
Faculty Performance Evaluation, Annual Salary and Student Course Evaluation
Han, Kyung-Soo ;
Korean Journal of Applied Statistics, volume 24, issue 2, 2011, Pages 435~443
DOI : 10.5351/KJAS.2011.24.2.435
In March 2011, an annual salary plan was applied to new faculty members in national colleges and universities. In 2015, all tenured faculty members will receive salaries based on annual performance evaluations. The efforts and accomplishments of faculty are normally assessed according to a standard formula of 40% teaching, 40% research and 20% service. In almost all colleges and universities, student course evaluations may be the only measure of the perceived quality of the courses offered by a faculty member, and mandatory course evaluations are becoming prevalent in Korea. However, the results of course evaluations do not reflect the fairness and appropriateness of the quality of the course taught by the faculty member and should not be the sole consideration under the teaching evaluation criteria.