REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Korean Journal of Applied Statistics
Journal Basic Information
Publisher : The Korean Statistical Society
Volume & Issues
Volume 20, Issue 3 - Nov 2007
Volume 20, Issue 2 - Jul 2007
Volume 20, Issue 1 - Mar 2007
Prediction for 2006 Germany World Cup using Bradley-Terry Model
Kim, Do-Hyun ; Lee, Sang-In ; Kim, Yong-Dai ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 205~218
DOI : 10.5351/KJAS.2007.20.2.205
Whether the Korean team would reach the round of 16 was of the greatest interest. Past match results are the most important data for prediction; home advantage is also a considerable factor, and many unobservable factors remain. However, there are few matches between the participating teams, and for some nations no results are available at all. To overcome this difficulty, we model the network of match results and incorporate other factors. We predict the results of the 2006 Germany World Cup using a modified Bradley-Terry model.
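The authors' modification is not described in the abstract, but the plain Bradley-Terry model it builds on can be sketched as follows. The teams, win counts, and the MM-style fitting loop are all illustrative, not the paper's data or algorithm.

```python
import numpy as np

# Minimal Bradley-Terry fit via an MM-type update (Hunter, 2004 style).
# Teams and win counts are invented for illustration.
teams = ["A", "B", "C"]
# wins[i, j] = number of times team i beat team j
wins = np.array([[0, 3, 1],
                 [1, 0, 2],
                 [2, 1, 0]], dtype=float)

n = wins + wins.T            # games played between each pair
p = np.ones(len(teams))      # ability parameters, initialised equal

for _ in range(200):
    total_wins = wins.sum(axis=1)
    denom = np.zeros_like(p)
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and n[i, j] > 0:
                denom[i] += n[i, j] / (p[i] + p[j])
    p = total_wins / denom
    p /= p.sum()             # normalise for identifiability

# Under the model, P(i beats j) = p_i / (p_i + p_j)
prob_A_beats_B = p[0] / (p[0] + p[1])
```

The prediction step then amounts to evaluating these pairwise probabilities for the tournament draw.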
Time Series Models for Performance Evaluation of Network Traffic Forecasting
Kim, S. ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 219~227
DOI : 10.5351/KJAS.2007.20.2.219
Time series models have been used to analyze and predict network traffic. In this paper, we compare the performance of time series models for network traffic prediction. A feasibility study showed that a class of nonlinear time series models can outperform linear time series models in predicting network traffic.
Evaluations of Small Area Estimations with/without Spatial Terms
Shin, Key-Il ; Choi, Bong-Ho ; Lee, Sang-Eun ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 229~244
DOI : 10.5351/KJAS.2007.20.2.229
Among small area estimation methods, the hierarchical Bayesian (HB) approach is known to be the most reasonable and effective. However, any model-based approach needs good explanatory variables, and finding them plays the key role. When explanatory variables are lacking, adding spatial terms to the model has been proposed. In this paper, we evaluate model-based methods with and without spatial terms using the diagnostic methods introduced by Brown et al. (2001). The Economically Active Population Survey (2005) is used for the data analysis.
A Study on the Optimal Size of Dum in Professional Baduk
Kim, Jin-Ho ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 245~255
DOI : 10.5351/KJAS.2007.20.2.245
In Baduk, Black plays first and can thus control the pace of the game. A player with the black stones usually plays conservatively to maintain the advantage of moving first, and the purpose of dum is to compensate for that advantage. Currently, a 6.5-point dum is applied in Korea and Japan, while an 8-point dum is applied in Taiwan and China. In this study we investigated whether the current size of dum (6.5 points) is optimal by statistically analyzing and comparing the advantage of taking Black across two data sets with different dum rules. Under the 5.5-point dum, Black won significantly more games than White, revealing the advantage of playing first; with the 6.5-point dum, however, Black's advantage was no longer significant. In conclusion, implications and future research areas are discussed.
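As a rough illustration of the kind of comparison involved (not the authors' data or exact test), one can check whether Black's observed win rate under a given dum rule differs significantly from one half; the game counts below are invented.

```python
import math

# Hypothetical record under the 5.5-point dum (illustrative numbers only).
black_wins, games = 545, 1000
p_hat = black_wins / games
p0 = 0.5                               # null: no first-move advantage

# Two-sided normal-approximation test of H0: p = 0.5
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / games)
p_value = math.erfc(abs(z) / math.sqrt(2))   # equals 2 * (1 - Phi(|z|))

significant = p_value < 0.05
```

With these made-up counts the win rate of 54.5% is significant at the 5% level; repeating the test on games played under the 6.5-point rule would mirror the paper's comparison.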
Confidence Intervals for a Linear Function of Binomial Proportions Based on a Bayesian Approach
Lee, Seung-Chun ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 257~266
DOI : 10.5351/KJAS.2007.20.2.257
It is known that the Agresti-Coull approach is an effective tool for constructing confidence intervals for various problems related to binomial proportions. However, the Agresti-Coull approach often produces a conservative confidence interval. In this note, confidence intervals based on a Bayesian approach are proposed for a linear function of independent binomial proportions. It is shown that the Bayesian confidence interval slightly outperforms the Agresti-Coull interval on average.
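For reference, the single-proportion Agresti-Coull interval that the paper generalizes can be sketched as follows; the paper itself treats linear functions of several proportions, and the counts here are illustrative.

```python
import math

def agresti_coull_interval(successes, n, z=1.96):
    """Agresti-Coull 95% interval for one binomial proportion.

    Add z^2/2 pseudo-successes and z^2 pseudo-trials, then apply the
    Wald formula to the adjusted proportion.
    """
    n_tilde = n + z**2
    p_tilde = (successes + z**2 / 2) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

# Example: 7 successes out of 20 trials
lo, hi = agresti_coull_interval(7, 20)
```

The Bayesian alternative studied in the note would replace this adjustment with posterior quantiles under a prior on each proportion.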
Robust Designs of the Second Order Response Surface Model in a Mixture
Lim, Yong-Bin ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 267~280
DOI : 10.5351/KJAS.2007.20.2.267
Single-valued design optimality criteria such as D-, G-, and V-optimality are often used to construct optimal experimental designs for mixture experiments in a constrained region R, where lower and upper bounds are imposed on the ingredient proportions. Even though such designs are optimal in the strict sense of the particular criterion used, their prediction capability over the constrained region is known to be unsatisfactory (Vining et al., 1993; Khuri et al., 1999). We take the quadratic polynomial as the mixture response surface model and seek efficient designs over the constrained design space. In this paper, we expand the list of candidate design points by adding interior points to the extreme vertices, edge midpoints, constrained face centroids, and the overall centroid, and propose a design that is robust with respect to D-optimality, G-optimality, V-optimality, and distance-based U-optimality. Comparing scaled prediction variance quantile plots (SPVQP) of the robust designs with those of the designs recommended by Khuri et al. (1999) and Vining et al. (1993) for the well-known four-component fertilizer experiment and McLean and Anderson's railroad flare experiment, the robust designs turned out to be superior.
Variance Estimation for General Weight-Adjusted Estimator
Kim, Jae-Kwang ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 281~290
DOI : 10.5351/KJAS.2007.20.2.281
A linear estimator, a weighted sum of the sample observations, is commonly adopted to estimate finite population parameters such as population totals in survey sampling. The weight for a sampled unit is often constructed by multiplying the base weight, the inverse of the first-order inclusion probability, by an adjustment term that takes into account auxiliary information obtained throughout the population. The linear estimator using the adjusted weight is often more efficient than the one using only the base weight, but its variance estimation is more complicated. We discuss variance estimation for a general class of weight-adjusted estimators. By viewing the weight-adjusted estimator as a function of estimated nuisance parameters, where the nuisance parameters incorporate the auxiliary information, we derive a linearization of the weight-adjusted estimator using a Taylor expansion. The proposed method is quite general and can be applied to a wide class of weight-adjusted estimators. Some examples and results from a simulation study are presented.
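A minimal sketch of Taylor linearization in the simplest weight-adjusted case, the ratio estimator under simple random sampling, with invented data; the class of estimators treated in the paper is far more general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SRS sample: y is the study variable, x an auxiliary variable
# with known population total X_total (all values invented for illustration).
n, N = 100, 10000
x = rng.uniform(1, 5, n)
y = 2.0 * x + rng.normal(0, 0.5, n)
X_total = 30000.0

# Ratio estimator of the population total of y
r_hat = y.sum() / x.sum()
t_ratio = r_hat * X_total

# Taylor linearization: replace y_i by the residual e_i = y_i - r_hat * x_i,
# then apply the usual SRS variance formula to the e_i.
e = y - r_hat * x
var_lin = (N**2) * (1 - n / N) * e.var(ddof=1) / n
```

Here the estimated nuisance parameter is the ratio r_hat; the linearized residuals carry its sampling variability into a standard variance formula.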
Estimation of Economic Risk Capital of Insurance Company using the Extreme Value Theory
Yeo, Sung-Chil ; Chang, Dong-Han ; Lee, Byung-Mo ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 291~311
DOI : 10.5351/KJAS.2007.20.2.291
With a series of unexpected huge losses in financial markets around the world recently, especially extreme losses such as catastrophes in the insurance market, there is an increasing demand for risk management of extreme loss exposures, whose high unpredictability makes them difficult to manage. For extreme risk management, EVT (extreme value theory) modelling, which makes maximum use of the information in the tail of a loss distribution, may be the best way to analyze extreme values. EVT is widely used in practice and, especially in financial markets, is becoming popular for analyzing the effects of extreme risks. This study reviews the significance of extreme value theory in risk management and, focusing on an insurer's risk capital, measures extreme risk using real fire loss data and derives the insurer's specific amount of risk capital needed to buffer the extreme risk.
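A peaks-over-threshold sketch of the EVT machinery involved, using simulated losses in place of the paper's fire-loss data; the threshold, quantile level, and data are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Simulated heavy-tailed losses standing in for real fire-loss data.
losses = rng.pareto(2.5, 5000) * 10.0

# Peaks-over-threshold: fit a generalized Pareto distribution (GPD)
# to the exceedances over a high threshold u.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, loc, beta = genpareto.fit(exceedances, floc=0)

# Tail quantile (VaR) at level q from the fitted GPD; the gap between
# this and the expected loss is one way to size economic risk capital.
q = 0.999
n, n_u = len(losses), len(exceedances)
var_q = u + (beta / xi) * ((n / n_u * (1 - q)) ** (-xi) - 1)
```

The fitted shape parameter xi controls the tail heaviness; a positive estimate, as here, signals a Pareto-type tail where empirical quantiles alone would understate extreme losses.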
A Comparison Study of Survival Regression Models Based on Data Depths
Kim, Jee-Yun ; Hwang, Jin-Soo ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 313~322
DOI : 10.5351/KJAS.2007.20.2.313
Several robust censored depth regression methods are compared under contamination. Park and Hwang (2003) suggested a way to circumvent the censoring issue by incorporating a Kaplan-Meier type weight into halfspace regression depth, and Park (2003) applied a similar technique to simplicial regression depth. Hubert et al. (2001) suggested a high-breakdown-point regression depth based on projection, called rcent. A new method to implement censoring in rcent is suggested and compared with the two precedents under various contamination and censoring schemes.
A Portmanteau Test Based on the Discrete Cosine Transform
Oh, Sung-Un ; Cho, Hye-Min ; Yeo, In-Kwon ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 323~332
DOI : 10.5351/KJAS.2007.20.2.323
We present a new type of portmanteau test in the frequency domain, derived from the discrete cosine transform (DCT). For a stationary time series, the DCT coefficients are asymptotically independent and their variances are expressed as linear combinations of autocovariances. For white noise, the covariance matrix of the DCT coefficients is a diagonal matrix whose diagonal elements equal the variance of the series. A simple way to test the independence of a time series is to divide the DCT coefficients into two or three parts and compare the sample variances. We also test the slope in a linear regression model whose response variables are the absolute values or squares of the coefficients. Simulation results show that the proposed tests have much higher power than the Ljung-Box test in most of our experiments.
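A minimal sketch of the variance-splitting idea under the white-noise null, on simulated data; the paper's actual test statistics and critical values are not reproduced here.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
x = rng.normal(size=512)                 # white noise under the null

# Orthonormal DCT-II: for white noise the coefficients are asymptotically
# independent with common variance equal to the series variance.
coef = dct(x, norm="ortho")[1:]          # drop the mean (index-0) term

# Split the coefficients into two halves and compare sample variances.
half = len(coef) // 2
v1 = coef[:half].var(ddof=1)
v2 = coef[half:].var(ddof=1)
f_stat = max(v1, v2) / min(v1, v2)       # near 1 under independence
```

Serial dependence concentrates variance in low- or high-frequency coefficients, pushing the ratio away from 1; an F-type reference distribution then yields a p-value.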
The Approximation for the Auxiliary Renewal Function
Bae, Jong-Ho ; Kim, Sung-Gon ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 333~343
DOI : 10.5351/KJAS.2007.20.2.333
The auxiliary renewal function plays an important role in analyzing queues in which either the inter-arrival time or the service time of customers is not exponential. Like the renewal function, the auxiliary renewal function is hard to compute even though it can be defined theoretically. In this paper, we suggest two approximations for the auxiliary renewal function and compare them with its true value, which can be computed in some special cases.
Visualizing (X,Y) Data by Partial Least Squares Method
Huh, Myung-Hoe ; Lee, Yong-Goo ; Yi, Seong-Keun ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 345~355
DOI : 10.5351/KJAS.2007.20.2.345
PLS methods are suited to regressing q-variate Y variables on p-variate X variables even in the presence of multicollinearity among the X variables. Consequently, they are useful for analyzing datasets with a smaller number of observations than variables, such as NIR (near-infrared) spectroscopy data in chemometrics. In this study, we propose two methods for visualizing p-variate X variables and q-variate Y variables that can be used in connection with PLS analysis.
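The X scores that such a visualization would plot can be obtained from a one-component PLS fit; the NIPALS-style sketch below uses invented data with univariate y and is not the authors' proposed display.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: 20 observations, 50 correlated X variables, one y.
n, p = 20, 50
latent = rng.normal(size=(n, 1))
X = latent @ rng.normal(size=(1, p)) + 0.1 * rng.normal(size=(n, p))
y = latent[:, 0] + 0.1 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# One PLS component (NIPALS with univariate y): the weight vector
# maximises covariance between the X score t = Xc @ w and y.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                       # X scores: one plotting coordinate per case

corr = np.corrcoef(t, yc)[0, 1]  # how well the first score tracks y
```

Plotting observations by their first few such scores gives a low-dimensional view of X that is oriented toward explaining Y, even when p exceeds n.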
Performance Comparison of Cumulative Incidence Estimators in the Presence of Competing Risks
Kim, Dong-Uk ; Ahn, Chi-Kyung ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 357~371
DOI : 10.5351/KJAS.2007.20.2.357
For time-to-failure data with competing risks, cumulative incidence functions (CIFs) are commonly estimated using nonparametric methods. If events due to the cause of primary interest are infrequent relative to other causes of failure, nonparametric methods may yield rather imprecise CIF estimates. For such cases, Bryant et al. (2004) suggested modelling the cause-specific hazard of primary interest parametrically while accounting for the other modes of failure with a nonparametric estimator. We present this semiparametric cumulative incidence estimator and extend it to Weibull and log-normal models. We also conduct simulations to assess the performance of the semiparametric cumulative incidence estimators and to investigate the impact of model misspecification in the log-normal cause-specific hazard model.
G-Inverse and SAS IML for Parameter Estimation in General Linear Model
Choi, Kuey-Chung ; Kang, Kwan-Joong ; Park, Byung-Jun ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 373~385
DOI : 10.5351/KJAS.2007.20.2.373
The solution of the normal equations arising in a general linear model by the least squares method is in general not unique. Conventionally, SAS IML and G-inverse matrices are used for such problems. In this paper, we provide a systematic solution procedure for SAS IML.
A Study on the Use of Cluster Analysis for Multivariate and Multipurpose Stratification
Park, Jin-Woo ; Yun, Seok-Hoon ; Kim, Jin-Heum ; Jeong, Hyeong-Chul ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 387~394
DOI : 10.5351/KJAS.2007.20.2.387
This paper considers several stratification strategies for a multivariate and multipurpose survey with several quantitative stratification variables. We propose three methods of stratification based on, respectively, the cumulative square root frequency method, which is the most popular one in univariate stratification; cluster analysis; and factor analysis followed by cluster analysis. We then compare the efficiency of these methods using the Dong-Eup-Myun data on holdings of farming machines, extracted from the 2001 Agricultural Census. The method based on cluster analysis with factor analysis turned out to be a relatively satisfactory strategy.
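The univariate cumulative square root frequency (Dalenius-Hodges) rule that the first method builds on can be sketched as follows, on invented data rather than the Agricultural Census.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative skewed stratification variable.
x = rng.lognormal(mean=2.0, sigma=0.8, size=2000)

# Cumulative square root frequency rule for L strata:
# bin the variable, accumulate sqrt(frequency), and cut at equal steps.
L = 4
freq, edges = np.histogram(x, bins=50)
cum = np.cumsum(np.sqrt(freq))
targets = cum[-1] * np.arange(1, L) / L

# Stratum boundaries: bin edges where cum sqrt(f) crosses each target.
idx = np.searchsorted(cum, targets)
boundaries = edges[idx + 1]
strata = np.digitize(x, boundaries)     # stratum label 0..L-1 per unit
```

The multivariate methods in the paper replace this single-variable cut with cluster assignments, optionally after reducing the variables by factor analysis.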
A Graphical Method for Evaluating the Effect of Outliers in One- and Two-Variate Data
Jang, Dae-Heung ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 395~407
DOI : 10.5351/KJAS.2007.20.2.395
Outliers distort many measures used in data analysis. We propose the dandelion seed plot as a graphical tool for evaluating the effect of outliers in one- and two-variate data. Mean-variance dandelion seed plots are drawn with linked curves obtained by changing the weight of each datum from 1 to 0; covariance-correlation-coefficient dandelion seed plots can be drawn similarly. This graphical method can be a useful tool for elementary statistics education in college.
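The curve underlying a mean-variance dandelion seed plot can be traced numerically: shrink one observation's weight from 1 to 0 and record the weighted mean and variance at each step. The toy data below are illustrative, not a reconstruction of the authors' plots.

```python
import numpy as np

x = np.array([2.0, 3.0, 4.0, 5.0, 30.0])   # last value is an outlier
i = 4                                       # index of the down-weighted datum

# One "seed" curve: (weighted mean, weighted variance) as the outlier's
# weight moves from fully included (1) to removed (0).
weights_path = np.linspace(1.0, 0.0, 11)
curve = []
for w in weights_path:
    wts = np.ones_like(x)
    wts[i] = w
    m = np.average(x, weights=wts)
    v = np.average((x - m) ** 2, weights=wts)
    curve.append((m, v))

mean_full, var_full = curve[0]      # outlier fully included
mean_zero, var_zero = curve[-1]     # outlier effectively removed
```

Drawing one such curve per datum, the outlier's curve travels much farther in the (mean, variance) plane than the others, which is what makes it stand out in the plot.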
Comparison of Functions for Filtering Time Course Gene Expression Data with Flat Patterns
Kim, Kyung-Sook ; Oh, Mi-Ra ; Baek, Jang-Sun ; Son, Young-Sook ;
Korean Journal of Applied Statistics, volume 20, issue 2, 2007, Pages 409~422
DOI : 10.5351/KJAS.2007.20.2.409
Filtering out genes that do not appear to contribute to regulation, prior to the statistical analysis of time course gene expression data, can reduce the dimension of the data and the possibility of misinterpretation due to noise or lack of variation. In this paper, we compare six different functions for filtering genes with flat patterns under a percentile criterion, applied both to the observed sample and to a bootstrap sample. Application to the yeast cell cycle data shows that the variance function gives the most similar results across the two samples.
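A variance-based flat-pattern filter of the kind compared in the paper can be sketched as follows, on a simulated expression matrix; the percentile cutoff and the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative time-course matrix: 200 genes x 10 time points,
# 150 near-flat profiles plus 50 genes with a real temporal pattern.
flat = rng.normal(0.0, 0.05, size=(150, 10))
varying = np.sin(np.linspace(0, 3, 10)) + rng.normal(0.0, 0.05, size=(50, 10))
expr = np.vstack([flat, varying])

# Variance filter: drop genes whose across-time variance falls below
# a chosen percentile of all gene variances.
gene_var = expr.var(axis=1, ddof=1)
threshold = np.percentile(gene_var, 75)       # keep the top 25% here
keep = gene_var > threshold

filtered = expr[keep]
```

The paper's comparison amounts to swapping the variance for five other summary functions and checking which one selects similar gene sets on the observed and bootstrap samples.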