REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Korean Journal of Applied Statistics
Journal Basic Information
Publisher : The Korean Statistical Society
Volume & Issues
Volume 21, Issue 6 - Dec 2008
Volume 21, Issue 5 - Oct 2008
Volume 21, Issue 4 - Aug 2008
Volume 21, Issue 3 - Jun 2008
Volume 21, Issue 2 - Apr 2008
Volume 21, Issue 1 - Feb 2008
The Robust Estimation Method for Analyzing the Financial Time Series Data
Kim, S. ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 561~569
DOI : 10.5351/KJAS.2008.21.4.561
In this paper, we propose double robust estimators, defined as the solutions of double robust estimating equations, for analyzing and treating outliers in Korean stock market data covering the IMF period. A feasibility study shows that the proposed estimators perform considerably better than the least squares estimators and the conventional robust estimators.
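The abstract does not give the exact form of the double robust estimating equations, so as a hedged illustration of the general idea, the sketch below fits a generic robust M-estimator (Huber weights via iteratively reweighted least squares) and contrasts it with least squares on data containing a gross outlier; all names and data here are illustrative, not from the paper.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Robust regression via iteratively reweighted least squares with
    Huber weights. A generic M-estimator sketch; the paper's double
    robust estimating equations are not reproduced here."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = r / max(s, 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)
    return beta

# Toy check: a single gross outlier drags the OLS slope but barely
# moves the robust fit.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(50)
y[-1] += 50.0                                             # inject an outlier
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_hub = huber_irls(X, y)
print(b_ols, b_hub)
```

The outlier is downweighted to nearly zero once its residual exceeds `c` robust scale units, which is why the robust slope stays near the true value of 3.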
Principal Components Regression in Logistic Model
Kim, Bu-Yong ; Kahng, Myung-Wook ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 571~580
DOI : 10.5351/KJAS.2008.21.4.571
Logistic regression analysis is widely used in customer relationship management and credit risk management. It is well known that maximum likelihood estimation is not appropriate when multicollinearity exists among the regressors. Thus we propose logistic principal components regression to deal with the multicollinearity problem. In particular, a new method is suggested for selecting proper principal components. The selection method is based on the condition index instead of the eigenvalue. When a condition index is larger than the upper cutoff limit, the corresponding principal component is removed from the estimation, and a hypothesis test is employed sequentially to decide whether to eliminate a principal component whose condition index falls between the upper and lower limits. The limits are obtained from a linear model constructed on the basis of conjoint analysis. The proposed method is evaluated by means of the variance of the estimates and the correct classification rate. The results indicate that the proposed method is superior to the existing method in terms of efficiency and goodness of fit.
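As a rough sketch of the condition-index idea, the code below standardizes the regressors, computes condition indices from the eigenvalues, drops components above a single illustrative cutoff (the classical value 30, not the paper's conjoint-analysis limits, and without the sequential test), and fits a logistic model on the retained components by Newton-Raphson. Everything here is a hedged stand-in, not the authors' procedure.

```python
import numpy as np

def logistic_pcr(X, y, kappa_cut=30.0, n_iter=25):
    """Logistic principal components regression: keep only the components
    whose condition index sqrt(lam_max / lam_j) is below kappa_cut.
    Illustrative only; the paper derives upper/lower limits and applies a
    sequential hypothesis test between them."""
    Z = (X - X.mean(0)) / X.std(0)                 # standardize regressors
    lam, V = np.linalg.eigh(Z.T @ Z / len(Z))      # eigenvalues ascending
    kappa = np.sqrt(lam.max() / lam)               # condition indices
    keep = kappa < kappa_cut
    A = np.column_stack([np.ones(len(Z)), Z @ V[:, keep]])
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):                        # Newton-Raphson for logit
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        w = p * (1 - p)
        beta += np.linalg.solve((A.T * w) @ A, A.T @ (y - p))
    return beta, keep

# Collinear toy data: the third regressor is almost the sum of the first two,
# so one near-degenerate component has a huge condition index and is dropped.
rng = np.random.default_rng(1)
n = 200
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([z1, z2, z1 + z2 + 0.01 * rng.standard_normal(n)])
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * z1 - 0.5 * z2)))).astype(float)
beta, keep = logistic_pcr(X, y)
print(keep)
```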
A Study of Developing a Relative-Specialization Index Using Expected Frequency
Nam, Ki-Seong ; Oh, Min-Hong ; Hong, Hyun-Guyn ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 581~588
DOI : 10.5351/KJAS.2008.21.4.581
The purpose of this study is to introduce a relative specialization index, the Nam-Oh-Hong Index(NOHI), and to investigate the regional distribution of occupational specialization using the newly developed index. Compared with the Location Quotient(LQ), the advantage of the NOHI is that it enables simultaneous comparison of inter-regional and intra-regional concentration of employment. The results of the specialization analyses show that Seoul is specialized in management, book-keeping and office related occupations, whereas Busan is specialized in machine and material related occupations.
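The abstract does not state the NOHI formula, so only the baseline it is compared against can be shown: the standard location quotient, which divides an occupation's employment share within a region by its national share. The two-region, two-occupation matrix below is invented for illustration.

```python
import numpy as np

def location_quotient(E):
    """Location quotient LQ[i, j] for occupation i in region j:
    (occupation share within region) / (occupation share nationally).
    E[i, j] = employment count for occupation i in region j."""
    region_share = E / E.sum(axis=0)           # occupation mix within region
    national_share = E.sum(axis=1) / E.sum()   # occupation mix nationally
    return region_share / national_share[:, None]

E = np.array([[30.0, 10.0],    # management-type jobs: Seoul-like vs Busan-like
              [10.0, 30.0]])   # machine-type jobs
LQ = location_quotient(E)
print(LQ)   # LQ > 1 marks occupations over-represented in a region
```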
Comparison of Multinomial Logit and Logistic Regression on Disability Pensioners' Characteristics
Kim, Mi-Jung ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 589~602
DOI : 10.5351/KJAS.2008.21.4.589
This article studies disability pensioners' characteristics with multinomial logit and logistic regression models. Seven factors are examined as to whether each is reflected in the degree of disability in the disability pension. By incorporating the multinomial logit and logistic regression models, the effectiveness and characteristics of the seven factors with respect to the degree of disability are investigated. The results show that all seven factors are significant for the degree of disability; among them, five factors (age, sex, type of coverage, type of category, and insured duration) show a trend in the degree of disability, while the other two (cause of disability and class of standard monthly income) do not. These results may be useful for disability pension management.
Bayesian Analysis for the Zero-inflated Regression Models
Jang, Hak-Jin ; Kang, Yun-Hee ; Lee, S. ; Kim, Seong-W. ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 603~613
DOI : 10.5351/KJAS.2008.21.4.603
We often encounter situations in which discrete count data have a large proportion of zeros. In this case, it is not appropriate to analyze the data with standard regression models such as the Poisson or negative binomial regression models. In this article, we consider Bayesian analysis for two commonly used models: the zero-inflated Poisson and zero-inflated negative binomial regression models. We use the Bayes factor as a model selection tool, and computation proceeds via Markov chain Monte Carlo methods. Crash count data are analyzed to support the theoretical results.
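The zero-inflated Poisson likelihood that any such Bayesian analysis builds on is easy to write down: a zero arises either structurally (with probability pi) or from the Poisson part. The sketch below implements the log-pmf and recovers the parameters of simulated zero-heavy counts by a small grid search over the likelihood; the article's MCMC sampling and Bayes factor computation are not reproduced.

```python
import numpy as np

def zip_logpmf(y, lam, pi):
    """Log-pmf of the zero-inflated Poisson: with probability pi the count
    is a structural zero, otherwise it is Poisson(lam)."""
    logfact = np.concatenate([[0.0],
                              np.cumsum(np.log(np.arange(1, y.max() + 1)))])
    pois = -lam + y * np.log(lam) - logfact[y]       # Poisson log-pmf
    return np.where(y == 0,
                    np.log(pi + (1 - pi) * np.exp(-lam)),
                    np.log(1 - pi) + pois)

# Simulated counts with excess zeros (true pi = 0.3, lam = 2.0),
# recovered by maximizing the ZIP log-likelihood over a coarse grid.
rng = np.random.default_rng(2)
y = np.where(rng.random(2000) < 0.3, 0, rng.poisson(2.0, 2000))
grid = [(p, l) for p in np.linspace(0.05, 0.6, 12)
               for l in np.linspace(1.0, 3.0, 21)]
pi_hat, lam_hat = max(grid, key=lambda t: zip_logpmf(y, t[1], t[0]).sum())
print(pi_hat, lam_hat)
```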
A Note on Sometimes Pooling Rules
Lim, Yong-Bin ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 615~620
DOI : 10.5351/KJAS.2008.21.4.615
In engineering experiments, `Sometimes Pooling Rules` for removing insignificant terms from the model have been used to increase the power of detecting small main effects when a preliminary test declares the higher order interaction effects to be insignificant. In this note, we review the sometimes pooling rules in the literature and study the probability that the length of the 95% confidence interval for the comparison of two independent samples is shorter than that of the paired comparison, at various significance levels of the preliminary test and for a small number of blocks n in [2, 13], given that the block effects are pooled into the error term. This study supports that sometimes pooling improves the power for detecting main effects.
A Generalized Marginal Logit Model for Repeated Polytomous Response Data
Choi, Jae-Sung ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 621~630
DOI : 10.5351/KJAS.2008.21.4.621
This paper discusses how to construct a generalized marginal logit model for analyzing repeated polytomous response data when some factors are applied as treatments to larger experimental units and time is applied, as a repeated measures factor, to a smaller experimental unit. Thus two different experimental unit sizes are considered. Weighted least squares(WLS) methods are used for estimating the fixed effects in the suggested model.
Accelerated Lifetime Data Analysis Using Quantile Regression
Roh, Chee-Youn ; Kim, Hee-Jeong ; Na, Myung-Hwan ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 631~638
DOI : 10.5351/KJAS.2008.21.4.631
An accelerated lifetime test is a method for estimating lifetime quality characteristics under operating conditions from accelerated lifetime data obtained under accelerated stress. In this paper we propose an estimation method for accelerated lifetime data using quantile regression. We apply the method to real data with the Arrhenius and inverse power models.
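The core of quantile regression is the check (pinball) loss. As a minimal hedged sketch, the code below verifies, for a location-only model, that minimizing the pinball loss over a grid recovers the empirical q-quantile of toy lifetimes; a full accelerated-life fit would instead regress log-lifetime on a stress transform (e.g. 1/T for the Arrhenius model), which is not attempted here.

```python
import numpy as np

def pinball(u, q):
    """Check (pinball) loss of quantile regression: q*u for u >= 0,
    (q - 1)*u otherwise."""
    return np.where(u >= 0, q * u, (q - 1) * u)

# Toy lifetimes; we target a lower quantile, the usual focus in
# reliability work.
rng = np.random.default_rng(3)
t = rng.weibull(1.5, 500) * 100.0
q = 0.1
cand = np.linspace(t.min(), t.max(), 2000)
loss = [pinball(t - c, q).sum() for c in cand]
c_star = cand[int(np.argmin(loss))]
print(c_star, np.quantile(t, q))   # the two nearly coincide
```

This is exactly why quantile regression estimates conditional quantiles: replacing the constant `c` with a linear predictor in the stress variable gives the regression version of the same minimization.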
A Development of a Tailored Follow up Management Model Using the Data Mining Technique on Hypertension
Park, Il-Su ; Yong, Wang-Sik ; Kim, Yu-Mi ; Kang, Sung-Hong ; Han, Jun-Tae ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 639~647
DOI : 10.5351/KJAS.2008.21.4.639
This study used knowledge discovery and data mining algorithms to develop a tailored hypertension follow-up management model - a hypertension care predictive model and a hypertension care compliance segmentation model - using the Korea National Health Insurance Corporation database(the insureds' screening and health care benefit data). The predictive power of the data mining algorithms was validated by comparing the performance of logistic regression, decision tree, and ensemble techniques. On the basis of internal and external validation, the logistic regression method performed best among the three techniques for the hypertension care predictive model, while the hypertension care compliance segmentation model was developed by decision tree analysis. The study also identified several factors affecting the onset of hypertension from the screening data. These representative results on the occurrence and care of hypertension should contribute to building a national hypertension follow-up management system in the near future.
Developing the Index of Foodborne Disease Occurrence
Choi, Kook-Yeol ; Kim, Byung-Soo ; Bae, Wha-Soo ; Jung, Woo-Seok ; Cho, Young-Joon ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 649~658
DOI : 10.5351/KJAS.2008.21.4.649
As the eating-out business makes rapid progress and most schools and firms serve meals, foodborne disease has occurred increasingly, and many studies and policies have been directed at its prevention. In Korea, the existing foodborne disease index is based on bacterial growth rates as a function of temperature and provides information about the danger level of foodborne disease, but the gap between the actual occurrences and the predicted danger level has been pointed out. This study develops a new index of foodborne disease occurrence based on a log-linear model, using foodborne disease occurrence data and meteorological data for the last three years. A comparison between the new index and the existing index showed that the new index explains foodborne disease occurrence better.
Application of Multiple Imputation Method in Analyzing Data with Missing Continuous Covariates
Ghasemizadeh Tamar, S. ; Ganjali, M. ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 659~664
DOI : 10.5351/KJAS.2008.21.4.659
Missing continuous covariates are pervasive in the use of generalized linear models for medical data. Multiple imputation is the most common and easiest method of dealing with missing covariate data, but there are serious caveats in using it, and care is needed to make the imputed values proper. In this paper, proper imputation from the posterior predictive distribution is developed for implementation with arbitrary priors. We use the empirical distribution of the posterior to approximate the posterior predictive distribution and sample from it. This method is preferable to our previously presented imputation method, which uses a full model to impute missing values using available software. The proposed methods are implemented on glucocorticoid data.
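A common concrete way to approximate "sample from the posterior predictive via an empirical distribution" is to bootstrap the complete cases (an empirical stand-in for posterior parameter draws), refit the imputation model on each bootstrap sample, and draw the missing values from the fitted predictive. The sketch below does this for a continuous covariate x with a fully observed auxiliary z; it is a generic hedged illustration, not the authors' exact algorithm.

```python
import numpy as np

def impute_m(x, z, m=5, seed=0):
    """Draw m multiple imputations of the missing entries of x.
    Each imputation: bootstrap the complete cases, fit x ~ z by least
    squares, then sample from the fitted normal predictive distribution.
    A generic approximate-proper-imputation sketch."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(x)
    mis = ~obs
    draws = []
    for _ in range(m):
        idx = rng.choice(np.flatnonzero(obs), obs.sum(), replace=True)
        A = np.column_stack([np.ones(len(idx)), z[idx]])
        coef, *_ = np.linalg.lstsq(A, x[idx], rcond=None)
        sigma = np.std(x[idx] - A @ coef)          # residual scale
        mu = coef[0] + coef[1] * z[mis]
        draws.append(mu + sigma * rng.standard_normal(mis.sum()))
    return np.array(draws)

# Toy data: x depends linearly on z; the first 50 x-values are missing.
rng = np.random.default_rng(5)
z = rng.standard_normal(300)
x = 2.0 + 3.0 * z + 0.5 * rng.standard_normal(300)
x[:50] = np.nan
draws = impute_m(x, z)
pooled = draws.mean(axis=0)
print(np.abs(pooled - (2.0 + 3.0 * z[:50])).mean())
```

Refitting on a fresh bootstrap sample each time is what makes the imputations "proper" in Rubin's sense: the between-imputation spread reflects parameter uncertainty, not just residual noise.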
Small Area Estimation via Generalized Estimating Equations and the Panel Analysis of Unemployment Rates
Yeo, In-Kwon ; Son, Kyoung-Jin ; Kim, Young-Won ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 665~674
DOI : 10.5351/KJAS.2008.21.4.665
Most existing studies of small area estimation deal with the estimation of parameters based on cross-sectional data. However, since many official statistics are collected repeatedly at regular intervals (monthly, quarterly, or yearly), an alternative model is needed that can handle the characteristics of such data. In this paper, we investigate generalized estimating equations, which can model time-dependency among response variables and are useful for analyzing repeated measurement or longitudinal data. We compare the generalized linear model with the generalized estimating equation approach through the estimation of unemployment rates in 25 areas of Gyeongsangnam-do and Ulsan. The data consist of employment status and some covariates from January to December 2005.
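A minimal way to see what the estimating equations add over a GLM is the Gaussian identity-link case, where GEE with an exchangeable working correlation reduces to iterated generalized least squares plus a moment update of the within-cluster correlation. The toy sketch below uses 25 clusters of 12 observations (echoing 25 areas over 12 months), but with a continuous response; the paper's binary employment status would need a logit link, which is omitted here.

```python
import numpy as np

def gee_exchangeable(Xs, ys, n_iter=20):
    """GEE with identity link and exchangeable working correlation for a
    Gaussian response: solve sum_i Xi' Ri^-1 (yi - Xi b) = 0, alternating
    with a moment estimate of the common within-cluster correlation rho."""
    p = Xs[0].shape[1]
    beta, rho = np.zeros(p), 0.0
    for _ in range(n_iter):
        lhs, rhs = np.zeros((p, p)), np.zeros(p)
        for X, y in zip(Xs, ys):
            R = np.full((len(y), len(y)), rho)
            np.fill_diagonal(R, 1.0)               # working correlation
            Ri = np.linalg.inv(R)
            lhs += X.T @ Ri @ X
            rhs += X.T @ Ri @ y
        beta = np.linalg.solve(lhs, rhs)
        res = [y - X @ beta for X, y in zip(Xs, ys)]
        s2 = np.var(np.concatenate(res))
        num = sum(r.sum() ** 2 - (r * r).sum() for r in res)
        den = s2 * sum(len(r) * (len(r) - 1) for r in res)
        rho = min(max(num / den, 0.0), 0.95)       # moment update of rho
    return beta, rho

# Toy panel: each area has a shared random effect, inducing exchangeable
# within-area correlation across the 12 monthly observations.
rng = np.random.default_rng(6)
Xs, ys = [], []
for _ in range(25):
    x = rng.standard_normal(12)
    Xs.append(np.column_stack([np.ones(12), x]))
    ys.append(1.0 + 2.0 * x + rng.standard_normal() + rng.standard_normal(12))
beta, rho = gee_exchangeable(Xs, ys)
print(beta, rho)   # recovers the slope and a positive within-area correlation
```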
A Complex Sampling Design for the Estimation of Korean Livestock Production Cost
Kim, Soo-Taek ; Kim, Young-Won ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 675~694
DOI : 10.5351/KJAS.2008.21.4.675
We propose a new sampling design for the Korean Livestock Production Cost Survey. In this design, the survey population is derived from the 2005 Agricultural Census of Korea. The coefficient of variation(CV) is estimated from the current livestock production cost survey data, and the estimated CVs are used to find the optimal sample size that satisfies a predetermined precision of estimation. To save enumeration cost, agriculture enumeration districts are used as the primary sampling unit(psu). The final sample is selected by double sampling. We also propose an estimator that can reflect changes in the population of livestock production households.
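The step from an estimated CV to a sample size follows a textbook formula (the survey's double-sampling allocation is richer than this): for a target relative error e at roughly 95% confidence, take n0 = (z * CV / e)^2 and apply the finite population correction. The CV, target error, and population size below are made-up illustration values, not figures from the paper.

```python
import math

def sample_size(cv, rel_error, N, z=1.96):
    """Sample size for a target relative error at ~95% confidence given an
    estimated coefficient of variation, with finite population correction.
    A textbook single-stage formula, shown only to illustrate the idea."""
    n0 = (z * cv / rel_error) ** 2          # infinite-population size
    return math.ceil(n0 / (1 + n0 / N))     # finite population correction

print(sample_size(cv=0.8, rel_error=0.05, N=50000))   # → 965
```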
Regular Polyprism Parallel Coordinate Plot as a Statistical Graphics Tool
Jang, Dae-Heung ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 695~704
DOI : 10.5351/KJAS.2008.21.4.695
The parallel coordinate plot is a graphical technique for plotting multivariate data. It overcomes the visualization problem of the Cartesian coordinate system for dimensions greater than four. However, different orderings of the coordinate axes in the parallel coordinate plot of the same data may lead to different interpretations. Hence, the regular polyprism parallel coordinate plot can be used as an alternative that overcomes the variable arrangement problem of the parallel coordinate plot.
Detection and Forecast of Climate Change Signal over the Korean Peninsula
Sohn, Keon-Tae ; Lee, Eun-Hye ; Lee, Jeong-Hyeong ;
Korean Journal of Applied Statistics, volume 21, issue 4, 2008, Pages 705~716
DOI : 10.5351/KJAS.2008.21.4.705
The objectives of this study are the detection and forecast of the climate change signal in annual mean surface temperature data generated by the MRI/JMA CGCM over the Korean Peninsula. The MRI/JMA CGCM outputs consist of control run data(an experiment with no change in CO2 concentration) and scenario run data(a 1%/year CO2 increase experiment up to quadrupling) over 142 years for surface temperature and precipitation. ECMWF reanalysis data over 43 years are used as observations. All data share the same spatial structure of 42 grid points. Two statistical models, the Bayesian fingerprint method and the regression model with autoregressive errors(AUTOREG model), are applied separately to detect the climate change signal. Forecasts up to 2100 are generated by the estimated AUTOREG model, only for the grid points where a signal is detected.
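The regression-with-autoregressive-errors idea can be sketched with the classic Cochrane-Orcutt iteration: estimate the AR(1) coefficient from the residuals, quasi-difference the data, and refit. This is a simple generic stand-in for the paper's AUTOREG model (the Bayesian fingerprint step is not reproduced), applied here to an invented linear trend with AR(1) noise over 142 points to echo the 142-year run.

```python
import numpy as np

def cochrane_orcutt(X, y, n_iter=20):
    """Regression with AR(1) errors via Cochrane-Orcutt iteration:
    alternate between estimating the AR(1) coefficient from residuals
    and refitting on quasi-differenced data."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # OLS start
    rho = 0.0
    for _ in range(n_iter):
        r = y - X @ beta
        rho = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])    # AR(1) coefficient
        Xs = X[1:] - rho * X[:-1]                     # quasi-differencing
        ys = y[1:] - rho * y[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta, rho

# Toy series: linear trend plus AR(1) noise (true rho = 0.6).
rng = np.random.default_rng(4)
n = 142
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
X = np.column_stack([np.ones(n), np.arange(n) / n])
y = 10.0 + 2.0 * X[:, 1] + 0.5 * e
beta, rho = cochrane_orcutt(X, y)
print(beta, rho)
```

Once fitted, forecasting proceeds by extrapolating the trend and propagating the AR(1) error recursion forward, which is the mechanism behind forecasts "up to 2100" from such a model.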