Korean Journal of Applied Statistics
Journal Basic Information
Publisher : The Korean Statistical Society
Volume & Issues
Volume 23, Issue 6 - Dec 2010
Volume 23, Issue 5 - Oct 2010
Volume 23, Issue 4 - Aug 2010
Volume 23, Issue 3 - Jun 2010
Volume 23, Issue 2 - Apr 2010
Volume 23, Issue 1 - Feb 2010
Technical Improvements of the Projection of Household Health Care Expenditure
Rho, Sang-Youn ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 1~11
DOI : 10.5351/KJAS.2010.23.1.001
This study aims to provide a more reliable and efficient method for estimating the number of households per family scale (NHF) when projecting household health care expenditure (HHE). The paper presents three findings. First, because the NHF projections used in the national health expenditure (NHE) projection process do not reflect recent socio-demographic trends, the earlier projection results suffer from serious problems of reliability and policy usefulness. Second, the resulting HHE projections are likely underestimated relative to actual expenditure. Third, to obtain more reliable and efficient HHE estimates, NHF estimates that reflect socio-demographic trends must be used in the projection. As an alternative, the NHF and its rates of increase or decrease, which are regularly surveyed and published by KOSIS, should be used in the projection process.
Optimal Thresholds from Mixture Distributions
Hong, Chong-Sun ; Joo, Jae-Seon ; Choi, Jin-Soo ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 13~28
DOI : 10.5351/KJAS.2010.23.1.013
Assuming a mixture distribution for credit evaluation studies, we discuss threshold estimation methods that minimize the errors of predicting default borrowers as non-defaults and of classifying non-defaults as defaults. A method using statistical hypothesis tests, the most powerful test and the generalized likelihood ratio test, is proposed to estimate a threshold for the probability density functions defined on the score random variable, with a parameter space consisting of only two elements, the default and non-default states. In addition, other optimal thresholds that maximize classification accuracy measures, the accuracy and the true rate for ROC and CAP curves, are obtained as equations involving these probability density functions. The three kinds of optimal thresholds, based on hypothesis testing, the accuracy, and the true rate, are obtained from normal random samples with various means and variances. The sums of the type I and type II errors corresponding to each optimal threshold are computed and compared. Finally, we discuss their efficiency and draw conclusions.
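As a hedged illustration of the idea (not the authors' code), the following Python sketch locates the threshold that minimizes the sum of the type I and type II errors when the default and non-default score densities are assumed normal; all parameter values are invented for the example.

```python
# A minimal sketch: grid search for the threshold minimizing
# type I + type II errors under two assumed normal score densities.
import numpy as np
from scipy.stats import norm

mu_d, sd_d = 0.0, 1.0   # assumed default-borrower score distribution
mu_n, sd_n = 2.0, 1.0   # assumed non-default score distribution

grid = np.linspace(-4, 6, 10001)
# Type I error: a default is classified as non-default (score above threshold).
type1 = 1 - norm.cdf(grid, mu_d, sd_d)
# Type II error: a non-default is classified as default (score below threshold).
type2 = norm.cdf(grid, mu_n, sd_n)

t_opt = grid[np.argmin(type1 + type2)]
print(f"threshold minimizing type I + type II errors: {t_opt:.3f}")
```

With equal variances the minimizer falls where the two densities cross, here the midpoint 1.0, which gives a quick sanity check.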
The With-Profits Strategies for Life Insurance Companies -Focused on the Case and Empirical Analysis of Life Insurance Companies in the UK-
Jung, Se-Chang ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 29~39
DOI : 10.5351/KJAS.2010.23.1.029
The purpose of this paper is to analyse the advantages of with-profits policies and to make proposals for invigorating the with-profits business. Data on life insurance companies in the UK are used, and correlation and regression analyses are employed. The results and implications are summarized as follows. Firstly, with-profits policies increase premium income, and there is no positive relationship between with-profits policies and operating costs; companies that are financially sound sell more with-profits policies than less solvent ones. Secondly, with regard to implications for insurance companies, they can make full use of with-profits policies for marketing purposes and as the main product in their product portfolios. Finally, with regard to implications for policyholders, with-profits policies are not expensive in comparison with without-profits policies, and they provide benefits to policyholders on a solvency basis.
A Study on Outlier Detection Method for Financial Time Series Data
Ha, M.H. ; Kim, S. ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 41~47
DOI : 10.5351/KJAS.2010.23.1.041
In this paper, we evaluate the performance of outlier detection methods based on the GARCH model. We first introduce the GARCH model and methods of outlier detection within it. Results from a small simulation study and from real KOSPI data show that the GARCH-based outlier detection method outperforms the traditional method.
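The abstract gives no code; as a rough sketch of model-based detection, the following assumes the Python arch package, fits a GARCH(1,1), and flags observations with extreme standardized residuals. The cutoff of 4 and the planted outlier are illustrative assumptions, not the paper's test statistic.

```python
# A minimal sketch: flag outliers as observations with extreme
# standardized residuals under a fitted GARCH(1,1).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)   # placeholder return series
returns[500] += 8.0                   # planted additive outlier

fit = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
std_resid = fit.resid / fit.conditional_volatility
outliers = np.where(np.abs(std_resid) > 4)[0]   # illustrative cutoff
print("flagged indices:", outliers)
```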
Evidence of Taylor Property in Absolute-Value-GARCH Processes for Korean Financial Time Series
Baek, J.S. ; Hwang, S.Y. ; Choi, M.S. ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 49~61
DOI : 10.5351/KJAS.2010.23.1.049
The time series dependence of financial volatility is frequently measured by the autocorrelation function of power-transformed absolute returns. The Taylor property refers to the phenomenon that the autocorrelations of absolute returns are larger than those of squared returns. Haas (2009) developed a simple method for detecting the Taylor property in the absolute-value-GARCH(1,1) (AVGARCH(1,1)) model. In this article, we fit the AVGARCH(1,1) model to various Korean financial time series and observe the Taylor property.
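A small simulation makes the comparison concrete. The sketch below simulates the standard AVGARCH(1,1) recursion, sigma_t = omega + alpha*|r_{t-1}| + beta*sigma_{t-1}, with illustrative parameters and compares the lag-1 autocorrelations of absolute and squared returns; it illustrates the quantity being compared, not Haas's detection method.

```python
# A minimal sketch: simulate AVGARCH(1,1) and compare lag-1
# autocorrelations of |r_t| and r_t^2 (the Taylor-property comparison).
import numpy as np

def acf1(x):
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()

rng = np.random.default_rng(42)
n, omega, alpha, beta = 20000, 0.1, 0.1, 0.85
sigma, r = np.empty(n), np.empty(n)
sigma[0] = omega / (1 - alpha - beta)   # rough initialization
for t in range(n):
    r[t] = sigma[t] * rng.standard_normal()
    if t + 1 < n:   # sigma_{t+1} = omega + alpha*|r_t| + beta*sigma_t
        sigma[t + 1] = omega + alpha * abs(r[t]) + beta * sigma[t]

print("ACF(1) of |r| :", acf1(np.abs(r)))
print("ACF(1) of r^2 :", acf1(r ** 2))
```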
The Analysis of Factors which Affect Business Survey Index Using Regression Trees
Chang, Young-Jae ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 63~71
DOI : 10.5351/KJAS.2010.23.1.063
Business entrepreneurs reflect their views of domestic and foreign economic activity in the operation of their businesses. Decisions, forecasts, and plans based on their economic sentiment affect business operations such as production, investment, and hiring, and consequently affect the condition of the national economy. The business survey index (BSI) is compiled to capture entrepreneurs' economic sentiment for business condition analysis. BSI has been used as an important variable in short-term forecasting models for business cycle analysis, especially during periods of extreme business fluctuations. The recent financial crisis caused extreme business fluctuations similar to those of the currency crisis at the end of 1997 and revived the importance of BSI as a variable for economic forecasting. In this paper, the meaning of BSI as an economic sentiment index is reviewed, and a GUIDE regression tree is constructed to find the factors that affect BSI. The results show that variables related to the stability of financial markets, such as the KOSPI (Korea Composite Stock Price Index) and the exchange rate, as well as the manufacturing operation ratio and consumer goods sales, are the main factors affecting entrepreneurs' economic sentiment.
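GUIDE itself is a standalone program; as a rough stand-in, the sketch below fits an ordinary CART regression tree (sklearn) to simulated placeholder series carrying the variable names from the abstract. GUIDE additionally corrects the variable-selection bias that CART exhibits, so this is only a structural illustration.

```python
# A minimal sketch: a regression tree relating BSI to candidate
# explanatory series. Data and coefficients are placeholders.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
n = 120   # e.g., monthly observations
df = pd.DataFrame({
    "kospi": rng.normal(0, 1, n),
    "exchange_rate": rng.normal(0, 1, n),
    "operation_ratio": rng.normal(0, 1, n),
    "consumer_goods_sales": rng.normal(0, 1, n),
})
bsi = 100 + 8 * df["kospi"] - 6 * df["exchange_rate"] + rng.normal(0, 2, n)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(df, bsi)
print(export_text(tree, feature_names=list(df.columns)))
```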
Nonstationary Time Series and Missing Data
Shin, Dong-Wan ; Lee, Oe-Sook ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 73~79
DOI : 10.5351/KJAS.2010.23.1.073
Missing values for unit root processes are imputed by the most recent observations. Treating the imputed observations as if they were complete, semiparametric unit root tests are extended to missing-value situations. An invariance principle for the partial sum process of the imputed observations is also established under mild conditions, which shows that the extended tests have the same limiting null distributions as those based on complete observations. The proposed tests are illustrated by analyzing an unequally spaced real data set.
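A minimal sketch of the imputation scheme: carry the most recent observation forward and test the completed series for a unit root. The ADF test below stands in for the semiparametric tests the paper extends; the data are a simulated random walk with values removed at random.

```python
# A minimal sketch: last-observation-carried-forward imputation,
# then a unit root test on the completed series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(300))   # random walk (unit root)
y_obs = pd.Series(y)
y_obs[rng.choice(300, size=60, replace=False)] = np.nan   # 20% missing
y_imputed = y_obs.ffill().dropna()        # most recent observation

stat, pvalue, *_ = adfuller(y_imputed)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
```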
The Monetary Approach to Exchange Rate Determination for Korea
Han, Kyue-Sook ; Oh, Yu-Jin ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 81~93
DOI : 10.5351/KJAS.2010.23.1.081
Korea experienced a financial crisis in 1997. Since then, the Korean economy has undergone severe changes, including the move from the market average exchange rate system to the free floating exchange rate system in 1997, and exchange rate fluctuations have widened. We empirically analyze the determination of the won/dollar exchange rate based on the monetary approach. We employ the Lucas (1982), Bilson (1978) and Frankel (1979) models and consider some mixed models. We use monthly data on the money supply, income, interest rates, the capital balance, the terms of trade, and the yen/dollar exchange rate over the period 1990-2009. We compare the empirical results of cointegration tests and the vector error correction model (VECM) across the two regimes before and after the Korean financial crisis. The won/dollar exchange rate has a long-run relationship with the variables of the monetarist models in both regimes. For the post-crisis regime, the Bilson model performs best, and the long-run variables also affect the short-run dynamics of the won/dollar exchange rate.
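The empirical toolkit here is standard; a sketch using statsmodels shows a Johansen cointegration test followed by a VECM fit. The three simulated series are placeholders for the paper's monthly won/dollar and monetary-model variables, and the lag order and cointegration rank are assumptions.

```python
# A minimal sketch: Johansen cointegration test, then a VECM fit,
# on simulated series sharing one stochastic trend.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(7)
common = np.cumsum(rng.standard_normal(240))   # shared stochastic trend
data = pd.DataFrame({
    "log_exchange_rate": common + 0.3 * rng.standard_normal(240),
    "money_supply_diff": common + 0.3 * rng.standard_normal(240),
    "income_diff": rng.standard_normal(240).cumsum(),
})

johansen = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace statistics:", johansen.lr1)

vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
print(vecm.alpha)   # adjustment (error-correction) coefficients
```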
A Review of Genetic Association Analyses in Population and Family Based Data: Methods and Software
Lee, Hyo-Jung ; Kim, Min-Ji ; Park, Mi-Ra ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 95~111
DOI : 10.5351/KJAS.2010.23.1.095
Recently, there have been many studies of disease-gene association using SNPs and haplotypes, and statistical methods and tools for various types of data have been developed by many researchers. However, no unified software handles most of the major analyses, and the methods and ways of handling data differ considerably across software packages, so it is not easy for researchers to choose the proper software. In this study, we divide the analysis procedure into three steps: preliminary analysis, population-based analysis, and family-based analysis. We review the statistical methods for each step and compare the features of FBAT, SAS/Genetics, SAGE, and R as the major integrated software tools for genetic studies.
Interpreting Mixtures Using Allele Peak Areas
Hong, Yu-Lim ; Lee, Hyo-Jung ; Lee, Jae-Won ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 113~121
DOI : 10.5351/KJAS.2010.23.1.113
A mixture is a DNA profile that contains material from more than one contributor, which is especially common in rape cases. For interpreting DNA mixtures, methods based on enumerating the complete set of possible genotypes that may have generated the mixed DNA profile were studied first. More recently, methods that use peak area information to calculate likelihood ratios have been suggested. This study concerns the analysis and interpretation of mixed forensic stains using quantitative peak area information, and extends the method of forensic inference to material from three or more contributors. Finally, a numerical example is outlined.
Analysis of Daily Distress Symptoms: Threshold Estimation after Isolating the Distress Group
Lee, Won-Nyung ; Song, Hae-Hiang ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 123~138
DOI : 10.5351/KJAS.2010.23.1.123
After selecting a group of women with premenstrual syndrome based on 28 days of daily distress scores, one needs to estimate a threshold for the change of symptoms, which would be useful for clinicians' diagnoses in hospitals. However, a test of whether a change has occurred must precede the estimation of the threshold. In this paper, we apply parametric and nonparametric testing methods to example data obtained from a group of women. The nonparametric method does not assume any distributional form for the distress scores, while the parametric method is based on normal distributions around linear regression lines. The optimal situations for the two methods therefore differ, and we assess this with a simulation study.
Individual Bioequivalence Tests under 3 X 2 Design
Jung, Gyu-Jin ; Lim, Nam-Kyoo ; Park, Sang-Gue ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 139~150
DOI : 10.5351/KJAS.2010.23.1.139
In recent years, more generic drug products have become available. The current regulation for assessing the bioequivalence of two drug formulations is based on the concept of average bioequivalence. This approach has been shown to be insufficient for assessing the switchability of two drug formulations, and the US FDA has adopted individual bioequivalence as one of its bioequivalence criteria since 2001. The US FDA recommends that individual bioequivalence be assessed based on a 2×4 crossover design, while a 2×3 crossover design may be used as an alternative design to reduce the length and cost of the study. In this paper, a statistical procedure for assessing individual bioequivalence under 3×2 crossover designs is proposed, and some statistical issues are discussed in comparison with the 2×3 crossover design and the 2×3 extra-reference design through simulation studies.
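For reference, the aggregate individual-bioequivalence criterion from the 2001 US FDA guidance can be evaluated as below; the variance estimates are illustrative assumptions, and a full analysis would add the linearized upper confidence bound that this sketch omits.

```python
# A minimal sketch: point estimate of the FDA aggregate
# individual-bioequivalence (IBE) criterion, reference-scaled.
delta = 0.05   # assumed mean difference mu_T - mu_R (log scale)
s2_D  = 0.01   # assumed subject-by-formulation interaction variance
s2_WT = 0.04   # assumed within-subject variance, test formulation
s2_WR = 0.05   # assumed within-subject variance, reference formulation
s2_W0 = 0.04   # regulatory constant sigma_W0^2 = 0.2^2

theta = (delta**2 + s2_D + s2_WT - s2_WR) / max(s2_WR, s2_W0)
print(f"IBE criterion estimate: {theta:.4f} (regulatory limit 2.4948)")
```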
Support Vector Machine and Improved Adaptive Median Filtering for Impulse Noise Removal from Images
Lee, Dae-Geun ; Park, Min-Jae ; Kim, Jeong-Uk ; Kim, Do-Yoon ; Kim, Dong-Wook ; Lim, Dong-Hoon ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 151~165
DOI : 10.5351/KJAS.2010.23.1.151
Images are often corrupted by impulse noise due to noisy sensors or channel transmission errors. A filter based on an SVM (Support Vector Machine) and improved adaptive median filtering is proposed to preserve image details while suppressing impulse noise for image restoration. Our approach uses an SVM impulse detector to judge whether an input pixel is noise; if a pixel is detected as noisy, the improved adaptive median filter replaces it. To demonstrate the performance of the proposed filter, extensive simulation experiments were conducted under both salt-and-pepper and random-valued impulse noise models, comparing our method with many other well-known filters using a qualitative measure and quantitative measures such as PSNR and MAE. Experimental results indicate that the proposed filter performs significantly better than many existing filters.
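A compact sketch of the two-stage idea follows: an SVM flags impulse pixels from simple local features, and flagged pixels are replaced by a window-growing median. The feature choices, the training on the same toy image, and the window cap are illustrative shortcuts, not the authors' detector.

```python
# A minimal sketch: SVM impulse detection + adaptive (window-growing)
# median replacement on a toy image with salt-and-pepper noise.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
clean = rng.integers(80, 176, size=(64, 64)).astype(float)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1                    # 10% impulse noise
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())

def features(img, i, j, w=1):
    win = img[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
    return [abs(img[i, j] - np.median(win)), img[i, j], win.std()]

X = [features(noisy, i, j) for i in range(64) for j in range(64)]
y = mask.ravel().astype(int)          # in practice: train on reference images
svm = SVC(kernel="rbf").fit(X, y)

restored = noisy.copy()
flags = svm.predict(X).reshape(64, 64)
for i, j in zip(*np.nonzero(flags)):
    w = 1
    while True:   # grow the window until the median is not itself extreme
        win = noisy[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
        med = np.median(win)
        if 0.0 < med < 255.0 or w >= 5:
            break
        w += 1
    restored[i, j] = med

print("MAE before:", np.abs(noisy - clean).mean(),
      "after:", np.abs(restored - clean).mean())
```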
Graphical Methods for Evaluating Supersaturated Designs
Kim, Youn-Gil ; Jang, Dae-Heung ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 167~178
DOI : 10.5351/KJAS.2010.23.1.167
Orthogonality is an important property of experimental designs. Supersaturated designs are usually used when the number of factors is large and the number of runs is small, but such designs cannot satisfy orthogonality. Hence, we need means of evaluating the degree of orthogonality of a given supersaturated design. Numerical measures are usually used for this purpose; in this paper, we propose graphical methods for evaluating the degree of orthogonality of supersaturated designs.
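One common numerical summary, and the kind of information a graphical display builds on, can be sketched as follows: for a design matrix of +/-1 columns, compute the pairwise column inner products s_ij, report the average of s_ij^2 (the usual E(s^2) measure), and plot the distribution of |s_ij|. The random design below is a placeholder, not a constructed supersaturated design.

```python
# A minimal sketch: E(s^2) non-orthogonality measure and a simple
# graphical display of the pairwise inner products.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
X = rng.choice([-1, 1], size=(12, 20))   # 12 runs, 20 factors

S = X.T @ X                              # s_ij = x_i' x_j
off = S[np.triu_indices_from(S, k=1)]    # off-diagonal entries
print("E(s^2) =", np.mean(off ** 2))

plt.hist(np.abs(off), bins=range(0, 14, 2))
plt.xlabel("|s_ij|"); plt.ylabel("count")
plt.title("pairwise non-orthogonality")
plt.show()
```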
Clustering Red Wines Using a Miniature Spectrometer of Filter-Array with a Cypress RGB Light Source
Choi, Kyung-Mee ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 179~187
DOI : 10.5351/KJAS.2010.23.1.179
Miniature spectrometers can be applied for various purposes in a wide range of areas. This paper shows how a well-made spectrometer-on-a-chip with a low-performance, low-cost filter-array can be used to recognize types of red wine. Light spectra are processed through the filter-array of the spectrometer after passing through the wine in cuvettes. Without recovering the original target spectrum, pattern recognition methods are introduced to detect the types of wine. A wavelength cross-correlation turns out to be a good distance metric among spectra because it captures their simultaneous movements and is affine invariant. Consequently, a well-designed spectrometer is reliable in terms of its repeatability.
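A sketch of the clustering step under these assumptions: a distance of one minus the maximum normalized cross-correlation over lags, fed to average-linkage hierarchical clustering. The two simulated spectral shapes are placeholders for filter-array outputs from different wines.

```python
# A minimal sketch: cross-correlation distance + hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(9)
x = np.arange(128.0)
shape_a = np.exp(-((x - 40) / 6) ** 2) + 0.5 * np.exp(-((x - 90) / 6) ** 2)
shape_b = np.exp(-((x - 64) / 20) ** 2)
spectra = np.array([shape_a + rng.normal(0, 0.05, 128) for _ in range(5)] +
                   [shape_b + rng.normal(0, 0.05, 128) for _ in range(5)])

def xcorr_dist(u, v):
    # 1 - max normalized cross-correlation over lags; the
    # standardization makes it affine invariant.
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    return 1.0 - np.correlate(u, v, mode="full").max() / len(u)

m = len(spectra)
D = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        D[i, j] = D[j, i] = xcorr_dist(spectra[i], spectra[j])

labels = fcluster(linkage(squareform(D), method="average"),
                  t=2, criterion="maxclust")
print("cluster labels:", labels)   # first five vs last five should separate
```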
Randomizing Sequences of Finite Length
Huh, Myung-Hoe ; Lee, Yong-Goo ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 189~196
DOI : 10.5351/KJAS.2010.23.1.189
It is never an easy task to physically randomize a sequence of cards. For instance, the US 1970 draft lottery resulted in social turmoil because the outcome sequence of 366 birthday numbers showed a significant relationship with the input order (Wikipedia, "Draft Lottery 1969", Retrieved 2009/05/01). We are motivated by Laplace's 1825 book titled Philosophical Essay on Probabilities, which says "Suppose that the numbers 1, 2, ..., 100 are placed, according to their natural ordering, in an urn, and suppose further that, after having shaken the urn, to shuffle the numbers, one draws one number. It is clear that if the shuffling has been properly done, each number will have the same chance of being drawn. But if we fear that there are small differences between them depending on the order in which the numbers were put into the urn, we can decrease these differences considerably by placing these numbers in a second urn in the order in which they are drawn from the first urn, and then shaking the second urn to shuffle the numbers. These differences, already imperceptible in the second urn, would be diminished more and more by using a third urn, a fourth urn, &c." (translated by Andrew I. Dale, 1995, Springer, pp. 35-36). Laplace foresaw what would happen to us 150 years later and, even more, suggested a possible tool to handle the problem, but he omitted the detailed arguments for the solution. Thus, in this research note, we write the supplement in modern terms for Laplace. We formulate the problem with a lottery box model to which Markov chain theory can be applied. By applying the Markov chain repeatedly, one obtains the uniform distribution on k states as the stationary distribution. Additionally, we show that the probability of an even number of successes in a binomial distribution with n trials and success probability p approaches 0.5 as n increases to infinity. Our theory is illustrated with the truncated geometric distribution and the US 1970 draft lottery.
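The binomial fact invoked above has a one-line derivation worth recording; for X ~ Bin(n, p) with 0 < p < 1:

```latex
% Even-success probability via the generating-function trick:
\begin{align*}
P(X \text{ even})
  &= \tfrac{1}{2}\sum_{k=0}^{n}\binom{n}{k}p^{k}(1-p)^{n-k}\bigl(1+(-1)^{k}\bigr) \\
  &= \tfrac{1}{2}\Bigl[\bigl(p+(1-p)\bigr)^{n} + \bigl((1-p)-p\bigr)^{n}\Bigr]
   = \frac{1+(1-2p)^{n}}{2},
\end{align*}
% which converges to 1/2 as n -> infinity, since |1-2p| < 1.
```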
Boosting Algorithms for Large-Scale Data and Data Batch Stream
Yoon, Young-Joo ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 197~206
DOI : 10.5351/KJAS.2010.23.1.197
In this paper, we propose boosting algorithms for situations where the data are very large or arrive in batches sequentially over time. In such situations, an ordinary boosting algorithm may be inappropriate because it requires the entire training set to be available at once. To handle large-scale data or data batch streams, we modify AdaBoost and Arc-x4. The modified algorithms give good results on both large-scale data and data batch streams, with or without concept drift, in simulated and real data sets.
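A schematic of the batch-stream idea (not the paper's exact modification of AdaBoost or Arc-x4): each arriving batch trains one weak learner on instances reweighted by the current ensemble's mistakes, so no earlier batch needs to be revisited.

```python
# A minimal sketch: boosting over sequentially arriving data batches.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

def make_batch(n=500):
    X = rng.standard_normal((n, 2))
    return X, (X[:, 0] + X[:, 1] > 0).astype(int)

learners, alphas = [], []

def ensemble_predict(X):
    if not learners:
        return np.zeros(len(X), dtype=int)
    votes = sum(a * (2 * h.predict(X) - 1) for h, a in zip(learners, alphas))
    return (votes > 0).astype(int)

for _ in range(10):                    # batches arriving over time
    X, y = make_batch()
    w = np.where(ensemble_predict(X) != y, 2.0, 1.0)   # upweight mistakes
    w /= w.sum()
    h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    err = max(w[h.predict(X) != y].sum(), 1e-10)
    learners.append(h)
    alphas.append(0.5 * np.log((1 - err) / err))       # AdaBoost-style weight

Xt, yt = make_batch(2000)
print("batch-boosted accuracy:", (ensemble_predict(Xt) == yt).mean())
```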
A Study on Applying Shrinkage Method in Generalized Additive Model
Ki, Seung-Do ; Kang, Kee-Hoon ;
Korean Journal of Applied Statistics, volume 23, issue 1, 2010, Pages 207~218
DOI : 10.5351/KJAS.2010.23.1.207
The generalized additive model (GAM) is a statistical model that resolves most of the problems of the traditional linear regression model. However, overfitting can arise if no method is applied to reduce the number of independent variables, so variable selection methods are needed in generalized additive models. Recently, Lasso-related methods have become popular for variable selection in regression analysis. In this research, we consider the Group Lasso and Elastic Net models for variable selection in GAM and propose an algorithm for finding their solutions. We compare the proposed methods via Monte Carlo simulation and by applying them to auto insurance data from fiscal year 2005. It is shown that the proposed methods give better performance.
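Since group lasso solvers are less standard than plain lasso ones, a sketch of the computational core may help: each variable's spline basis columns form one group, and proximal gradient descent zeroes out whole groups via block soft-thresholding. The basis choice, penalty level, and data are illustrative assumptions, not the paper's algorithm.

```python
# A minimal sketch: group lasso for an additive model via proximal
# gradient descent (ISTA) with block soft-thresholding.
import numpy as np

rng = np.random.default_rng(11)
n, p, K = 300, 5, 6                     # samples, variables, basis size
Xraw = rng.uniform(-1, 1, (n, p))
y = np.sin(np.pi * Xraw[:, 0]) + Xraw[:, 1] ** 2 + rng.normal(0, 0.2, n)

knots = np.linspace(-0.8, 0.8, K - 2)
def basis(x):   # truncated power basis: x, x^2, (x - k)_+^2
    return np.column_stack([x, x ** 2] +
                           [np.maximum(x - k, 0) ** 2 for k in knots])

X = np.column_stack([basis(Xraw[:, j]) for j in range(p)])
X -= X.mean(axis=0)                     # center; intercept handled separately
y = y - y.mean()

lam = 8.0                               # assumed group-lasso penalty level
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of gradient
beta = np.zeros(p * K)
for _ in range(3000):
    beta -= step * (X.T @ (X @ beta - y))
    for j in range(p):                  # block soft-thresholding per variable
        g = slice(j * K, (j + 1) * K)
        nrm = np.linalg.norm(beta[g])
        beta[g] *= max(0.0, 1.0 - step * lam / (nrm + 1e-12))

active = [j for j in range(p) if np.linalg.norm(beta[j*K:(j+1)*K]) > 1e-8]
print("variables with nonzero groups:", active)   # signal variables: 0 and 1
```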