• Title/Summary/Keyword: variable selection


Validation Comparison of Credit Rating Models Using Box-Cox Transformation

  • Hong, Chong-Sun; Choi, Jeong-Min
    • Journal of the Korean Data and Information Science Society / v.19 no.3 / pp.789-800 / 2008
  • Current credit evaluation models based on financial data use smoothed estimates of default ratios transformed from each financial variable. In this work, some problems of the credit evaluation models developed by financial experts are discussed, and improved credit evaluation models are proposed based on the stepwise variable selection method and a Box-Cox transformation of the data, whose distributions are heavily skewed to the right. After comparing goodness-of-fit tests of these models, the validation of credit evaluation models built with statistical methods such as stepwise variable selection and the Box-Cox transformation is explained.
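
The pipeline this abstract describes (Box-Cox transformation of right-skewed variables followed by stepwise selection) can be sketched roughly as follows; the synthetic data, variable counts, and the use of scikit-learn's forward selector are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: Box-Cox-transform right-skewed "financial ratios", then run
# forward stepwise variable selection for a logistic default model.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 6))   # right-skewed predictors
y = (X[:, 0] + rng.normal(size=500) > 1.5).astype(int)  # synthetic default flag

# Box-Cox requires strictly positive data; transform each column separately.
X_bc = np.column_stack([stats.boxcox(X[:, j])[0] for j in range(X.shape[1])])

# Forward stepwise selection (a stand-in for the stepwise method used in the paper).
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=3, direction="forward")
selector.fit(X_bc, y)
print("selected columns:", np.flatnonzero(selector.get_support()))
```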


A Study on Auxiliary Variable Selection in Unit Nonresponse Calibration (단위 무응답 보정에서 보조변수의 선택에 관한 연구)

  • 손창균; 홍기학; 이기성
    • The Korean Journal of Applied Statistics / v.16 no.1 / pp.33-44 / 2003
  • Auxiliary variables are typically used to calibrate for survey nonresponse in a census or sample survey. When the dimension of the auxiliary information is large, however, the calibration may require a great deal of computing time and the data set becomes difficult to handle. Moreover, because the variance estimator depends on the dimension of the auxiliary variables, it tends to underestimate the variance. To deal with these problems, we propose variable selection methods for the calibration estimation procedure under unit nonresponse and compare their efficiency in a simulation study.
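
For context, linear (GREG-type) calibration adjusts respondent design weights so that the weighted totals of the chosen auxiliary variables reproduce known population totals; the sketch below is a generic illustration under assumed data, not the paper's procedure, but it shows where the choice of auxiliary columns enters.

```python
# Minimal sketch of linear calibration weighting for unit nonresponse.
import numpy as np

def linear_calibration(d, X, totals):
    """d: respondent design weights (n,), X: auxiliaries (n, p),
    totals: known population totals of the auxiliaries (p,)."""
    lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - X.T @ d)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) + 5.0                 # auxiliary variables for respondents
d = np.full(200, 10.0)                              # design weights
totals = np.array([10_500.0, 10_200.0, 9_900.0])    # assumed known population totals
w = linear_calibration(d, X, totals)
print(np.allclose(X.T @ w, totals))                 # calibrated weights reproduce the totals
```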

Variable selection in censored kernel regression

  • Choi, Kook-Lyeol; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.24 no.1 / pp.201-209 / 2013
  • In censored regression it is often the case that some input variables are unimportant while others matter much more. We propose a novel algorithm for selecting the important input variables in censored kernel regression, based on penalized regression with a weighted quadratic loss function for the censored data, where the weights are computed from the empirical survival function of the censoring variable. We employ a weighted version of ANOVA decomposition kernels to choose an optimal subset of important input variables. Experimental results are presented which indicate the performance of the proposed variable selection method.
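
A rough sketch of the weighting idea, as I read the abstract: inverse-probability-of-censoring weights taken from the empirical (Kaplan-Meier) survival function of the censoring variable, plugged into a weighted kernel ridge regression. The ANOVA-decomposition kernels and the variable selection step are omitted here, and all data are synthetic.

```python
# Minimal sketch: IPCW weights from the censoring distribution + weighted kernel ridge.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def km_censoring_survival(t, delta):
    """Kaplan-Meier estimate of the censoring survival function G at each t.
    delta = 1 for an observed event, 0 for a censored observation."""
    order = np.argsort(t)
    n = len(t)
    G = np.ones(n)
    surv = 1.0
    at_risk = n
    for i in order:
        if delta[i] == 0:            # a censoring "event" for G
            surv *= 1.0 - 1.0 / at_risk
        G[i] = surv
        at_risk -= 1
    return G

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 4))
t_true = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
c = rng.uniform(0, 2, size=300)                     # censoring times
y = np.minimum(t_true, c)
delta = (t_true <= c).astype(int)

w = delta / np.clip(km_censoring_survival(y, delta), 1e-3, None)   # IPCW weights
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
model.fit(X, y, sample_weight=w)                    # weighted quadratic loss
```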

Variable Selection Criteria in Regression

  • Kim, Choong-Rak
    • Journal of the Korean Statistical Society / v.23 no.2 / pp.293-301 / 1994
  • In this paper we propose a variable selection criterion that minimizes an influence curve in regression, and compare it with other criteria such as $C_p$ (Mallows, 1973) and the adjusted coefficient of determination. Examples and an extension to generalized linear models are given.
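
Two of the classical criteria mentioned, Mallows' $C_p$ and the adjusted coefficient of determination, can be computed over all candidate subsets as in the minimal sketch below; the influence-curve criterion proposed in the paper itself is not reproduced here, and the design is synthetic.

```python
# Minimal sketch: all-subsets Cp and adjusted R^2 for a small linear model.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 4
X = rng.normal(size=(n, k))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)

def sse(cols):
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ beta
    return r @ r

sigma2 = sse(range(k)) / (n - k - 1)               # full-model error variance estimate
for m in range(1, k + 1):
    for cols in itertools.combinations(range(k), m):
        p = m + 1                                  # parameters incl. intercept
        cp = sse(cols) / sigma2 - n + 2 * p        # Mallows' Cp
        r2adj = 1 - (sse(cols) / (n - p)) / np.var(y, ddof=1)
        print(cols, round(cp, 2), round(r2adj, 3))
```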


Fast Decoder Algorithm Using Hybrid Beam Search and Variable Flooring for Large Vocabulary Speech Recognition (대용량 음성인식을 위한 하이브리드 빔 탐색 방법과 가변 플로링 기법을 이용한 고속 디코더 알고리듬 연구)

  • Kim, Yong-Min; Kim, Jin-Young; Kim, Dong-Hwa; Kwon, Oh-Il
    • Speech Sciences / v.8 no.4 / pp.17-33 / 2001
  • In this paper, we implement a large, variable-vocabulary speech recognition system characterized by no additional pre-training process and no limitation on the recognized word list. The system is designed to achieve a high recognition rate using a decision-tree-based state-tying algorithm, and to reduce processing time using a Gaussian-selection-based variable flooring algorithm, a node-count limitation algorithm, and the ENNS algorithm. The Gaussian-selection-based variable flooring algorithm can cut total processing time by more than half of the recognition time, but it also lowers the recognition rate; in other words, there is a trade-off between recognition rate and processing time. The node-count limitation algorithm performs best when the number of Gaussian mixtures is three. Off-line and on-line experiments show the same performance. In our experiments, the recognition rate and the average recognition time vary somewhat with gender, speaker, and vocabulary size.
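
The pruning side of such a decoder combines a score beam with a hard cap on the number of active nodes (histogram pruning). The toy sketch below illustrates only that generic combination; the thresholds and scores are invented, and the actual decoder described in the paper is far more involved.

```python
# Minimal sketch: score-beam pruning plus a node-count limit on decoder hypotheses.
import heapq

def prune(hypotheses, beam_width, max_nodes):
    """hypotheses: list of (log_score, state). Keep states within beam_width of the
    best score, then keep at most max_nodes of those."""
    best = max(s for s, _ in hypotheses)
    in_beam = [(s, st) for s, st in hypotheses if s >= best - beam_width]
    return heapq.nlargest(max_nodes, in_beam, key=lambda h: h[0])

active = [(-12.3, "A"), (-15.7, "B"), (-13.1, "C"), (-40.0, "D"), (-14.9, "E")]
print(prune(active, beam_width=5.0, max_nodes=3))   # D falls outside the beam
```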


Effect of Different Variable Selection and Estimation Methods on Performance of Fault Diagnosis (이상진단 성능에 미치는 변수선택과 추정방법의 영향)

  • Cho, Hyun-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.551-557 / 2019
  • Diagnosis of abnormal faults is essential for producing high-quality products. Real-time diagnosis plays an increasingly important role in batch processes that produce high value-added products such as semiconductors and pharmaceuticals. In this study, we evaluate the effect of variable selection and future-value estimation techniques on the performance of a diagnosis system based on nonlinear classification of measurement data. Diagnostic performance can be improved by selecting only the variables that are important and contribute strongly to diagnosis, so the diagnostic performance of several variable selection techniques is compared and evaluated. In addition, the missing data of a new batch, called future observations, must be estimated because the full data of a new batch are not available until the end of the cycle; the use of different estimation techniques for this purpose is analyzed. A case study on a polyvinyl chloride batch process identified the optimal variable selection and estimation methods: improvements of up to 21.9% and 13.3% from variable selection, and up to 25.8% and 15.2% from the estimation methods.
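
A minimal, hypothetical sketch of the two ingredients compared in the study: keeping only high-contribution variables, and filling in the not-yet-measured part of a running batch before classifying it. The mutual-information filter, the mean fill-in, and the SVM classifier below are stand-in choices under synthetic data, not the methods evaluated in the paper.

```python
# Minimal sketch: variable selection for diagnosis + crude future-value estimation.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 30))            # unfolded batch trajectories (200 batches x 30 vars)
y = (X[:, 3] + X[:, 17] + 0.5 * rng.normal(size=200) > 0).astype(int)   # normal vs. faulty

# (i) keep only the variables that contribute most to discrimination
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
clf = SVC(kernel="rbf").fit(selector.transform(X), y)

# (ii) a new batch observed only up to variable 20: estimate the "future" part (mean fill)
x_new = rng.normal(size=30)
x_new[20:] = X.mean(axis=0)[20:]
print("predicted class:", clf.predict(selector.transform(x_new.reshape(1, -1)))[0])
```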

Multivariate Procedure for Variable Selection and Classification of High Dimensional Heterogeneous Data

  • Mehmood, Tahir; Rasheed, Zahid
    • Communications for Statistical Applications and Methods / v.22 no.6 / pp.575-587 / 2015
  • Developments in data collection techniques result in high-dimensional data sets, where discrimination is an important and commonly encountered problem that is crucial to resolve when the high-dimensional data are heterogeneous (non-common variance-covariance structure across classes). An example is classifying microbial habitat preferences based on codon/bi-codon usage: habitat preference is important for studying evolutionary genetic relationships and may help industry produce specific enzymes. Most classification procedures assume homogeneity (a common variance-covariance structure for all classes), which is not guaranteed in most high-dimensional data sets. We introduce regularized elimination in partial least squares coupled with QDA (rePLS-QDA) for parsimonious variable selection and classification of high-dimensional heterogeneous data sets, based on the recently introduced regularized elimination for variable selection in partial least squares (rePLS) and the heterogeneous classification procedure quadratic discriminant analysis (QDA). A comparison of the proposed and existing methods is conducted on simulated data; in addition, the proposed procedure is applied to classify microbial habitat preferences by codon/bi-codon usage. Five bacterial habitats (Aquatic, Host Associated, Multiple, Specialized and Terrestrial) are modeled. The classification accuracy for each habitat is satisfactory and ranges from 89.1% to 100% on test data. Interesting codon/bi-codon usages and the mutual interactions influential for each habitat preference are identified. The proposed method also produced results that concur with known biological characteristics, which will help researchers better understand the divergence of species.
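
A rough stand-in for the rePLS-QDA idea: rank variables by the magnitude of their PLS weights, keep a small subset, and fit quadratic discriminant analysis, which lets each class have its own covariance structure. The weight-based filter below is a simplification of regularized elimination, and all sizes and data are hypothetical.

```python
# Minimal sketch: PLS-weight-based variable filtering followed by QDA.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(5)
n, p = 150, 200                                    # high-dimensional: p > n
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

pls = PLSRegression(n_components=3).fit(X, y)
importance = np.abs(pls.x_weights_).sum(axis=1)    # crude per-variable importance
keep = np.argsort(importance)[-10:]                # keep the 10 strongest variables

qda = QuadraticDiscriminantAnalysis().fit(X[:, keep], y)
print("training accuracy:", qda.score(X[:, keep], y))
```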

Fast robust variable selection using VIF regression in large datasets (대형 데이터에서 VIF회귀를 이용한 신속 강건 변수선택법)

  • Seo, Han Son
    • The Korean Journal of Applied Statistics / v.31 no.4 / pp.463-473 / 2018
  • Variable selection algorithms for linear regression models fit to large data sets are considered. Many algorithms have been proposed with a focus on speed and robustness. Among them, variance inflation factor (VIF) regression is fast and accurate because it uses a streamwise regression approach, but it is susceptible to outliers because it estimates the model by least squares. A robust criterion using a weighted estimator has been proposed to make the algorithm robust, and a robust VIF regression has also been proposed for the same purpose. In this article, a fast and robust variable selection method is suggested: a VIF regression in which potential outliers are detected and removed. A simulation study and the analysis of a data set are conducted to compare the suggested method with other methods.
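
A simplified, hypothetical sketch of the combination the abstract suggests: drop likely outliers first, then make a single streamwise pass over the candidate variables, admitting a variable only when its test statistic, deflated by a VIF-style correction estimated on a subsample, clears a threshold. The outlier rule, the correction, and the threshold here are all simplifications, not the paper's algorithm.

```python
# Minimal sketch: outlier screening + one streamwise pass with a VIF-style correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p = 1000, 20
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] + 2 * X[:, 5] + rng.normal(size=n)
y[:10] += 30                                        # a few gross outliers

# robustness step: drop observations with large robust-standardized responses
med, mad = np.median(y), stats.median_abs_deviation(y)
keep_obs = np.abs(y - med) / mad < 6
Xc, yc = X[keep_obs], y[keep_obs]

selected = []
resid = yc - yc.mean()
sub = rng.choice(len(yc), size=200, replace=False)  # subsample for the VIF correction
for j in range(p):
    xj = Xc[:, j]
    t_raw = xj @ resid / (np.linalg.norm(xj) * resid.std())
    # crude VIF correction: deflate by the candidate's worst correlation with
    # already-selected variables, estimated on the subsample only
    vif = 1.0
    if selected:
        r2 = np.corrcoef(np.column_stack([xj[sub], Xc[sub][:, selected]]).T)[0, 1:] ** 2
        vif = 1.0 / max(1e-6, 1.0 - r2.max())
    if abs(t_raw) / np.sqrt(vif) > 3.0:             # crude significance threshold
        selected.append(j)
        beta, *_ = np.linalg.lstsq(Xc[:, selected], yc, rcond=None)
        resid = yc - Xc[:, selected] @ beta
print("selected variables:", selected)
```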

On variable bandwidth Kernel Regression Estimation (변수평활량을 이용한 커널회귀함수 추정)

  • Seog, Kyung-Ha; Chung, Sung-Suk; Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society / v.9 no.2 / pp.179-188 / 1998
  • Local polynomial regression is the most popular kernel-type regression estimator. As in kernel estimation, bandwidth selection is a crucial problem in local polynomial regression. When the regression curve has a complicated structure, a variable bandwidth is appropriate. In this paper, we propose a fully data-driven variable bandwidth selection method: the bandwidth is chosen by minimizing an estimated MSE, which is obtained from a pilot bandwidth study via the cross-validation method. A Monte Carlo simulation was conducted to show the superiority of the proposed bandwidth selection method.
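
The data-driven flavor of the idea can be illustrated roughly: choose a pilot bandwidth by leave-one-out cross-validation for a kernel regression estimator, then let the bandwidth vary locally (here, with the design density). This is only an illustrative variable-bandwidth scheme under assumed data, not the estimator proposed in the paper.

```python
# Minimal sketch: pilot bandwidth by leave-one-out CV, then a locally varying bandwidth.
import numpy as np

def nw(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y / w.sum()

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=200)

# pilot bandwidth by leave-one-out cross-validation
grid = np.linspace(0.01, 0.2, 20)
def loo_cv(h):
    return np.mean([(y[i] - nw(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
                    for i in range(len(x))])
h_pilot = grid[np.argmin([loo_cv(h) for h in grid])]

# variable bandwidth: widen the pilot bandwidth where the design is sparse
def local_h(x0):
    density = np.mean(np.exp(-0.5 * ((x - x0) / h_pilot) ** 2)) / (h_pilot * np.sqrt(2 * np.pi))
    return h_pilot / np.sqrt(max(density, 1e-3))

x_eval = np.linspace(0, 1, 5)
print([round(nw(x0, x, y, local_h(x0)), 3) for x0 in x_eval])
```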


Comparing Classification Accuracy of Ensemble and Clustering Algorithms Based on Taguchi Design (다구찌 디자인을 이용한 앙상블 및 군집분석 분류 성능 비교)

  • Shin, Hyung-Won; Sohn, So-Young
    • Journal of Korean Institute of Industrial Engineers / v.27 no.1 / pp.47-53 / 2001
  • In this paper, we compare the classification performance of ensemble and clustering algorithms (Data Bagging, Variable Selection Bagging, Parameter Combining, Clustering) with logistic regression, taking various characteristics of the input data into consideration. Four factors are used to simulate the logistic model: (1) correlation among input variables, (2) variance of the observations, (3) training data size, and (4) the input-output function. Because the relationship between the input and output function is unknown, we use a Taguchi design to improve the practicality of our study results by treating it as a noise factor. The experimental results indicate the following. When the level of the variance is medium, Bagging and Parameter Combining perform worse than Logistic Regression, Variable Selection Bagging and Clustering. However, the classification performances of Logistic Regression, Variable Selection Bagging, Bagging and Clustering are not significantly different when the variance of the input data is either small or large. When there is strong correlation among the input variables, Variable Selection Bagging outperforms both Logistic Regression and Parameter Combining. In general, the Parameter Combining algorithm turns out, to our disappointment, to be the worst.
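
Two of the compared procedures can be sketched generically: plain logistic regression versus variable-selection bagging, i.e. bagging in which each bootstrap model sees only a random subset of the input variables. The simulated data and subset sizes below are placeholders, not the Taguchi-designed scenarios of the paper.

```python
# Minimal sketch: logistic regression vs. bagging with random variable subsets.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(400, 10))
logit = X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

plain = LogisticRegression(max_iter=1000)
vs_bagging = BaggingClassifier(LogisticRegression(max_iter=1000),
                               n_estimators=50, max_features=5, bootstrap=True)

print("logistic regression:", cross_val_score(plain, X, y, cv=5).mean().round(3))
print("variable-selection bagging:", cross_val_score(vs_bagging, X, y, cv=5).mean().round(3))
```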
