Title/Summary/Keyword: subset selection


Ensemble variable selection using genetic algorithm

  • Seogyoung, Lee; Martin Seunghwan, Yang; Jongkyeong, Kang; Seung Jun, Shin
    • Communications for Statistical Applications and Methods / v.29 no.6 / pp.629-640 / 2022
  • Variable selection is one of the most crucial tasks in supervised learning, such as regression and classification. Best subset selection is straightforward and optimal but not practically applicable unless the number of predictors is small. In this article, we propose solving the best subset selection problem directly via the genetic algorithm (GA), a popular stochastic optimization algorithm based on the principle of Darwinian evolution. To further improve variable selection performance, we propose running the GA multiple times and synthesizing the results, which we call the ensemble GA (EGA). The EGA significantly improves variable selection performance. In addition, the proposed method is essentially best subset selection and hence applies to a variety of models with different selection criteria. We compare the proposed EGA to existing variable selection methods under various models, including linear regression, Poisson regression, and Cox regression for survival data. Both simulation and real data analysis demonstrate the promising performance of the proposed method.
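
The abstract describes GA-driven best subset selection plus an ensemble step. Below is a minimal sketch of that general idea, not the authors' EGA code: it assumes numpy only, uses the BIC of an ordinary least squares fit as the selection criterion, and aggregates runs by majority vote; all function names and hyperparameters are illustrative.

```python
# Sketch: GA over bitmasks minimizing BIC, then an ensemble majority vote.
import numpy as np

def bic(X, y, mask):
    """BIC of an OLS fit restricted to the variables flagged in `mask`."""
    if mask.sum() == 0:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    n = len(y)
    return n * np.log(rss / n) + mask.sum() * np.log(n)

def ga_subset(X, y, pop=50, gens=100, pmut=0.05, rng=None):
    """One GA run: evolve bitmasks by selection, crossover, and mutation."""
    rng = rng or np.random.default_rng()
    p = X.shape[1]
    popu = rng.integers(0, 2, size=(pop, p))
    for _ in range(gens):
        fit = np.array([bic(X, y, m) for m in popu])
        parents = popu[np.argsort(fit)[: pop // 2]]     # lower BIC is better
        cut = rng.integers(1, p, size=pop // 2)          # one-point crossover
        kids = np.array([np.concatenate([parents[i % len(parents), :c],
                                         parents[(i + 1) % len(parents), c:]])
                         for i, c in enumerate(cut)])
        kids ^= rng.random(kids.shape) < pmut            # bit-flip mutation
        popu = np.vstack([parents, kids])
    fit = np.array([bic(X, y, m) for m in popu])
    return popu[np.argmin(fit)]

def ensemble_ga(X, y, runs=10, vote=0.5):
    """Run the GA several times; keep variables selected in a majority of runs."""
    masks = np.array([ga_subset(X, y, rng=np.random.default_rng(s))
                      for s in range(runs)])
    return masks.mean(axis=0) >= vote
```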

Self-adaptive and Bidirectional Dynamic Subset Selection Algorithm for Digital Image Correlation

  • Zhang, Wenzhuo; Zhou, Rong; Zou, Yuanwen
    • Journal of Information Processing Systems / v.13 no.2 / pp.305-320 / 2017
  • The selection of subset size is of great importance to the accuracy of digital image correlation (DIC). In traditional DIC, a constant subset size is used across the entire image, which overlooks the differences among the local speckle patterns of the image; moreover, finding the optimal global subset size of a speckle image is laborious. In this paper, a self-adaptive and bidirectional dynamic subset selection (SBDSS) algorithm is proposed that lets subset sizes vary according to their local speckle patterns, ensuring that every subset size is suitable and optimal. The sum of subset intensity variation ($\eta$) is defined as the assessment criterion to quantify the subset information. Both the threshold and the initial guess of subset size in the SBDSS algorithm are self-adaptive to different images. To analyze the performance of the proposed algorithm, both numerical and laboratory experiments were performed. In the numerical experiments, images with different speckle distributions, deformations, and noise were processed by both traditional DIC and the proposed algorithm; the results demonstrate that the proposed algorithm achieves higher accuracy than traditional DIC. Laboratory experiments performed on a substrate also demonstrate that the proposed algorithm is effective in selecting an appropriate subset size for each point.
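
As a rough illustration of the adaptive-size idea, the sketch below grows a square subset around a point until an information measure exceeds a threshold. The paper's exact definition of $\eta$ is not reproduced here; this sketch assumes it to be the sum of absolute deviations of pixel intensities from the subset mean, and the growth rule and parameters are hypothetical.

```python
# Hypothetical adaptive subset-size rule in the spirit of SBDSS.
# Assumes the point (cy, cx) is far enough from the image border.
import numpy as np

def eta(img, cy, cx, half):
    """Sum of intensity variation over a (2*half+1)-pixel square subset."""
    patch = img[cy - half: cy + half + 1, cx - half: cx + half + 1]
    return np.abs(patch - patch.mean()).sum()

def adaptive_subset_size(img, cy, cx, threshold, half=5, max_half=30):
    """Enlarge the subset until it carries enough speckle information."""
    while half < max_half and eta(img, cy, cx, half) < threshold:
        half += 2
    return 2 * half + 1   # subset size in pixels
```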

Feature Subset Selection Algorithm Based on Entropy

  • 홍석미; 안종일; 정태충
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.87-94 / 2004
  • Feature subset selection is used as a preprocessing step for a learning algorithm. If the collected data contain irrelevant or redundant information, learning performance can be improved by removing them before building the learning model. Feature subset selection can also reduce the search space and the storage requirement. This paper proposes a new feature subset selection algorithm that uses an entropy-based heuristic function to evaluate the quality of candidate feature subsets. The ACS algorithm was used as the search method. By reducing the dimensionality of the features used for learning, the algorithm decreases both the size of the learning model and unnecessary computation time.
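
A plausible form of an entropy-based score for a candidate feature subset is the information gain of the class labels given the (discretized) subset, sketched below; the paper's exact heuristic function and its ACS search loop are not reproduced.

```python
# Entropy-based subset score: information gain of the labels given the
# joint value of the selected (discrete) features.
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def information_gain(X_subset, y):
    """H(y) minus the average entropy of y within each joint feature value."""
    keys = [tuple(row) for row in X_subset]      # joint value of the subset
    h_cond = 0.0
    for key in set(keys):
        idx = [i for i, k in enumerate(keys) if k == key]
        h_cond += len(idx) / len(y) * entropy([y[i] for i in idx])
    return entropy(y) - h_cond
```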

Variable Selection Based on Mutual Information

  • Huh, Moon-Y.; Choi, Byong-Su
    • Communications for Statistical Applications and Methods / v.16 no.1 / pp.143-155 / 2009
  • A best subset selection procedure based on the mutual information (MI) between a set of explanatory variables and a dependent class variable is suggested. The derivation of multivariate MI is based on normal mixtures, and several types of normal mixtures are proposed. A best subset selection algorithm is also proposed. Four real data sets are employed to demonstrate the efficiency of the proposals.
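
The paper estimates multivariate MI from normal mixtures; the sketch below substitutes a simple histogram estimate of marginal MI and ranks variables by it, which illustrates the criterion but not the authors' mixture-based derivation.

```python
# Rank variables by a histogram estimate of mutual information with the
# response. Marginal MI only; the paper's multivariate MI is not reproduced.
import numpy as np

def mutual_info(x, y, bins=10):
    """Histogram estimate of I(x; y) in nats for one variable."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

def rank_by_mi(X, y, k):
    """Indices of the k variables with the highest marginal MI."""
    scores = [mutual_info(X[:, j], y) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1][:k])
```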

Subset Selection in Poisson Models: A Normal Predictors Case

  • 박종선
    • The Korean Journal of Applied Statistics / v.11 no.2 / pp.247-255 / 1998
  • In this paper, a new subset selection problem in the Poisson model is considered under normal predictors. It turns out that the subset model has a larger variance than the Poisson model with random predictors, and this fact is used to derive a new subset selection method similar to Mallows' $C_p$.
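
For reference, the classical Mallows' criterion that the proposal parallels is

$$C_p = \frac{\mathrm{SSE}_p}{\hat\sigma^2} - n + 2p,$$

where $\mathrm{SSE}_p$ is the residual sum of squares of a submodel with $p$ coefficients and $\hat\sigma^2$ is the error variance estimated from the full model; the paper derives an analogous criterion that accounts for the larger variance noted above.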


A Robust Subset Selection Procedure for Location Parameter Based on Hodges-Lehmann Estimators

  • Lee, Kang Sup
    • Journal of Korean Society for Quality Management / v.19 no.1 / pp.51-64 / 1991
  • This paper deals with a robust subset selection procedure based on Hodges-Lehmann estimators of location parameters. An improved formula for the estimated standard error of the Hodges-Lehmann estimator is considered. The degrees of freedom of the studentized Hodges-Lehmann estimator are also investigated, and it is suggested to use 0.8n instead of n-1. The proposed procedure is compared with other subset selection procedures and is shown to have good efficiency for heavy-tailed distributions.
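
The building block of the procedure, the Hodges-Lehmann location estimator, is the median of all pairwise Walsh averages; a short self-contained sketch (the paper's selection procedure and standard-error formula are built on top of this estimate):

```python
# One-sample Hodges-Lehmann estimator: median of (x_i + x_j)/2 over i <= j.
import numpy as np

def hodges_lehmann(x):
    """Median of the Walsh averages of the sample x."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))        # all index pairs with i <= j
    walsh = (x[i] + x[j]) / 2.0
    return np.median(walsh)
```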


Feature Selection Method by Information Theory and Particle Swarm Optimization

  • Cho, Jae-Hoon; Lee, Dae-Jong; Song, Chang-Kyu; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.2 / pp.191-196 / 2009
  • In this paper, we propose a feature selection method using Binary Particle Swarm Optimization (BPSO) and mutual information. The method consists of a candidate selection stage, which selects a candidate feature subset by mutual information, and an optimal selection stage, which chooses the optimal feature subset from the candidates by BPSO. In the candidate stage, the mutual information of each feature is computed and a candidate feature subset is selected by ranking the features by mutual information. In the optimal stage, the optimal feature subset is found by running BPSO over the candidate feature subset. In the BPSO process, we use a multi-objective function that optimizes both the accuracy of the classifier and the size of the selected feature subset. DNA expression datasets are used to estimate the performance of the proposed method. Experimental results show that this method achieves better performance on pattern recognition problems than conventional methods.
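
Below is a compact, hypothetical sketch of the BPSO stage with a weighted-sum multi-objective fitness. `evaluate_accuracy` is a placeholder the reader must supply (e.g. cross-validated classifier accuracy on the masked features), and the inertia, acceleration, and weight constants are illustrative, not the paper's settings.

```python
# Binary PSO with a sigmoid transfer function and a fitness that trades off
# classifier accuracy against subset size.
import numpy as np

def bpso(n_feats, evaluate_accuracy, w=0.9, particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (particles, n_feats))      # binary positions
    vel = rng.normal(0, 1, (particles, n_feats))        # real-valued velocities

    def fitness(m):
        # Weighted sum: accuracy vs. fraction of features discarded.
        return w * evaluate_accuracy(m) + (1 - w) * (1 - m.sum() / n_feats)

    pbest = pos.copy()
    pbest_fit = np.array([fitness(m) for m in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, n_feats))
        vel = 0.7 * vel + 1.4 * r1 * (pbest - pos) + 1.4 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))               # sigmoid transfer
        pos = (rng.random(pos.shape) < prob).astype(int)
        fit = np.array([fitness(m) for m in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest
```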

ModifiedFAST: A New Optimal Feature Subset Selection Algorithm

  • Nagpal, Arpita; Gaur, Deepti
    • Journal of information and communication convergence engineering / v.13 no.2 / pp.113-122 / 2015
  • Feature subset selection is a pre-processing step in learning algorithms. In this paper, we propose an efficient algorithm, ModifiedFAST, for feature subset selection. The algorithm is suitable for text datasets and uses the concept of information gain to remove irrelevant and redundant features. A new optimal value of the threshold for symmetric uncertainty, used to identify relevant features, is found; the thresholds used by previous feature selection algorithms such as FAST, Relief, and CFS were not optimal. It is shown that the threshold value greatly affects the percentage of selected features and the classification accuracy. A new unified performance metric that combines accuracy and the number of selected features is proposed and applied in the algorithm. Experiments show that the percentage of features selected by the proposed algorithm is lower than that of existing algorithms on most of the datasets, and the effectiveness of the algorithm at the optimal threshold is statistically validated against the other algorithms.
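
The relevance measure that ModifiedFAST thresholds, symmetric uncertainty, is a standard normalization of mutual information, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), which lies in [0, 1]; a self-contained sketch for discrete data:

```python
# Symmetric uncertainty between two discrete sequences of equal length.
import numpy as np
from collections import Counter

def H(v):
    """Shannon entropy of a discrete sequence, in bits."""
    n = len(v)
    return -sum((c / n) * np.log2(c / n) for c in Counter(v).values())

def symmetric_uncertainty(x, y):
    """SU(x, y) = 2*I(x;y) / (H(x) + H(y))."""
    joint = list(zip(x, y))
    mi = H(x) + H(y) - H(joint)           # I(X;Y) = H(X) + H(Y) - H(X,Y)
    denom = H(x) + H(y)
    return 2 * mi / denom if denom > 0 else 0.0
```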

A Bayesian Variable Selection Method for Binary Response Probit Regression

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.28 no.2 / pp.167-182 / 1999
  • This article is concerned with the selection of subsets of predictor variables to be included in building a binary response probit regression model. It is based on a Bayesian approach, intended to propose and develop a procedure that uses probabilistic considerations for selecting promising subsets. The procedure reformulates the probit regression setup as a hierarchical normal mixture model by introducing a set of hyperparameters that identify subset choices. The posterior probability of each subset of predictor variables is obtained through the Gibbs sampler, which samples indirectly from the multinomial posterior distribution on the set of possible subset choices. Thus, the most promising subset of predictors can be identified as the one with the highest posterior probability. To highlight the merit of this procedure, a couple of illustrative numerical examples are given.
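
The hierarchical normal mixture the abstract refers to is commonly written in the spike-and-slab form of George and McCulloch; a sketch of that formulation, which may differ in detail from the paper's, is

$$\beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,N(0,\tau_j^2) \;+\; \gamma_j\,N(0, c_j^2\tau_j^2), \qquad \gamma_j \in \{0,1\},$$

combined with probit data augmentation $z_i \mid \beta \sim N(x_i^\top\beta,\,1)$, $y_i = \mathbf{1}\{z_i > 0\}$; the Gibbs sampler then alternates draws of $z$, $\beta$, and $\gamma$, and subsets are ranked by the estimated posterior probability of $\gamma$.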


Subset selection in multiple linear regression: An improved Tabu search

  • Bae, Jaegug; Kim, Jung-Tae; Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology / v.40 no.2 / pp.138-145 / 2016
  • This paper proposes an improved tabu search method for subset selection in multiple linear regression models. Variable selection is a vital combinatorial optimization problem in multivariate statistics: selecting the optimal subset of variables is necessary to reliably construct a multiple linear regression model, with applications ranging from machine learning, time-series prediction, and multi-class classification to noise detection. Since the problem is NP-complete, finding the optimal solution becomes more difficult as the number of variables increases. Two typical metaheuristic methods have been developed to tackle it: the tabu search algorithm and a hybrid genetic and simulated annealing algorithm. However, both have shortcomings: the tabu search method requires a large amount of computing time, and the hybrid algorithm produces less accurate solutions. To overcome these shortcomings, we propose an improved tabu search algorithm that reduces the neighborhood moves and adopts an effective move search strategy. To evaluate the proposed method, comparative studies are performed on small data sets from the literature and on large simulation data sets. Computational results show that the proposed method outperforms the two metaheuristic methods in both computing time and solution quality.
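
A minimal baseline tabu search for subset selection looks like the sketch below; the paper's contribution, a reduced neighborhood and an improved move search strategy, is not reproduced, and `score` stands for any model criterion to minimize (e.g. the BIC of an OLS fit).

```python
# Baseline tabu search: the neighborhood is all single-bit flips, and a
# variable that was just flipped stays tabu for `tenure` iterations.
import numpy as np

def tabu_search(p, score, iters=200, tenure=7, seed=0):
    rng = np.random.default_rng(seed)
    cur = rng.integers(0, 2, p)                # random starting subset
    best, best_val = cur.copy(), score(cur)
    tabu_until = np.zeros(p, dtype=int)        # iteration until a flip is tabu
    for t in range(iters):
        cand_val, cand_j = np.inf, None
        for j in range(p):                     # scan single-flip neighborhood
            if tabu_until[j] > t:
                continue                       # skip tabu moves
            cur[j] ^= 1
            v = score(cur)
            cur[j] ^= 1
            if v < cand_val:
                cand_val, cand_j = v, j
        if cand_j is None:
            break                              # all moves tabu
        cur[cand_j] ^= 1                       # take the best admissible move
        tabu_until[cand_j] = t + tenure
        if cand_val < best_val:
            best, best_val = cur.copy(), cand_val
    return best, best_val
```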