• Title/Abstract/Keywords: selection of features

882 search results

Comparison of Feature Selection Processes for Image Retrieval Applications

  • Choi, Young-Mee;Choo, Moon-Won
    • Journal of Korea Multimedia Society
    • /
    • Vol. 14, No. 12
    • /
    • pp.1544-1548
    • /
    • 2011
  • A process of choosing a subset of the original features, so-called feature selection, is considered a crucial preprocessing step for image processing applications. Large pools of techniques have already been developed in the machine learning and data mining fields. In this paper, two approaches, non-feature selection and feature selection, are investigated to compare their predictive effectiveness for classification. Color co-occurrence features are used to define image features. The standard Sequential Forward Selection algorithm is used for feature selection to identify relevant features and the redundancy among them. Four color spaces, RGB, YCbCr, HSV, and Gaussian space, are considered for computing the color co-occurrence features. A gray-level image feature is also considered for performance comparison. The experimental results are presented.
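The abstract above names the standard Sequential Forward Selection (SFS) wrapper. As a minimal sketch only, not the authors' code, a greedy SFS over a precomputed feature matrix might look like the following, assuming scikit-learn and a k-NN classifier as the evaluator:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sequential_forward_selection(X, y, n_features, cv=5):
    """Greedily add the feature whose inclusion most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        # score every candidate subset obtained by adding one more feature
        scores = [(cross_val_score(KNeighborsClassifier(),
                                   X[:, selected + [j]], y, cv=cv).mean(), j)
                  for j in remaining]
        _, best_j = max(scores)          # keep the best-scoring addition
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```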

Biological Feature Selection and Disease Gene Identification using New Stepwise Random Forests

  • Hwang, Wook-Yeon
    • Industrial Engineering and Management Systems
    • /
    • Vol. 16, No. 1
    • /
    • pp.64-79
    • /
    • 2017
  • Identifying disease genes from the human genome is a critical task in biomedical research. Important biological features for distinguishing disease genes from non-disease genes have mainly been selected with traditional feature selection approaches. However, these approaches unnecessarily consider many unimportant biological features. As a result, although some existing classification techniques have been applied to disease gene identification, the prediction performance has not been satisfactory. A small set of the most important biological features can enhance the accuracy of disease gene identification and provide potentially useful knowledge for biologists and clinicians, who can further investigate the selected biological features as well as the potential disease genes. In this paper, we propose a new stepwise random forests (SRF) approach for biological feature selection and disease gene identification. The SRF approach consists of two stages. In the first stage, only important biological features are iteratively selected in a forward selection manner based on one-dimensional random forest regression, where the updated residual vector is treated as the current response vector. This determines a small set of important biological features. In the second stage, random forests classification on the selected biological features is applied to identify disease genes. Our extensive experiments show that the proposed SRF approach outperforms existing feature selection and classification techniques in terms of biological feature selection and disease gene identification.
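A hedged sketch of the two-stage SRF idea as the abstract describes it: residual-driven forward selection with one-dimensional random forest regression, followed by random forest classification on the selected features. Tree counts and the stopping rule are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def stepwise_rf_select(X, y, n_features, n_trees=100):
    """Stage 1: at each step, fit a 1-D random forest to the current
    residual and keep the feature that explains it best."""
    residual = y.astype(float).copy()
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_j, best_fit, best_err = None, None, np.inf
        for j in remaining:
            rf = RandomForestRegressor(n_estimators=n_trees)
            rf.fit(X[:, [j]], residual)
            err = np.mean((residual - rf.predict(X[:, [j]])) ** 2)
            if err < best_err:
                best_j, best_fit, best_err = j, rf, err
        residual = residual - best_fit.predict(X[:, [best_j]])  # update response
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Stage 2: classify disease genes using only the selected features, e.g.
# clf = RandomForestClassifier().fit(X[:, selected], y)
```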

Nonlinear Feature Transformation and Genetic Feature Selection: Improving System Security and Decreasing Computational Cost

  • Taghanaki, Saeid Asgari;Ansari, Mohammad Reza;Dehkordi, Behzad Zamani;Mousavi, Sayed Ali
    • ETRI Journal
    • /
    • Vol. 34, No. 6
    • /
    • pp.847-857
    • /
    • 2012
  • Intrusion detection systems (IDSs) have an important effect on system defense and security. Recently, most IDS methods have used transformed features, selected features, or original features. Both feature transformation and feature selection have their advantages. Neighborhood component analysis feature transformation and genetic feature selection (NCAGAFS) is proposed in this research. NCAGAFS is based on soft computing and data mining and uses the advantages of both transformation and selection. This method transforms features via neighborhood component analysis and then chooses the best features with a classifier-based genetic feature selection method. The approach is verified on the KDD Cup 99 dataset, where it demonstrates higher performance than other well-known methods under various classifiers.
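One plausible reading of the NCAGAFS pipeline: transform with neighborhood component analysis, then evolve binary feature masks whose fitness is classifier accuracy. The GA operators and parameters below are assumptions, not the paper's:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis

rng = np.random.default_rng(0)

def fitness(mask, X, y, cv=3):
    """Fitness of a binary mask: CV accuracy of k-NN on the kept columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=cv).mean()

def ga_select(X, y, pop_size=20, generations=30, p_mut=0.05):
    """Genetic feature selection over binary feature masks."""
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5                    # random initial masks
    for _ in range(generations):
        scores = np.array([fitness(m, X, y) for m in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut                   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[int(np.argmax([fitness(m, X, y) for m in pop]))]

# Transform first, then select:
# X_nca = NeighborhoodComponentsAnalysis().fit_transform(X, y)
# mask = ga_select(X_nca, y)   # final classifier trains on X_nca[:, mask]
```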

ModifiedFAST: A New Optimal Feature Subset Selection Algorithm

  • Nagpal, Arpita;Gaur, Deepti
    • Journal of information and communication convergence engineering
    • /
    • Vol. 13, No. 2
    • /
    • pp.113-122
    • /
    • 2015
  • Feature subset selection is a pre-processing step in learning algorithms. In this paper, we propose an efficient algorithm, ModifiedFAST, for feature subset selection. This algorithm is suitable for text datasets and uses the concept of information gain to remove irrelevant and redundant features. A new optimal value of the threshold for symmetric uncertainty, used to identify relevant features, is found. The thresholds used by previous feature selection algorithms such as FAST, Relief, and CFS were not optimal. It has been shown that the threshold value greatly affects the percentage of selected features and the classification accuracy. A new unified performance metric that combines accuracy and the number of selected features is proposed and applied in the algorithm. Experiments show that the percentage of features selected by the proposed algorithm was lower than that of existing algorithms on most of the datasets. The effectiveness of our algorithm at the optimal threshold was statistically validated against other algorithms.
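A hedged sketch of the symmetric-uncertainty relevance filter the abstract refers to, assuming discretized non-negative integer features; the optimal threshold value is exactly what the paper sets out to find:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def symmetric_uncertainty(x, y):
    """SU(x, y) = 2 * I(x; y) / (H(x) + H(y)) for discrete vectors (in nats)."""
    mi = mutual_info_score(x, y)
    hx, hy = entropy(np.bincount(x)), entropy(np.bincount(y))
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

def filter_relevant(X, y, threshold):
    """Keep the features whose SU with the class label exceeds the threshold."""
    return [j for j in range(X.shape[1])
            if symmetric_uncertainty(X[:, j], y) > threshold]
```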

Development of Interactive Feature Selection Algorithm (IFS) for Emotion Recognition

  • Yang, Hyun-Chang;Kim, Ho-Duck;Park, Chang-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 6, No. 4
    • /
    • pp.282-287
    • /
    • 2006
  • This paper presents an original feature selection method for emotion recognition. Feature selection has merits regarding pattern recognition performance. We developed a method called 'Interactive Feature Selection' (IFS), and the selected features were applied to an emotion recognition system (ERS), which was also implemented in this research. The feature selection method is based on a reinforcement learning algorithm and, since it requires responses from human users, it is called 'Interactive Feature Selection'. By performing IFS, we obtained the top three features and applied them to the ERS. Comparing these results with random selection, Sequential Forward Selection (SFS), and Genetic Algorithm Feature Selection (GAFS), we verified that the top three features outperformed the randomly selected feature set.
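The abstract does not give the exact reinforcement learning formulation, so the following is only a bandit-style sketch of the interactive loop: a feature is proposed, the human response is treated as the reward, and that feature's value estimate is updated. The `ask_user` callback is hypothetical:

```python
import numpy as np

def interactive_feature_selection(n_features, ask_user, episodes=200,
                                  epsilon=0.1, alpha=0.2, top_k=3):
    """Epsilon-greedy bandit over features with human-in-the-loop rewards."""
    rng = np.random.default_rng(0)
    q = np.zeros(n_features)                   # value estimate per feature
    for _ in range(episodes):
        if rng.random() < epsilon:
            j = int(rng.integers(n_features))  # explore a random feature
        else:
            j = int(np.argmax(q))              # exploit the best-valued feature
        reward = ask_user(j)                   # human feedback in [0, 1]
        q[j] += alpha * (reward - q[j])        # incremental value update
    return np.argsort(q)[::-1][:top_k]         # indices of the top-k features
```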

Diagnosis of Alzheimer's Disease using Wrapper Feature Selection Method

  • Ramineni, Vaishnavi;Kwon, Goo-Rak
    • Smart Media Journal
    • /
    • Vol. 12, No. 3
    • /
    • pp.30-37
    • /
    • 2023
  • Alzheimer's disease (AD) cannot yet be cured; early diagnosis can only slow its symptoms, and research is still ongoing. Accordingly, several machine learning classification models using T1-weighted images have been proposed to identify AD. In this paper, we consider improved feature selection to reduce complexity, using wrapper techniques and a Restricted Boltzmann Machine (RBM). This work used subcortical and cortical features from the sMRI scans of 278 subjects in the ADNI dataset to identify AD. Multi-class classification was used for the experiment, i.e., AD, EMCI, LMCI, and HC. The proposed feature selection consists of forward feature selection, backward feature selection, and combined PCA & RBM. Forward and backward feature selection are iterative methods, starting with no features in forward selection and with all features included in backward selection. PCA is used to reduce the dimensions, and the RBM is used to select the best features without interpreting them. We compared the three models with PCA in the analysis. The experiments show that combined PCA & RBM and backward feature selection give the best accuracy with the random forest (RF) classifier, i.e., 88.65% and 88.56%, respectively.
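A hedged sketch of the three compared pipelines using scikit-learn; the numbers of selected features and components are placeholders, not the paper's settings:

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rf = RandomForestClassifier(random_state=0)

# Wrapper selection: forward starts with no features, backward with all of them.
forward = SequentialFeatureSelector(rf, n_features_to_select=20, direction="forward")
backward = SequentialFeatureSelector(rf, n_features_to_select=20, direction="backward")

# Combined PCA & RBM: reduce dimensions, then learn a latent representation.
# BernoulliRBM expects inputs in [0, 1], hence the scaler in between.
pca_rbm = make_pipeline(PCA(n_components=30), MinMaxScaler(),
                        BernoulliRBM(n_components=20, random_state=0), rf)

# e.g. forward.fit(X_train, y_train); X_sel = forward.transform(X_train)
#      pca_rbm.fit(X_train, y_train); acc = pca_rbm.score(X_test, y_test)
```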

New Feature Selection Method for Text Categorization

  • Wang, Xingfeng;Kim, Hee-Cheol
    • Journal of information and communication convergence engineering
    • /
    • Vol. 15, No. 1
    • /
    • pp.53-61
    • /
    • 2017
  • The preferred feature selection methods for text classification are filter-based. In a common filter-based feature selection scheme, unique scores are assigned to features; then, these features are sorted according to their scores. The last step is to add the top-N features to the feature set. In this paper, we propose an improved global feature selection scheme in which the last step is modified to obtain a more representative feature set. The proposed method aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used to label features according to their discriminative power on classes; these labels are used while producing the feature sets. Experimental results obtained using the well-known 20 Newsgroups and Reuters-21578 datasets with the k-nearest neighbor algorithm and a support vector machine indicate that the proposed method improves the classification performance in terms of the widely known $F_1$ metric.
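One plausible implementation of the described scheme, with one-vs-rest chi-squared assumed as the local method that labels each feature with the class it discriminates best, followed by round-robin selection across those labels:

```python
import numpy as np
from sklearn.feature_selection import chi2

def class_balanced_selection(X, y, n_select):
    """Pick features so that every class is represented almost equally.
    X holds non-negative term counts/frequencies, as in text classification."""
    classes = np.unique(y)
    # one-vs-rest chi-squared score of every feature for every class
    scores = np.vstack([chi2(X, (y == c).astype(int))[0] for c in classes])
    owner = np.argmax(scores, axis=0)              # class label per feature
    ranked = {c: [] for c in range(len(classes))}
    for j in np.argsort(-scores.max(axis=0)):      # features by best score
        ranked[owner[j]].append(j)
    selected, i = [], 0
    while len(selected) < n_select:                # round-robin over classes
        pool = ranked[i % len(classes)]
        if pool:
            selected.append(pool.pop(0))
        if all(not v for v in ranked.values()):
            break
        i += 1
    return selected
```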

Classification of Epilepsy Using Distance-Based Feature Selection

  • Lee, Sang-Hong
    • Journal of Digital Convergence
    • /
    • Vol. 12, No. 8
    • /
    • pp.321-327
    • /
    • 2014
  • Feature selection is a technique that improves classification performance by removing redundant or mutually irrelevant features. In this paper, we propose a new feature selection method that uses the distance between the centers of gravity of the Bounded Sums of Weighted Fuzzy Membership functions (BSWFM) provided by a Neural Network with Weighted Fuzzy Membership Functions (NEWFM), thereby improving classification performance. Using this distance-based feature selection, the feature with the shortest distance between centers of gravity was removed one at a time from the initial 24 features, and the minimal set of 22 features with the highest classification performance was selected. Using these 22 features as inputs to the NEWFM, a sensitivity, specificity, and accuracy of 97.7%, 99.7%, and 98.7%, respectively, were obtained.
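NEWFM's BSWFM centroids are specific to the paper, so the sketch below substitutes class-conditional feature means for the fuzzy membership centroids; it only illustrates the backward, distance-driven elimination loop for a two-class problem:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def distance_based_elimination(X, y, min_features=1, cv=5):
    """Repeatedly drop the feature whose two class centroids are closest
    (least separating) and keep the subset with the best CV accuracy."""
    c0, c1 = np.unique(y)                          # assumes two classes
    keep = list(range(X.shape[1]))
    best_score = cross_val_score(KNeighborsClassifier(), X, y, cv=cv).mean()
    best_keep = list(keep)
    while len(keep) > min_features:
        mu0, mu1 = X[y == c0].mean(axis=0), X[y == c1].mean(axis=0)
        dist = np.abs(mu0[keep] - mu1[keep])       # per-feature centroid distance
        keep.pop(int(np.argmin(dist)))             # drop the least separating one
        score = cross_val_score(KNeighborsClassifier(),
                                X[:, keep], y, cv=cv).mean()
        if score > best_score:
            best_keep, best_score = list(keep), score
    return best_keep
```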

On the Data Features for Neighbor Path Selection in Computer Network with Regional Failure

  • Yong-Jin Lee
    • International journal of advanced smart convergence
    • /
    • Vol. 12, No. 3
    • /
    • pp.13-18
    • /
    • 2023
  • This paper investigates data features for neighbor path selection (NPS) in a computer network with regional failures. It is necessary to find an available alternate communication path in advance for cases where regional failures, caused by earthquakes or forest fires, occur. We describe previous general heuristics and a simulation heuristic for solving the NPS problem in a network with regional faults. The data features of the general heuristics, which use proximity and a sharing factor, and the data features of the simulation heuristic, which uses machine learning, are explained through examples. The simulation heuristic may outperform the general heuristics in terms of communication success, but additional data features are necessary to apply it in a real environment. We propose novel data features for NPS in a computer network with regional failures, along with a Keras model for computing the communication success probability of a candidate neighbor path.
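A minimal Keras sketch of the kind of success-probability model the abstract mentions; the input features (e.g., proximity and sharing factor per candidate path) and the architecture are assumptions:

```python
from tensorflow import keras

n_features = 3   # hypothetical per-path features: proximity, sharing factor, hops

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # P(communication success)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# e.g. model.fit(X_train, y_train, epochs=50, batch_size=32)
#      p_success = model.predict(X_candidates)
```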

Analyzing Factors Contributing to Research Performance using Backpropagation Neural Network and Support Vector Machine

  • Ermatita, Ermatita;Sanmorino, Ahmad;Samsuryadi, Samsuryadi;Rini, Dian Palupi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 1
    • /
    • pp.153-172
    • /
    • 2022
  • In this study, the authors analyze factors contributing to research performance using a Backpropagation Neural Network and a Support Vector Machine. The analysis of factors contributing to lecturer research performance starts from defining the features. The next stage is to collect datasets based on the defined features and then transform the raw dataset into data ready to be processed. After the data is transformed, the next stage is feature selection. Before feature selection, the target feature is determined, namely research performance. Feature selection uses Chi-Square scores (U) and the Pearson correlation coefficient (CM), and it yields eight factors contributing to lecturer research performance: Scientific Papers (U: 154.38, CM: 0.79), Number of Citations (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). To analyze the factors, two data mining classifiers were involved: Backpropagation Neural Networks (BPNN) and a Support Vector Machine (SVM). The classifiers were evaluated with accuracy scores of 95 percent for the BPNN and 92 percent for the SVM. The essence of this analysis is not to find the highest accuracy score, but rather to determine whether the factors pass the test phase with the expected results. The findings reveal which factors have a significant impact on research performance and which do not.
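A hedged sketch of the two scoring functions (Chi-Square U and Pearson CM) and the two classifiers, with scikit-learn's MLPClassifier standing in for the backpropagation neural network:

```python
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def score_features(X, y):
    """Chi-squared score (U) and absolute Pearson correlation (CM) per feature."""
    Xs = MinMaxScaler().fit_transform(X)       # chi2 requires non-negative inputs
    u, _ = chi2(Xs, y)
    cm = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return u, cm

# Evaluate the selected factors with both classifiers, e.g.:
# bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)  # backprop NN
# svm = SVC(kernel="rbf")
# bpnn.fit(X_train[:, sel], y_train); svm.fit(X_train[:, sel], y_train)
```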