• Title/Summary/Keyword: Feature statistics


Multivariate Control Charts for Means and Variances with Variable Sampling Intervals

  • Kim, Jae-Joo;Cho, Gyo-Young;Chang, Duk-Joon
    • Journal of Korean Society for Quality Management, v.22 no.1, pp.66-81, 1994
  • Several sample statistics are proposed for simultaneously monitoring both the means and the variances of multivariate quality characteristics under a multivariate normal process. The performances of multivariate Shewhart schemes and cumulative sum (CUSUM) schemes are evaluated with matched fixed sampling interval (FSI) and variable sampling interval (VSI) features. Numerical results show that multivariate CUSUM charts are more efficient than Shewhart charts for small or moderate shifts, and that the VSI feature is more efficient than the FSI feature.
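
As a rough illustration of the type of scheme described above, the sketch below monitors a multivariate normal process with a Shewhart-type Hotelling T² statistic and switches between a long and a short sampling interval depending on whether the statistic falls in a warning region. The limits, interval lengths, in-control parameters, and simulated shift are illustrative assumptions, not values from the paper, and the CUSUM variant is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

p, n = 3, 5                      # number of quality characteristics, subgroup size
mu0 = np.zeros(p)                # assumed in-control mean vector
sigma0 = np.eye(p)               # assumed in-control covariance matrix
sigma0_inv = np.linalg.inv(sigma0)

# Illustrative limits: control limit from the chi-square distribution of the
# statistic under control; the warning limit is set lower so that subgroups
# near the limit trigger the short interval.
control_limit = stats.chi2.ppf(0.995, df=p)
warning_limit = stats.chi2.ppf(0.80, df=p)
long_interval, short_interval = 2.0, 0.25   # hypothetical VSI lengths (hours)

def t2_statistic(sample):
    """Hotelling-type statistic n * (xbar - mu0)' Sigma0^{-1} (xbar - mu0)."""
    xbar = sample.mean(axis=0)
    d = xbar - mu0
    return n * d @ sigma0_inv @ d

time = 0.0
for i in range(50):
    # simulate a small sustained mean shift after the 25th subgroup
    shift = np.array([0.5, 0.0, 0.0]) if i >= 25 else np.zeros(p)
    sample = rng.multivariate_normal(mu0 + shift, sigma0, size=n)
    t2 = t2_statistic(sample)
    if t2 > control_limit:
        print(f"t={time:6.2f}  subgroup {i:2d}  T2={t2:6.2f}  -> signal")
        break
    # VSI rule: sample again quickly when the statistic is in the warning region
    interval = short_interval if t2 > warning_limit else long_interval
    time += interval
```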


Optimal k-Nearest Neighborhood Classifier Using Genetic Algorithm (유전알고리즘을 이용한 최적 k-최근접이웃 분류기)

  • Park, Chong-Sun;Huh, Kyun
    • Communications for Statistical Applications and Methods, v.17 no.1, pp.17-27, 2010
  • Feature selection and feature weighting are useful techniques for improving the classification accuracy of the k-Nearest Neighbor (k-NN) classifier. The main purpose of feature selection and feature weighting is to reduce the number of features by eliminating irrelevant and redundant ones while maintaining or enhancing classification accuracy. In this paper, a novel hybrid approach based on a genetic algorithm is proposed for simultaneous feature selection, feature weighting, and choice of k in the k-NN classifier. The results indicate that the proposed algorithm is comparable to or superior to existing classifiers with or without feature selection and feature weighting capability.
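
The following sketch illustrates, in broad strokes, how a genetic algorithm can jointly encode feature weights (with near-zero weights acting as feature selection) and the choice of k for a k-NN classifier. The dataset, weight threshold, GA operators, population size, and number of generations are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
n_features = X.shape[1]
k_choices = np.array([1, 3, 5, 7, 9, 11])

# Chromosome: one weight in [0, 1] per feature plus an index into k_choices.
# A weight below the threshold effectively removes the feature (selection);
# the remaining weights rescale the features before the distance computation.

def fitness(chrom, threshold=0.1):
    weights, k_idx = chrom[:-1], int(chrom[-1])
    w = np.where(weights < threshold, 0.0, weights)
    if w.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=int(k_choices[k_idx]))
    return cross_val_score(knn, X * w, y, cv=5).mean()

def random_chromosome():
    return np.concatenate([rng.random(n_features), [rng.integers(len(k_choices))]])

def crossover(a, b):
    mask = rng.random(a.shape) < 0.5          # uniform crossover
    return np.where(mask, a, b)

def mutate(chrom, rate=0.1):
    chrom = chrom.copy()
    for j in range(n_features):
        if rng.random() < rate:
            chrom[j] = rng.random()
    if rng.random() < rate:
        chrom[-1] = rng.integers(len(k_choices))
    return chrom

pop = [random_chromosome() for _ in range(20)]
for gen in range(15):
    scores = np.array([fitness(c) for c in pop])
    order = np.argsort(scores)[::-1]
    elites = [pop[i] for i in order[:5]]       # keep the best chromosomes
    children = [mutate(crossover(elites[rng.integers(5)], elites[rng.integers(5)]))
                for _ in range(len(pop) - len(elites))]
    pop = elites + children

best = max(pop, key=fitness)
print("selected features:", np.where(best[:-1] >= 0.1)[0])
print("k =", k_choices[int(best[-1])], " CV accuracy =", round(fitness(best), 3))
```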

Comparison of Feature Selection Methods in Support Vector Machines (지지벡터기계의 변수 선택방법 비교)

  • Kim, Kwangsu;Park, Changyi
    • The Korean Journal of Applied Statistics, v.26 no.1, pp.131-139, 2013
  • Support vector machines (SVM) may perform poorly in the presence of noise variables; in addition, it is difficult to identify the importance of each variable in the resulting classifier. Feature selection can improve both the interpretability and the accuracy of SVM. Most existing studies concern feature selection in the linear SVM through penalty functions that yield sparse solutions. In practice, however, nonlinear kernels are usually adopted for classification accuracy, so feature selection is still desirable for nonlinear SVMs. In this paper, we compare the performance of nonlinear feature selection methods such as component selection and smoothing operator (COSSO) and kernel iterative feature extraction (KNIFE) on simulated and real data sets.
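
COSSO and KNIFE themselves are not sketched here; the snippet below only illustrates the linear-SVM baseline the abstract refers to, in which an L1 penalty yields a sparse coefficient vector and the surviving nonzero coefficients indicate the selected features. The dataset and penalty value are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# L1-penalized linear SVM: the penalty drives many coefficients exactly to zero,
# so the nonzero entries of coef_ act as the selected features.
svm = LinearSVC(penalty="l1", dual=False, C=0.05, max_iter=10000).fit(X, y)

coef = svm.coef_.ravel()
selected = np.flatnonzero(coef)
print(f"{len(selected)} of {X.shape[1]} features selected:", selected)
```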

Gene Selection Based on Support Vector Machine using Bootstrap (붓스트랩 방법을 활용한 SVM 기반 유전자 선택 기법)

  • Song, Seuck-Heun;Kim, Kyoung-Hee;Park, Chang-Yi;Koo, Ja-Yong
    • The Korean Journal of Applied Statistics, v.20 no.3, pp.531-540, 2007
  • Recursive feature elimination for support vector machines is known to be useful in selecting relevant genes. Since the criterion for choosing relevant genes is the absolute value of a coefficient, recursive feature elimination may suffer from a scaling problem. We propose a modified version of the recursive feature elimination algorithm using the bootstrap. In our method, the criterion for determining relevant genes is the absolute value of a coefficient divided by its standard error, which accounts for the statistical variability of the coefficient. Through numerical examples, we illustrate that our method is effective in gene selection.
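
A minimal sketch of the general idea, not the authors' code: fit a linear SVM, estimate the standard error of each coefficient by bootstrap resampling, rank features by the absolute coefficient divided by its bootstrap standard error, and recursively drop the lowest-ranked feature. The toy data, number of bootstrap replicates, and stopping size are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=80, n_features=30, n_informative=5,
                           random_state=0)
active = list(range(X.shape[1]))          # indices of genes still in the model

def bootstrap_ratio(X, y, n_boot=30):
    """|coefficient| divided by its bootstrap standard error, per feature."""
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:     # resample must contain both classes
            continue
        w = SVC(kernel="linear", C=1.0).fit(X[idx], y[idx]).coef_.ravel()
        coefs.append(w)
    coefs = np.array(coefs)
    w_full = SVC(kernel="linear", C=1.0).fit(X, y).coef_.ravel()
    se = coefs.std(axis=0, ddof=1)
    return np.abs(w_full) / np.where(se > 0, se, np.inf)

# Recursive elimination: drop the feature with the smallest |w| / SE each round.
while len(active) > 5:
    ratios = bootstrap_ratio(X[:, active], y)
    drop = int(np.argmin(ratios))
    del active[drop]

print("surviving features:", active)
```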

A Discrete Feature Vector for Endpoint Detection of Speech with Hidden Markov Model (숨은마코프모형을 이용하는 음성 끝점 검출을 위한 이산 특징벡터)

  • Lee, Jei-Ky;Oh, Chang-Hyuck
    • The Korean Journal of Applied Statistics, v.21 no.6, pp.959-967, 2008
  • The purpose of this paper is to suggest a discrete feature vector for the detection of speech segments that is robust across various levels of noise and computationally inexpensive, and to demonstrate these properties with real speech data. The suggested feature is a one-dimensional vector that represents the slope of short-term energies and is discretized into three values to reduce the computational burden of the HMM. In experiments with speech data, the method with the suggested feature vector showed good performance even in noisy environments.
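
A minimal sketch of the kind of feature described above: short-term log-energies are computed per frame, their slope is approximated here by a first difference, and the slope is discretized into three symbols. The frame length, hop size, and threshold are assumptions, and fitting the discrete-observation HMM is omitted.

```python
import numpy as np

def discrete_energy_slope(signal, frame_len=200, hop=100, eps=0.02):
    """Three-valued feature from the slope of short-term log-energies.

    Frame length, hop, and the slope threshold eps are illustrative choices.
    The resulting symbols (0: falling, 1: flat, 2: rising) could be fed to a
    discrete-observation HMM for endpoint detection.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    energies = np.array([
        np.log(np.sum(signal[i * hop:i * hop + frame_len] ** 2) + 1e-12)
        for i in range(n_frames)
    ])
    slope = np.diff(energies, prepend=energies[0])   # first difference as slope
    symbols = np.full(slope.shape, 1)                # flat
    symbols[slope > eps] = 2                         # rising
    symbols[slope < -eps] = 0                        # falling
    return symbols

# Toy example: silence, then a burst of "speech"-like noise, then silence.
rng = np.random.default_rng(0)
x = np.concatenate([0.01 * rng.standard_normal(4000),
                    0.5 * rng.standard_normal(6000),
                    0.01 * rng.standard_normal(4000)])
print(discrete_energy_slope(x))
```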

A Study on the Robust Content-Based Musical Genre Classification System Using Multi-Feature Clustering (Multi-Feature Clustering을 이용한 강인한 내용 기반 음악 장르 분류 시스템에 관한 연구)

  • Yoon Won-Jung;Lee Kang-Kyu;Park Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.3 s.303, pp.115-120, 2005
  • In this paper, we propose a new robust content-based musical genre classification algorithm using a multi-feature clustering (MFC) method. In contrast to previous works, this paper focuses on two practical issues: the system's dependency on different input query patterns (or portions) and on input query lengths, which causes serious uncertainty in system performance. To solve these problems, a new approach called multi-feature clustering (MFC), based on k-means clustering, is proposed. To verify the performance of the proposed method, several excerpts of variable duration were extracted from every other position in a queried music file. The effectiveness of the system with and without MFC is compared in terms of classification accuracy. It is demonstrated that the use of MFC significantly improves the stability of musical genre classification with a higher accuracy rate.
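
The sketch below illustrates the multi-feature clustering idea under stated assumptions: features are extracted from several excerpts taken at different positions of a track and summarized by k-means cluster centers, so the resulting signature is less sensitive to which portion or length of the track a query uses. The feature extractor is a simple stand-in (frame energy and spectral centroid statistics), not the paper's feature set.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def excerpt_features(excerpt, frame_len=1024):
    """Stand-in feature vector: mean/std of frame energies and spectral centroids."""
    n = len(excerpt) // frame_len
    frames = excerpt[:n * frame_len].reshape(n, frame_len)
    energy = (frames ** 2).mean(axis=1)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.arange(spec.shape[1])
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    return np.array([energy.mean(), energy.std(), centroid.mean(), centroid.std()])

def mfc_signature(track, n_excerpts=10, excerpt_len=22050, n_clusters=3):
    """Multi-feature clustering: features from excerpts at several positions,
    summarized by k-means centroids to stabilize the track's signature."""
    starts = np.linspace(0, len(track) - excerpt_len, n_excerpts).astype(int)
    feats = np.array([excerpt_features(track[s:s + excerpt_len]) for s in starts])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    return km.cluster_centers_

track = rng.standard_normal(22050 * 30)       # 30 s of noise as a toy "track"
print(mfc_signature(track).shape)             # (n_clusters, n_features)
```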

Feature selection in the semivarying coefficient LS-SVR

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society, v.28 no.2, pp.461-471, 2017
  • In this paper we propose a feature selection method for identifying important features in the semivarying coefficient model. One important issue in the semivarying coefficient model is how to estimate the parametric and nonparametric components. Another is how to identify important features in the varying and the constant effects. We propose a feature selection method that addresses these issues using the generalized cross validation functions of the varying coefficient least squares support vector regression (LS-SVR) and the linear LS-SVR. Numerical studies indicate that the proposed method is quite effective in identifying important features in the varying and the constant effects in the semivarying coefficient model.
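
For context on the building block the abstract relies on, here is a minimal least squares SVR (LS-SVR) fit in its usual dual formulation with an RBF kernel. The semivarying coefficient structure and the GCV-based feature selection proposed in the paper are not reproduced; the kernel, regularization constant, and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    """Least squares SVR: solve the linear system
        [ 0    1^T      ] [b]   [0]
        [ 1    K + I/C  ] [a] = [y]
    for the bias b and dual coefficients a."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)   # only the first feature matters
predict = lssvr_fit(X, y)
print("train RMSE:", round(np.sqrt(np.mean((predict(X) - y) ** 2)), 3))
```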

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • Proceedings of the KAIS Fall Conference, 2003.11a, pp.223-230, 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, music scores, etc.). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, K-D-B tree, grid file, TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose an SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines the SOM and the R*-tree to achieve search performance that is more scalable to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space that preserves the topology of the feature vectors. The map, called a topological feature map, preserves the mutual relationships (similarity) of the input data in feature space, clustering mutually similar feature vectors in neighboring nodes. Each node of the topological feature map holds a codebook vector, and a best-matching-image list (BMIL) holds the similar images that are closest to each codebook vector. In a topological feature map there are empty nodes in which no image is classified; when building the R*-tree, we use the codebook vectors of the topological feature map while eliminating the empty nodes that cause unnecessary disk access and degrade retrieval performance. We experimentally compare the retrieval time cost of an SOM-based R*-tree with that of an SOM and an R*-tree, using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree due to the reduction in the number of nodes required to build the R*-tree and in retrieval time cost.
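
The sketch below covers only the SOM and BMIL part of the pipeline, under stated assumptions: a tiny self-organizing map is trained on stand-in feature vectors, each image is assigned to its best-matching node, and the codebook vectors of the non-empty nodes are the ones that would then be inserted into the R*-tree (the R*-tree itself is not implemented here). The map size and learning schedule are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=6, cols=6, n_iter=2000, lr0=0.5, sigma0=2.0):
    """Tiny self-organizing map: returns codebook vectors on a rows x cols grid."""
    dim = data.shape[1]
    codebook = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d = ((codebook - x) ** 2).sum(-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)     # best matching unit
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 0.5
        # Gaussian neighborhood on the 2-D grid preserves the map topology
        h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
        codebook += lr * h[..., None] * (x - codebook)
    return codebook.reshape(rows * cols, dim)

# Toy "color feature vectors" standing in for image color histograms.
features = rng.random((500, 8))
codebook = train_som(features)

# Best-matching-image list (BMIL): images assigned to their nearest codebook vector.
nearest = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
bmil = {node: np.where(nearest == node)[0] for node in range(len(codebook))}
nonempty = [node for node, imgs in bmil.items() if len(imgs) > 0]
print(f"{len(nonempty)} non-empty nodes of {len(codebook)};"
      " only their codebook vectors would be inserted into the R*-tree")
```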


Feature Extraction and Statistical Pattern Recognition for Image Data using Wavelet Decomposition

  • Kim, Min-Soo;Baek, Jang-Sun
    • Communications for Statistical Applications and Methods, v.6 no.3, pp.831-842, 1999
  • We propose a wavelet decomposition feature extraction method for hand-written character recognition. We study the characteristics of the proposed method by comparing the recognition rates obtained with the original image features and with the features selected by wavelet decomposition. LDA (Linear Discriminant Analysis), QDA (Quadratic Discriminant Analysis), RDA (Regularized Discriminant Analysis), and NN (Neural Network) are used to calculate the recognition rates. 6000 hand-written numerals from CENPARMI at Concordia University are used for the experiment. We found that the set of significantly selected wavelet-decomposition features yields a higher recognition rate than the original image features.
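
A hedged sketch of the general approach using PyWavelets and scikit-learn: wavelet-decomposition coefficients are extracted from small digit images (scikit-learn's 8x8 digits stand in for the CENPARMI numerals), a crude variance-based rule stands in for the paper's coefficient selection, and LDA is used as one of the classifiers mentioned. The wavelet, level, and number of kept coefficients are assumptions.

```python
import numpy as np
import pywt
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 images as a stand-in dataset
images, y = digits.images, digits.target

def wavelet_features(img, wavelet="haar", level=2):
    """Flatten all wavelet-decomposition coefficients of a 2-D image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

X = np.array([wavelet_features(img) for img in images])

# Crude feature selection: keep the coefficients with the largest variance
# across the images (the paper's selection rule is not reproduced here).
keep = np.argsort(X.var(axis=0))[::-1][:20]
X_sel = X[:, keep]

Xtr, Xte, ytr, yte = train_test_split(X_sel, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
print("LDA accuracy on selected wavelet features:", round(lda.score(Xte, yte), 3))
```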


The Target Detection and Classification Method Using SURF Feature Points and Image Displacement in Infrared Images (적외선 영상에서 변위추정 및 SURF 특징을 이용한 표적 탐지 분류 기법)

  • Kim, Jae-Hyup;Choi, Bong-Joon;Chun, Seung-Woo;Lee, Jong-Min;Moon, Young-Shik
    • Journal of the Korea Society of Computer and Information, v.19 no.11, pp.43-52, 2014
  • In this paper, we propose a target detection method using image displacement and a classification method using SURF (Speeded Up Robust Features) feature points and BAS (Beam Angle Statistics) in infrared images. The SURF method, a typical correspondence matching method in image processing, has been widely used because it is significantly faster than the SIFT (Scale Invariant Feature Transform) method while producing similar performance. Most SURF-based object recognition methods consist of feature point extraction and a matching process. The proposed method detects the target area using the displacement and performs target classification using the geometry of the SURF feature points. The method was applied to an unmanned target detection/recognition system. In experiments on virtual and real images, the classification performance is approximately 73~85%.
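
A rough sketch of displacement estimation from matched feature points, under stated assumptions: ORB is used as a freely available stand-in for SURF (which lives in opencv-contrib), the frame-to-frame shift is taken as the median displacement of matched keypoints, and the BAS-based shape classification of the paper is not reproduced.

```python
import cv2
import numpy as np

def estimate_displacement(prev_frame, curr_frame, max_matches=50):
    """Rough displacement between two grayscale frames from matched feature points."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    if not matches:
        return None
    shifts = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)      # robust estimate of the frame-to-frame shift

# Toy example: a bright square that moves 5 pixels to the right between frames.
frame1 = np.zeros((120, 160), dtype=np.uint8)
frame2 = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(frame1, (40, 40), (80, 80), 255, -1)
cv2.rectangle(frame2, (45, 40), (85, 80), 255, -1)
print("estimated displacement (dx, dy):", estimate_displacement(frame1, frame2))
```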