• Title/Abstract/Keyword: optimal number of clusters

Search results: 78

Determining the Optimal Number of Signal Clusters Using Iterative HMM Classification

  • Ernest, Duker Junior;Kim, Yoon Joong
    • International Journal of Advanced Smart Convergence / Vol. 7, No. 2 / pp. 33-37 / 2018
  • In this study, we propose an iterative clustering algorithm that automatically clusters a set of unlabeled voice signal data into an optimal number of clusters and generates an HMM model for each cluster. In the clustering process, the likelihood of each cluster is computed by iterative HMM training and testing while varying the number of clusters for the given data, and maximum likelihood estimation is used to determine the optimal number of clusters. We tested the effectiveness of this clustering algorithm on a small-vocabulary digit clustering task by mapping the unsupervised decoded output of the optimal clusters to the ground-truth transcription, and found that the two were highly correlated.
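
As a rough illustration of the loop the abstract describes, the sketch below trains one HMM per cluster with hmmlearn and scores each candidate cluster count by total log-likelihood; the k-means initial partition and the fixed state count are simplifications, not the paper's procedure.

```python
import numpy as np
from hmmlearn import hmm
from sklearn.cluster import KMeans

def total_log_likelihood(sequences, k, n_states=3):
    """Partition the sequences into k clusters, fit one HMM per cluster,
    and return the summed log-likelihood (higher = better fit)."""
    # Crude initial partition: k-means on per-sequence mean vectors.
    means = np.array([s.mean(axis=0) for s in sequences])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(means)
    total = 0.0
    for c in range(k):
        members = [s for s, l in zip(sequences, labels) if l == c]
        if not members:
            continue
        X = np.vstack(members)
        lengths = [len(s) for s in members]
        model = hmm.GaussianHMM(n_components=n_states, n_iter=20)
        model.fit(X, lengths)
        total += model.score(X, lengths)
    return total

# Pick the k with the maximum likelihood; in practice a penalty such as
# BIC keeps k from growing without bound.
# best_k = max(range(2, 11), key=lambda k: total_log_likelihood(seqs, k))
```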

실루엣을 적용한 그룹탐색 최적화 데이터클러스터링 (Group Search Optimization Data Clustering Using Silhouette)

  • 김성수;백준영;강범수
    • 한국경영과학회지 / Vol. 42, No. 3 / pp. 25-34 / 2017
  • K-means is a popular and efficient data clustering method, but it uses only intra-cluster distance as a validity measure and requires the number of clusters to be fixed in advance, which makes it unsuitable for unsupervised data where a proper number of clusters is unknown. This paper proposes Group Search Optimization (GSO) using the silhouette index to find an optimal clustering solution, including the number of clusters, for unsupervised data. The silhouette can serve as a validity index for deciding both the number of clusters and the optimal solution because it considers intra- and inter-cluster distances simultaneously. The performance of GSO using the silhouette is validated through experiments and analysis on several data sets.
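
The silhouette criterion used here as the validity index is available directly in scikit-learn; the sketch below scores candidate cluster counts with it, using plain k-means as a stand-in for the paper's GSO metaheuristic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X = load_iris().data
scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Silhouette combines intra-cluster cohesion and inter-cluster
    # separation: the mean of (b - a) / max(a, b) over all points.
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```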

An Optimal Clustering using Hybrid Self Organizing Map

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 1 / pp. 10-14 / 2006
  • Many clustering methods have been studied, and most of them require the number of clusters to be determined in advance. However, there are few methods for determining the number of population clusters objectively, and choosing the cluster count is difficult; in general it is decided subjectively from prior knowledge. Because the results of clustering depend on the number of clusters, it must be determined carefully. In this paper, we propose an efficient method for determining the number of clusters using a hybrid self-organizing map and a new criterion for evaluating the clustering result. In our experiments, we verify the model by comparing it with other clustering methods on data sets from the UCI machine learning repository.
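
One common way to realize such a two-level "SOM then cluster" scheme, shown only as a sketch and not the paper's exact hybrid, is to train a SOM and then cluster its codebook vectors, scoring candidate cluster counts with a validity criterion (here the silhouette, assuming the MiniSom package is available):

```python
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def som_then_cluster(X, grid=(8, 8), k_range=range(2, 8)):
    # Stage 1: unsupervised SOM training compresses X into a codebook.
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X, 1000)
    codebook = som.get_weights().reshape(-1, X.shape[1])
    # Stage 2: cluster the codebook and score each candidate k.
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(codebook)
        score = silhouette_score(codebook, labels)
        if best is None or score > best[1]:
            best = (k, score)
    return best  # (estimated number of clusters, its validity score)
```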

클러스터 타당성 평가기준을 이용한 최적의 클러스터 수 결정을 위한 고속 탐색 알고리즘 (Fast Search Algorithm for Determining the Optimal Number of Clusters using Cluster Validity Index)

  • 이상욱
    • 한국콘텐츠학회논문지 / Vol. 9, No. 9 / pp. 80-89 / 2009
  • We introduce an efficient fast search algorithm for determining the optimal number of clusters in a clustering algorithm. The proposed method is based on cluster validity indices, which serve as measures of clustering fitness. When the clustering process applied to a data set reaches the optimal cluster configuration, the cluster validity index is expected to attain its maximum or minimum value. In this paper, we design a fast, non-exhaustive search method for finding the optimal number of clusters and combine it with the actual clustering process. The proposed algorithm was applied to the k-means++ clustering algorithm, with the CB and PBM cluster validity indices used as evaluation criteria. Experiments on several synthetic and real data sets show that the proposed method dramatically improves computational efficiency without loss of accuracy.
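
A minimal sketch of the idea follows; the Calinski-Harabasz score stands in for the paper's CB and PBM indices, and a simple early-stopping hill climb stands in for its fast search scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def fast_optimal_k(X, k_max=20, patience=2):
    # Non-exhaustive search: stop as soon as the validity index has
    # gone downhill for `patience` consecutive candidates.
    best_k, best_score, misses = 2, -np.inf, 0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, init="k-means++", n_init=10).fit_predict(X)
        score = calinski_harabasz_score(X, labels)
        if score > best_score:
            best_k, best_score, misses = k, score, 0
        else:
            misses += 1
            if misses >= patience:   # index has peaked; stop early
                break
    return best_k
```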

The Effect of the Number of Clusters on Speech Recognition with Clustering by ART2/LBG

  • Lee, Chang-Young
    • 말소리와 음성과학 / Vol. 1, No. 2 / pp. 3-8 / 2009
  • In an effort to improve speech recognition, we investigated the effect of the number of clusters. In usual LBG clustering, the number of codebook clusters is doubled on each bifurcation and hence cannot be chosen arbitrarily in a natural way. To bring the number of clusters under our control, we combined adaptive resonance theory (ART2) with LBG and performed the clustering in two stages. The codebook thus formed was used in the subsequent fuzzy vector quantization (FVQ) and HMM processing for speech recognition tests. Compared to conventional LBG, our method reduced the best recognition error rate by 0~0.9% depending on the vocabulary size. The results also suggest that roughly 400 and 800 clusters are optimal in the limits of small- and large-vocabulary isolated-word recognition, respectively.
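
The constraint the abstract mentions is visible in a minimal LBG implementation: each bifurcation doubles the codebook, so the cluster count is restricted to powers of two. A sketch (the paper's ART2 stage is not shown):

```python
import numpy as np

def lbg(X, target_size=8, eps=0.01, iters=10):
    codebook = X.mean(axis=0, keepdims=True)       # start from one centroid
    while len(codebook) < target_size:
        # Bifurcation: split every centroid into two perturbed copies,
        # doubling the codebook size (1 -> 2 -> 4 -> 8 -> ...).
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                     # Lloyd refinement
            d = ((X[:, None, :] - codebook[None]) ** 2).sum(-1)
            assign = d.argmin(axis=1)
            for j in range(len(codebook)):
                if (assign == j).any():
                    codebook[j] = X[assign == j].mean(axis=0)
    return codebook
```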

Optimal Combination of VNTR Typing for Discrimination of Isolated Mycobacterium tuberculosis in Korea

  • Lee, Jihye;Kang, Heeyoon;Kim, Sarang;Yoo, Heekyung;Kim, Hee Jin;Park, Young Kil
    • Tuberculosis and Respiratory Diseases / Vol. 76, No. 2 / pp. 59-65 / 2014
  • Background: Variable-number tandem repeat (VNTR) typing is a promising method for discriminating Mycobacterium tuberculosis isolates in molecular epidemiology. The purpose of this study is to determine the optimal VNTR combinations for discriminating M. tuberculosis strains isolated in Korea. Methods: A total of 317 clinical isolates collected throughout Korea were genotyped using IS6110 restriction fragment length polymorphism (RFLP) and then analysed for the number of VNTR copies at 32 VNTR loci. Results: The discriminatory power of the various combinations was as follows: the internationally standardized 15 VNTR loci yielded 25 clusters in 83 strains (Hunter-Gaston discriminatory index [HGDI], 0.9958); IS6110 RFLP yielded 25 clusters in 65 strains (HGDI, 0.9977); the 12 hyper-variable VNTR loci yielded 14 clusters in 32 strains (HGDI, 0.9995); all 32 VNTR loci yielded 6 clusters in 13 strains (HGDI, 0.9998); and the 12 hyper-variable VNTR loci combined with IS6110 RFLP yielded 7 clusters in 14 strains (HGDI, 0.9999). Conclusion: Typing with the combination of 12 hyper-variable VNTR loci can be an effective tool for genotyping Korean M. tuberculosis isolates, among which the Beijing strains are predominant.
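
The HGDI values above follow the Hunter-Gaston formula, which depends only on the sizes of the typing groups; a direct implementation (the example group sizes are hypothetical, not the study's data):

```python
# Hunter-Gaston discriminatory index (HGDI):
#   D = 1 - (1 / (N * (N - 1))) * sum_j n_j * (n_j - 1)
# where N is the number of isolates and n_j is the size of type j.
def hgdi(group_sizes):
    N = sum(group_sizes)
    return 1.0 - sum(n * (n - 1) for n in group_sizes) / (N * (N - 1))

# Hypothetical example: a few small clusters plus many unique types.
print(hgdi([5, 3, 2, 2] + [1] * 305))
```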

최적에 가까운 군집화를 위한 이단계 방법 (A Two-Stage Method for Near-Optimal Clustering)

  • 윤복식
    • 한국경영과학회지 / Vol. 29, No. 1 / pp. 43-56 / 2004
  • The purpose of clustering is to partition a set of objects into several clusters based on an appropriate similarity measure. In most cases, clustering must proceed without any prior information on the number of clusters or the structure of the given data, which makes it an example of a very complicated combinatorial optimization problem. In this paper we propose a general-purpose clustering method that can determine the proper number of clusters and efficiently carry out clustering analysis for various types of data. The method is composed of two stages. In the first stage, two different hierarchical clustering methods are used to obtain a reasonably good clustering result, which is then improved in the second stage by an ASA (accelerated simulated annealing) algorithm equipped with specially designed perturbation schemes. Extensive experimental results are given to demonstrate the usefulness of our ASA clustering method.
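
A toy rendering of the two-stage idea follows, with ordinary simulated annealing over single-point reassignments standing in for the paper's ASA and its perturbation schemes:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def sse(X, labels, k):
    # Within-cluster sum of squared errors: the cost to minimize.
    return sum(((X[labels == j] - X[labels == j].mean(0)) ** 2).sum()
               for j in range(k) if (labels == j).any())

def two_stage(X, k, steps=2000, t0=1.0, cooling=0.999, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: hierarchical clustering provides the starting partition.
    labels = fcluster(linkage(X, method="ward"), k, criterion="maxclust") - 1
    cost, t = sse(X, labels, k), t0
    # Stage 2: anneal over random single-point reassignments.
    for _ in range(steps):
        i, new = rng.integers(len(X)), rng.integers(k)
        old = labels[i]
        if new == old:
            continue
        labels[i] = new
        new_cost = sse(X, labels, k)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            labels[i] = old
        t *= cooling
    return labels, cost
```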

An Improved Automated Spectral Clustering Algorithm

  • Xiaodan Lv
    • Journal of Information Processing Systems / Vol. 20, No. 2 / pp. 185-199 / 2024
  • In this paper, an improved automated spectral clustering (IASC) algorithm is proposed to address the limitations of the traditional spectral clustering (TSC) algorithm, particularly its inability to determine the number of clusters automatically. Firstly, a cluster-number evaluation factor based on the optimal clustering principle is proposed: iterating through different k values, the value with the largest evaluation factor is selected as the number of clusters. Secondly, the IASC algorithm adopts a density-sensitive distance to measure the similarity between sample points, which assigns high similarity to data distributed in the same high-density area. Thirdly, to improve clustering accuracy, the IASC algorithm uses the cosine-angle classification method instead of K-means to classify the eigenvectors. Six algorithms (K-means, fuzzy C-means, TSC, EIGENGAP, DBSCAN, and density peak) were compared with the proposed algorithm on six datasets. The results show that the IASC algorithm not only determines the number of clusters automatically but also obtains better clustering accuracy on both synthetic and UCI datasets.
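
For contrast with the paper's evaluation factor, the classical eigengap heuristic (one of the baselines listed above) can be sketched as follows; the RBF similarity graph and the argmax-gap rule are standard choices, not the paper's method:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def eigengap_k(X, gamma=1.0, k_max=10):
    W = rbf_kernel(X, gamma=gamma)                     # similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals = np.sort(np.linalg.eigvalsh(L))
    # The largest gap between consecutive small eigenvalues suggests k.
    gaps = np.diff(eigvals[:k_max + 1])
    return int(np.argmax(gaps)) + 1
```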

음성 인식에서 음소 클러스터 수의 효과 (The Effect of the Number of Phoneme Clusters on Speech Recognition)

  • 이창영
    • 한국전자통신학회논문지 / Vol. 9, No. 11 / pp. 1221-1226 / 2014
  • In this paper, we study the effect of the number of phoneme clusters on the efficiency of speech recognition. For this purpose, codebooks were built with a modified k-means clustering algorithm while the number of phoneme clusters was varied. Speech recognition tests were then performed using fuzzy vector quantization and hidden Markov models. The experimental results show two distinct regions: when the number of phoneme clusters is large, recognition performance is largely independent of it, whereas when the number is small, the recognition error rate grows nonlinearly as the number decreases. Numerical analysis showed that this nonlinear region can be modeled by a power-law function. For the recognition of 300 isolated words, 166 phoneme clusters were found to be optimal, which corresponds to about three variants per phoneme.
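
The reported power-law behavior in the small-cluster region can be reproduced in form with a standard curve fit; the data points below are illustrative placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (cluster count, error rate %) pairs for illustration only.
n_clusters = np.array([25, 50, 75, 100, 150, 166])
error_rate = np.array([32.0, 14.5, 9.1, 6.8, 4.9, 4.5])

def power_law(n, a, b):
    # err(n) ~ a * n^(-b): the functional form fitted in the paper.
    return a * n ** (-b)

(a, b), _ = curve_fit(power_law, n_clusters, error_rate, p0=(100.0, 1.0))
print(f"err(n) ~ {a:.1f} * n^(-{b:.2f})")
```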

Improvement of Self Organizing Maps using Gap Statistic and Probability Distribution

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 8, No. 2 / pp. 116-120 / 2008
  • Clustering is a method for unsupervised learning. General clustering tools have depended on statistical methods and machine learning algorithms. One of the popular machine learning based clustering algorithms is the self-organizing map (SOM), a neural network model for clustering. SOM and its extensions have been used in diverse classification and clustering fields such as data mining, but SOM has a problem in determining the optimal number of clusters. In this paper, we propose an improvement of SOM using the gap statistic and probability distributions. The gap statistic was introduced to estimate the number of clusters in a data set, and we use it to settle this problem of SOM. In addition, the weights of the feature nodes are updated by probability distributions; after the updates according to prior and posterior distributions are complete, the SOM weights follow probability distributions suited to optimal clustering. To verify the improved performance of our method, we run experiments comparing it with other learning algorithms on simulated data sets.
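
The gap statistic the paper borrows compares the within-cluster dispersion of the data against that of uniform reference samples; a compact version with k-means as the clusterer (the paper applies the idea to SOM, and the selection rule here is simplified):

```python
import numpy as np
from sklearn.cluster import KMeans

def log_wk(X, k):
    # log of the within-cluster dispersion W_k, via the k-means inertia
    # (sum of squared distances of points to their cluster centroids).
    return np.log(KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_)

def gap_statistic(X, k_max=10, n_refs=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        # Gap(k) = E*[log W_k] - log W_k, with E* estimated from uniform
        # reference data drawn from the bounding box of X.
        ref = np.mean([log_wk(rng.uniform(lo, hi, X.shape), k)
                       for _ in range(n_refs)])
        gaps.append(ref - log_wk(X, k))
    # Simplified rule: take the k with the largest gap (Tibshirani's
    # original rule also accounts for the reference standard error).
    return int(np.argmax(gaps)) + 1
```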