• Title/Abstract/Keywords: ensemble methods

Search results: 274

Improving Bagging Predictors

  • Kim, Hyun-Joong;Chung, Dong-Jun
    • 한국통계학회:학술대회논문집 / 한국통계학회 2005년도 추계 학술발표회 논문집 / pp.141-146 / 2005
  • The ensemble method is known as one of the most powerful classification tools for improving prediction accuracy, and it can be understood as a 'perturb and combine' strategy. Many studies have tried to develop ensemble methods by improving the perturbation step. In this paper, we propose two new ensemble methods that instead improve the combining step, based on the idea of pattern matching. In experiments with simulated data and a real dataset, the proposed ensemble methods performed better than bagging, and they gave the most accurate predictions when pruned trees were used as the base learner.
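
As a rough illustration of the bagging baseline that this paper improves on, the sketch below fits a standard bagged ensemble of pruned decision trees. The dataset, pruning parameter, and ensemble size are placeholder assumptions, and the paper's pattern-matching combination scheme is not reproduced here.

```python
# A minimal bagging baseline with pruned trees as base learners.
# Dataset, ccp_alpha, and n_estimators are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cost-complexity pruning stands in for the "pruned tree" base learner.
pruned_tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
bagging = BaggingClassifier(pruned_tree, n_estimators=100, random_state=0)
bagging.fit(X_train, y_train)
print("bagging accuracy:", bagging.score(X_test, y_test))
```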

Ensemble Model Output Statistics를 이용한 평창지역 다중 모델 앙상블 결합 및 보정 (A Combination and Calibration of Multi-Model Ensemble of PyeongChang Area Using Ensemble Model Output Statistics)

  • 황유선;김찬수
    • 대기 / Vol. 28, No. 3 / pp.247-261 / 2018
  • The objective of this paper is to compare probabilistic temperature forecasts from different regional and global ensemble prediction systems over the PyeongChang area. A statistical post-processing method is used for the combination and calibration of forecasts from the different numerical prediction systems, laying greater weight on the ensemble model that exhibits the best performance. Temperature observations were obtained from 30 stations in PyeongChang, and three different ensemble forecasts, from the European Centre for Medium-Range Weather Forecasts, the Ensemble Prediction System for Global, and the Limited Area Ensemble Prediction System, were obtained between 1 May 2014 and 18 March 2017. Before applying the post-processing methods, a reliability analysis was conducted to check the statistical consistency of the ensemble forecasts with the corresponding observations. Ensemble model output statistics and bias-correction methods were then applied to each raw ensemble model and to the proposed weighted combination of ensembles. The results showed that the proposed methods perform better than the raw ensemble mean. In particular, the multi-model forecast based on ensemble model output statistics was superior to the bias-corrected forecast in terms of deterministic prediction.
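
For readers unfamiliar with ensemble model output statistics (EMOS), the toy sketch below fits only the mean part of an EMOS-style regression on synthetic data. The paper's CRPS-based estimation, variance modelling, and multi-model weighting are not reproduced; all numbers below are made up for illustration.

```python
# Toy EMOS-style mean calibration: regress observations on the ensemble mean.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_members = 200, 20
truth = 10 + 5 * np.sin(np.linspace(0, 6, n_days))            # synthetic "observations"
ens = truth[:, None] + 1.5 + rng.normal(0, 2, (n_days, n_members))  # biased ensemble

ens_mean = ens.mean(axis=1)

# Ordinary least squares of observations on the raw ensemble mean.
A = np.column_stack([np.ones(n_days), ens_mean])
a, b = np.linalg.lstsq(A, truth, rcond=None)[0]
calibrated_mean = a + b * ens_mean

print("raw bias:       ", (ens_mean - truth).mean())
print("calibrated bias:", (calibrated_mean - truth).mean())
```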

Double-Bagging Ensemble Using WAVE

  • Kim, Ahhyoun;Kim, Minji;Kim, Hyunjoong
    • Communications for Statistical Applications and Methods / Vol. 21, No. 5 / pp.411-422 / 2014
  • A classification ensemble method aggregates different classifiers obtained from training data to classify new data points. Voting algorithms are typical tools for summarizing the outputs of the classifiers in an ensemble. WAVE, proposed by Kim et al. (2011), is a weight-adjusted voting algorithm that assigns an optimal weight vector to the classifiers in an ensemble. In this study, we applied the WAVE algorithm to the double-bagging method (Hothorn and Lausen, 2003) when constructing an ensemble, to see whether a significant improvement in performance can be achieved. The results showed that double-bagging with the WAVE algorithm performs better than other ensemble methods that employ plurality voting. In addition, double-bagging with the WAVE algorithm is comparable to the random forest ensemble method when the ensemble size is large.
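
The sketch below only illustrates where per-classifier weights enter a weighted plurality vote. The weights used here are plain validation accuracies, not the optimal WAVE weight vector of Kim et al. (2011), and the double-bagging construction is not reproduced; data and ensemble size are placeholders.

```python
# Weighted plurality voting over a bagged ensemble of trees.
# Validation accuracy stands in for a proper weight-adjusted (WAVE-style) weight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
members, weights = [], []
for k in range(25):                                  # bootstrap the training set
    idx = rng.integers(0, len(X_tr), len(X_tr))
    clf = DecisionTreeClassifier(random_state=k).fit(X_tr[idx], y_tr[idx])
    members.append(clf)
    weights.append(clf.score(X_val, y_val))          # stand-in classifier weight

votes = np.stack([clf.predict(X_val) for clf in members])   # (n_members, n_val)
tally = np.zeros((len(X_val), 2))
for w, row in zip(weights, votes):
    tally[np.arange(len(X_val)), row] += w                   # weighted vote count
print("weighted-vote accuracy:", (tally.argmax(axis=1) == y_val).mean())
```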

사례 선택 기법을 활용한 앙상블 모형의 성능 개선 (Improving an Ensemble Model Using Instance Selection Method)

  • 민성환
    • 산업경영시스템학회지 / Vol. 39, No. 1 / pp.105-115 / 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models, and ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers: it randomly draws a subset of the features for each classifier in the ensemble. The instance selection technique selects critical instances while removing irrelevant and noisy instances from the original dataset. Both methods are well known in the field of data mining and have proven very effective in many applications, yet few studies have focused on integrating them. This study therefore proposes a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which are then used as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, level of diversity, and average classification rate of the base classifiers in the ensemble. The experimental results demonstrated that the proposed model outperformed the other models, including the single model, the instance selection model, and the original random subspace ensemble model.
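
As a hedged sketch of the random subspace component only: scikit-learn's BaggingClassifier can disable bootstrapping and subsample features instead. The GA-based instance selection step is not reproduced; in the paper's pipeline, the GA-selected instances would be the training data fed into this ensemble. Data and parameters below are placeholders.

```python
# Random subspace ensemble: each member tree sees a random half of the features.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

random_subspace = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    bootstrap=False,        # use all (selected) instances for every member
    max_features=0.5,       # random feature subset per member
    random_state=0,
)
print("CV accuracy:", cross_val_score(random_subspace, X, y, cv=5).mean())
```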

앙상블 방법에 따른 WRF/CMAQ 수치 모의 결과 비교 연구 - 2013년 부산지역 고농도 PM10 사례 (A Comparison Study of Ensemble Approach Using WRF/CMAQ Model - The High PM10 Episode in Busan)

  • 김태희;김유근;손장호;정주희
    • 한국대기환경학회지 / Vol. 32, No. 5 / pp.513-525 / 2016
  • To find an effective ensemble method for predicting $PM_{10}$ concentration, six experiments were designed with different ensemble averaging methods (non-weighted, single weighted, and cluster weighted). The single weighted method calculated the weights using both multiple regression analysis and singular value decomposition, while the cluster weighted method estimated the weights based on temperature, relative humidity, and wind components using multiple regression analysis. The weighted averages performed significantly better than the non-weighted average, and the results of the weighted ensembles differed according to how the weights were calculated. The single weighted average using multiple regression analysis showed the highest accuracy for hourly $PM_{10}$ concentration, and the cluster weighted average based on relative humidity showed the highest accuracy for daily mean $PM_{10}$ concentration. However, ensemble spread analysis showed better reliability for the single weighted average than for the cluster weighted average based on relative humidity. Thus, the single weighted average method was the most effective method in this case.
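
A toy numerical sketch of the contrast between a non-weighted ensemble mean and a single weighted average whose weights come from multiple regression, as in the experiments above. The member forecasts are synthetic, and the SVD-based and cluster weighted variants are not shown.

```python
# Non-weighted mean vs. regression-weighted average of three synthetic forecasts.
import numpy as np

rng = np.random.default_rng(2)
n = 300
obs = 60 + 20 * rng.random(n)                                  # synthetic "observed" PM10
models = np.column_stack([obs + rng.normal(b, s, n)            # three member forecasts
                          for b, s in [(10, 8), (-5, 12), (2, 20)]])

simple_mean = models.mean(axis=1)

# Single weighted method: multiple regression of observations on member forecasts.
A = np.column_stack([np.ones(n), models])
coef = np.linalg.lstsq(A, obs, rcond=None)[0]
weighted = A @ coef

def rmse(forecast):
    return np.sqrt(np.mean((forecast - obs) ** 2))

print("simple mean RMSE:", rmse(simple_mean), "| weighted RMSE:", rmse(weighted))
```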

수중 표적 식별을 위한 앙상블 학습 (Ensemble Learning for Underwater Target Classification)

  • 석종원
    • 한국멀티미디어학회논문지 / Vol. 18, No. 11 / pp.1261-1267 / 2015
  • The problem of underwater target detection and classification has attracted substantial attention from many researchers for both military and non-military purposes, and it is complicated by varying environmental conditions. In this paper, we study classifier ensemble methods for active sonar target classification to improve classification performance. In general, classifier ensembles are useful for classifiers with relatively large variance, such as decision trees and neural networks. Bagging, random sample selection, random subspace, and rotation forest were selected as the ensemble methods. Using these four ensemble methods with 31 neural network base classifiers, classification tests were carried out and the performances were compared.
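
The sketch below shows just one of the four listed strategies (bagging) applied to small neural-network base classifiers, with 31 members as in the paper. Active sonar features are not available here, so synthetic data stand in, and rotation forest is omitted because it is not part of scikit-learn.

```python
# Bagging of 31 small MLP base classifiers on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=800, n_features=40, n_informative=15,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=3),
    n_estimators=31,            # 31 base networks, mirroring the paper's setup
    random_state=3,
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```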

특징 강화 방법의 앙상블을 이용한 화자 식별 (Speaker Identification Using an Ensemble of Feature Enhancement Methods)

  • 양일호;김민석;소병민;김명재;유하진
    • 말소리와 음성과학 / Vol. 3, No. 2 / pp.71-78 / 2011
  • In this paper, we propose an approach that constructs classifier ensembles from various channel compensation and feature enhancement methods. CMN and CMVN are used as channel compensation methods; PCA, kernel PCA, greedy kernel PCA, and kernel multimodal discriminant analysis are used as feature enhancement methods. The proposed ensemble system combines 15 classifiers arising from three channel compensation options (including 'without compensation') and five feature enhancement options (including 'without enhancement'). Experimental results show that the proposed ensemble system gives the highest average speaker identification rate across various environments (channels, noises, and sessions).
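
As a small concrete example of one of the channel compensation methods named above, the sketch below applies CMVN (cepstral mean and variance normalization) to a random placeholder feature matrix. The kernel-PCA enhancements and the 15-classifier combination are not reproduced.

```python
# CMVN: per-utterance mean and variance normalization of cepstral features.
import numpy as np

def cmvn(features, eps=1e-8):
    """Normalize a (frames x dims) feature matrix along the time axis."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + eps)

utterance = np.random.default_rng(4).normal(size=(300, 13))   # 300 frames, 13 coeffs
normalized = cmvn(utterance)
print(normalized.mean(axis=0).round(6))   # ~0 per dimension
print(normalized.std(axis=0).round(6))    # ~1 per dimension
```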

Ensemble approach for improving prediction in kernel regression and classification

  • Han, Sunwoo;Hwang, Seongyun;Lee, Seokho
    • Communications for Statistical Applications and Methods / Vol. 23, No. 4 / pp.355-362 / 2016
  • Ensemble methods often improve prediction in various predictive models by combining multiple weak learners and reducing the variability of the final predictive model. In this work, we demonstrate that ensemble methods also enhance prediction accuracy for kernel ridge regression and kernel logistic regression classification. We apply bagging and random forests to these two kernel-based predictive models and present how bagging and random forests can be embedded in them. Our proposals are tested on numerous synthetic and real datasets and compared with the plain kernel-based predictive models and their subsampling approach. The numerical studies demonstrate that the ensemble approach outperforms the plain kernel-based predictive models.
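
A brief sketch of the general idea of embedding bagging in a kernel-based predictive model: bootstrap-aggregated KernelRidge regressors compared with a single fit. Hyperparameters and data are illustrative assumptions, not those used in the paper.

```python
# Single kernel ridge regression vs. a bagged ensemble of kernel ridge models.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import BaggingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=400, noise=1.0, random_state=5)

single = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
bagged = BaggingRegressor(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1),
                          n_estimators=50, random_state=5)

for name, model in [("single kernel ridge", single), ("bagged kernel ridge", bagged)]:
    print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```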

Social Media Data Analysis Trends and Methods

  • Rokaya, Mahmoud;Al Azwari, Sanaa
    • International Journal of Computer Science & Network Security / Vol. 22, No. 9 / pp.358-368 / 2022
  • Social media is a window through which individuals, communities, and companies spread ideas and promote trends and products. With these opportunities come challenges and problems related to security, privacy, and rights. The data accumulated from social media have also become a fertile source for analytics, inference, and experimentation with new technologies in the field of data science. In this paper, emphasis is given to methods of trend analysis, especially ensemble learning methods, which embrace cooperation between different learning methods rather than competition between them. We discuss the most important trends in ensemble learning and their applications in analysing social media data, and anticipate the most important future trends.

데이터 전처리와 앙상블 기법을 통한 불균형 데이터의 분류모형 비교 연구 (A Comparison of Ensemble Methods Combining Resampling Techniques for Class Imbalanced Data)

  • 이희재;이성임
    • 응용통계연구 / Vol. 27, No. 3 / pp.357-371 / 2014
  • Class imbalance in the target variable has recently received considerable attention in data-mining classification problems. To address it, previous studies have applied data preprocessing to the original data: undersampling, which reduces the majority class to match the proportion of the minority class; oversampling, which resamples the minority class with replacement to match the proportion of the majority class; and hybrid techniques, which oversample the minority class using methods such as K-nearest neighbors and then undersample the majority class. Ensemble techniques are also known to improve classification performance on such imbalanced data. In this paper, we therefore compare and evaluate the classification performance, on imbalanced data, of several models that combine data preprocessing with ensemble techniques.
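
To make the comparison concrete, the sketch below implements one of the simplest pipelines of the kind compared in the paper: random oversampling of the minority class followed by a bagged-tree ensemble, on synthetic imbalanced data. The K-nearest-neighbor-based hybrid resampling and the specific models evaluated in the paper are not reproduced.

```python
# Random oversampling of the minority class, then a bagged-tree ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=6)

# Oversample the minority class up to the majority-class count.
majority, minority = X_tr[y_tr == 0], X_tr[y_tr == 1]
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=6)
X_bal = np.vstack([majority, minority_up])
y_bal = np.array([0] * len(majority) + [1] * len(minority_up))

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=6)
ensemble.fit(X_bal, y_bal)
print("balanced accuracy:", balanced_accuracy_score(y_te, ensemble.predict(X_te)))
```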