Title/Summary/Keyword: hyperparameters

Hyperparameter experiments on end-to-end automatic speech recognition

  • Yang, Hyungwon; Nam, Hosung
    • Phonetics and Speech Sciences, v.13 no.1, pp.45-51, 2021
  • End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, Transformer. However, because of the long training time and the large number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and which in training speed. The Transformer network used for training combines encoder and decoder networks with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used in the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce the Word Error Rate (WER) significantly, and that the performance gain is more prominent when they are altered in the encoder network. Training duration also increased linearly as the values of "num blocks" and "linear units" grew. Based on the experimental results, we collected the optimal value of each hyperparameter and reduced the WER by up to 2.9/1.9 on dev93 and eval92, respectively.
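
A hedged sketch of the kind of sweep the abstract describes: the snippet below varies the encoder "num blocks" and "linear units" settings of an encoder-decoder Transformer configuration. The keys mirror ESPnet's naming conventions, but the values and the training stub are hypothetical, not the paper's actual setup.

    import itertools

    # Baseline Transformer configuration (keys follow ESPnet naming conventions;
    # the values here are illustrative, not those used in the paper).
    base_config = {
        "elayers": 12,      # encoder "num blocks"
        "eunits": 2048,     # encoder "linear units"
        "dlayers": 6,       # decoder "num blocks"
        "dunits": 2048,     # decoder "linear units"
        "mtlalpha": 0.3,    # CTC weight in the hybrid CTC/attention loss
    }

    # Sweep the two hyperparameters the paper found most influential.
    for elayers, eunits in itertools.product([6, 12, 24], [1024, 2048, 4096]):
        config = dict(base_config, elayers=elayers, eunits=eunits)
        print(f"would train with num_blocks={elayers}, linear_units={eunits}")
        # train_and_score(config)  # hypothetical helper returning WER on dev93/eval92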

Supervised learning-based DDoS attacks detection: Tuning hyperparameters

  • Kim, Meejoung
    • ETRI Journal, v.41 no.5, pp.560-573, 2019
  • Two supervised learning algorithms, a basic neural network and a long short-term memory recurrent neural network, are applied to traffic that includes DDoS attacks. The joint effects of preprocessing methods and machine-learning hyperparameters on performance are investigated. Values representing attack characteristics are extracted from the datasets and preprocessed by two methods. Binary classification and two optimizers are used. Some hyperparameters are determined by exhaustive search for fast and accurate detection, while others are fixed as constants to account for performance and data characteristics. An experiment is performed via TensorFlow on three traffic datasets. Three scenarios are considered to investigate the effects of learning earlier traffic on sequential traffic analysis and the effects of learning one dataset on application to another, and to determine whether the algorithms can be used on recent attack traffic. Experimental results show that the preprocessing methods, neural network architectures, hyperparameters, and optimizers used are appropriate for DDoS attack detection. The obtained results provide a criterion for the detection accuracy of attacks.
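
A minimal sketch of the kind of LSTM-based binary detector the abstract describes, written with tf.keras. The window length, feature count, and layer sizes are assumptions, not values from the paper.

    import tensorflow as tf

    TIMESTEPS, FEATURES = 10, 7  # assumed window length and number of extracted features

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.LSTM(64),                        # hidden size is a tunable hyperparameter
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output: attack vs. normal traffic
    ])

    # The paper compares two optimizers; Adam is shown as one plausible choice.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=10, batch_size=128)  # hypothetical data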

Experimental performance analysis on the non-negative matrix factorization-based continuous wave reverberation suppression according to hyperparameters

  • Yongon Lee; Seokjin Lee; Kiman Kim; Geunhwan Kim
    • The Journal of the Acoustical Society of Korea, v.42 no.1, pp.32-41, 2023
  • Recently, studies on reverberation suppression using Non-negative Matrix Factorization (NMF) have been actively conducted. The NMF method uses a cost function based on the Kullback-Leibler divergence for optimization, with added constraints such as temporal continuity, pulse length, and the energy ratio between reverberation and target. The strength of each constraint is controlled by a hyperparameter, so effective reverberation suppression requires the hyperparameters to be optimized; however, related studies have so far been insufficient. In this paper, the reverberation suppression performance according to the three NMF hyperparameters was analyzed using sea experimental data. The analysis showed that performance was better when the hyperparameters for temporal continuity and pulse length were set high and the energy-ratio hyperparameter was below 0.4, although the results varied with the ocean environment. The analysis results are expected to serve as a useful guideline for planning precise experiments to optimize the hyperparameters of NMF in the future.
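
The paper's constrained NMF is custom, but its starting point, KL-divergence NMF with multiplicative updates, is available off the shelf. The sketch below shows that baseline with scikit-learn; the spectrogram shape and rank are arbitrary, and the paper's three penalty terms would have to be added to the cost by hand.

    import numpy as np
    from sklearn.decomposition import NMF

    # V: a magnitude spectrogram (frequency bins x time frames); random stand-in here.
    rng = np.random.default_rng(0)
    V = rng.random((257, 400))

    # KL-divergence cost with the multiplicative-update solver, as in the base model.
    # The constraints analyzed in the paper (temporal continuity, pulse length,
    # energy ratio) would enter as penalty terms weighted by its three hyperparameters.
    model = NMF(n_components=16, beta_loss="kullback-leibler", solver="mu",
                max_iter=500, random_state=0)
    W = model.fit_transform(V)  # spectral bases
    H = model.components_       # temporal activations
    V_hat = W @ H               # reconstruction; the target/reverberation split is model-specific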

Quantitative Analysis of Bayesian SPECT Reconstruction: Effects of Using Higher-Order Gibbs Priors

  • S. J. Lee
    • Journal of Biomedical Engineering Research, v.19 no.2, pp.133-142, 1998
  • In Bayesian SPECT reconstruction, the incorporation of elaborate forms of priors can lead to improved quantitative performance in statistical terms such as bias and variance. In particular, higher-order smoothing priors, such as the thin-plate prior, are known to exhibit improved bias behavior compared to conventional smoothing priors such as the membrane prior. However, the bias advantage of the higher-order priors is effective only when the hyperparameters involved in the reconstruction algorithm are properly chosen. In this work, we further investigate the quantitative performance of the two representative smoothing priors, the thin plate and the membrane, by observing the behavior of the associated hyperparameters of the prior distributions. In our experiments we use Monte Carlo noise trials to calculate the bias and variance of reconstruction estimates, and compare the performance of ML-EM estimates to that of regularized EM using both membrane and thin-plate priors, and also to that of filtered backprojection, where the membrane and thin-plate models become simple apodizing filters of specified form. We finally show that the use of higher-order models yields excellent robustness in quantitative performance by demonstrating that the thin plate leads to very low bias error over a large range of hyperparameters while keeping a reasonable variance.
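
To make the contrast between the two priors concrete: the membrane prior penalizes squared first differences of the image, while the thin-plate prior penalizes squared second differences. The numpy sketch below computes both roughness energies; beta weights the prior against the likelihood, and this is a standard discretization (the thin-plate cross term is omitted for brevity), not necessarily the paper's exact form.

    import numpy as np

    def membrane_energy(f, beta):
        """Sum of squared first differences (conventional smoothing prior)."""
        dx = np.diff(f, axis=0)
        dy = np.diff(f, axis=1)
        return beta * (np.sum(dx**2) + np.sum(dy**2))

    def thin_plate_energy(f, beta):
        """Sum of squared second differences (higher-order smoothing prior)."""
        dxx = np.diff(f, n=2, axis=0)
        dyy = np.diff(f, n=2, axis=1)
        return beta * (np.sum(dxx**2) + np.sum(dyy**2))

    f = np.random.default_rng(0).random((64, 64))  # stand-in reconstruction
    print(membrane_energy(f, beta=0.1), thin_plate_energy(f, beta=0.1))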

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters

  • Jae Heon Kim; Hui Do Jung; Beakcheol Jang
    • Journal of Internet Computing and Services, v.24 no.4, pp.127-135, 2023
  • This paper explores the application of FinBERT, a BERT-based model pre-trained on the financial domain, to sentiment analysis in finance, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using the FinBERT model effectively for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning the FinBERT model, emphasizing the effect of different datasets and hyperparameters on sentiment analysis performance. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination, a learning rate of 5e-5 and a batch size of 64, performs consistently well across all datasets. Furthermore, the significant performance improvement of the FinBERT model on general-domain Twitter data compared to general-domain news data raises doubts about further pre-training the model only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters within the fine-tuning process of financial sentiment analysis models.
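
A minimal fine-tuning sketch with the Hugging Face transformers Trainer, using the learning rate (5e-5) and batch size (64) the abstract reports as optimal. The checkpoint ID, epoch count, and dataset variables are assumptions for illustration.

    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_ID = "ProsusAI/finbert"  # assumed checkpoint; the paper's exact model may differ
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=3)

    args = TrainingArguments(
        output_dir="finbert-sentiment",
        learning_rate=5e-5,              # optimal value reported in the abstract
        per_device_train_batch_size=64,  # optimal value reported in the abstract
        num_train_epochs=3,              # assumption; not specified in the abstract
    )

    # train_dataset / eval_dataset: tokenized sentiment datasets (hypothetical).
    # trainer = Trainer(model=model, args=args,
    #                   train_dataset=train_dataset, eval_dataset=eval_dataset)
    # trainer.train()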

Hyperparameter Selection for APC-ECOC

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society, v.19 no.4, pp.1219-1231, 2008
  • The main objective of this paper is to develop a leave-one-out (LOO) bound for all-pairwise-comparison error-correcting output codes (APC-ECOC). To avoid using classifiers whose corresponding target values are 0 in APC-ECOC, and to avoid requiring pilot estimates, we developed a bound based on the mean misclassification probability (MMP). It can be used to tune kernel hyperparameters. Our empirical experiment using the kernel mean squared estimate (KMSE) as the binary classifier indicates that the bound leads to good estimates of kernel hyperparameters.
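
The MMP-based LOO bound itself is not in standard libraries, but the tuning problem it addresses can be sketched with scikit-learn: a one-vs-one wrapper gives the all-pairwise-comparison scheme, and explicit leave-one-out cross-validation plays the role of the expensive reference the bound approximates. SVC stands in for the paper's KMSE binary classifier; all values are illustrative.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, LeaveOneOut
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # One-vs-one = all pairwise comparisons; the grid searches the RBF kernel
    # hyperparameter gamma, the quantity the paper's bound is designed to tune.
    search = GridSearchCV(
        OneVsOneClassifier(SVC(kernel="rbf")),
        param_grid={"estimator__gamma": [0.01, 0.1, 1.0, 10.0]},
        cv=LeaveOneOut(),  # brute-force LOO, which the bound would avoid
    )
    search.fit(X, y)
    print(search.best_params_)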

Development of benthic macroinvertebrate species distribution models using Bayesian optimization

  • Go, ByeongGeon; Shin, Jihoon; Cha, Yoonkyung
    • Journal of Korean Society of Water and Wastewater, v.35 no.4, pp.259-275, 2021
  • This study explored the usefulness and implications of Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and multilayer perceptron (MLP), were used to predict the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned model hyperparameters, with all ML models achieving an area under the curve (AUC) > 0.7. Hyperparameter search ranges that generally clustered around the optimal values also suggest the efficiency of Bayesian optimization in finding optimal sets of hyperparameters. Tree-based ensemble algorithms (BRT, RF, and XGB) tended to show higher performance than SVM and MLP. Important hyperparameters and optimal values differed by species and ML model, indicating the necessity of hyperparameter tuning for improving individual model performance. The optimization results show that, for all macroinvertebrate species, SVM and RF required fewer trials to obtain optimal hyperparameter sets, leading to reduced computational cost compared to the other ML algorithms. These results suggest that Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
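
In the spirit of the study, the sketch below runs Bayesian optimization over a random forest's hyperparameters with Optuna (a TPE sampler by default) and cross-validated AUC as the objective. The search ranges, trial count, and synthetic data are assumptions.

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Stand-in for species presence/absence records with environmental predictors.
    X, y = make_classification(n_samples=500, n_features=12, random_state=0)

    def objective(trial):
        model = RandomForestClassifier(
            n_estimators=trial.suggest_int("n_estimators", 50, 500),
            max_depth=trial.suggest_int("max_depth", 2, 20),
            min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 10),
            random_state=0,
        )
        return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params, study.best_value)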

Model-independent Constraints on Type Ia Supernova Light-curve Hyperparameters and Reconstructions of the Expansion History of the Universe

  • Koo, Hanwool; Shafieloo, Arman; Keeley, Ryan E.; L'Huillier, Benjamin
    • The Bulletin of The Korean Astronomical Society, v.45 no.1, pp.48.4-49, 2020
  • We reconstruct the expansion history of the universe using type Ia supernovae (SN Ia) in a manner independent of any cosmological model assumptions. To do so, we implement a nonparametric iterative smoothing method on the Joint Light-curve Analysis (JLA) data while exploring the SN Ia light-curve hyperparameter space by Markov Chain Monte Carlo (MCMC) sampling. We test how the posteriors of these hyperparameters depend on cosmology, i.e., whether using different dark energy models or reconstructions shifts these posteriors. Our constraints on the SN Ia light-curve hyperparameters from this model-independent analysis are very consistent with the constraints obtained using different parameterizations of the dark energy equation of state, namely the flat ΛCDM cosmology, the Chevallier-Polarski-Linder model, and the Phenomenologically Emergent Dark Energy (PEDE) model. This implies that the distance moduli constructed from the JLA data are largely independent of the cosmological model. We also studied the possibility that the light-curve parameters evolve with redshift, and our results are consistent with no evolution. The reconstructed expansion history of the universe and dark energy properties also appear to be in good agreement with the expectations of the standard ΛCDM model. However, our results indicate that the data still allow considerable flexibility in the expansion history of the universe. This work is published in ApJ.
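
The light-curve hyperparameters here are the standardization parameters of the Tripp relation, mu = m_B - M_B + alpha*x1 - beta*c. Below is a toy MCMC over (alpha, beta, M_B) with emcee, using fake data and a fixed fiducial distance in place of the paper's iterative smoothing reconstruction; every number is an assumption.

    import numpy as np
    import emcee

    rng = np.random.default_rng(1)
    N = 100
    x1, c = rng.normal(0, 1, N), rng.normal(0, 0.1, N)  # stretch and color (fake)
    mu_true = 35.0                                      # stand-in distance modulus
    mB = mu_true - 19.3 - 0.14 * x1 + 3.1 * c + rng.normal(0, 0.15, N)
    sigma = 0.15

    def log_prob(theta):
        alpha, beta, MB = theta
        mu = mB - MB + alpha * x1 - beta * c  # Tripp relation
        resid = mu - mu_true                  # vs. the reconstructed expansion history
        return -0.5 * np.sum(resid**2 / sigma**2)

    ndim, nwalkers = 3, 16
    p0 = np.array([0.14, 3.1, -19.3]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    print(sampler.get_chain(discard=500, flat=True).mean(axis=0))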

A Bayesian Method for Narrowing the Scope of Variable Selection in Binary Response t-Link Regression

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society, v.29 no.4, pp.407-422, 2000
  • This article is concerned with selecting the predictor variables to be included when building a class of binary response t-link regression models, in which both the probit and logistic regression models can be approximately taken as members of the class. Based on a modification of the stochastic search variable selection (SSVS) method, it proposes and develops a Bayesian procedure that uses probabilistic considerations to select promising subsets of predictor variables. The procedure reformulates the binary response t-link regression setup as a hierarchical truncated normal mixture model by introducing a set of hyperparameters that are used to identify subset choices. In this setup, the most promising subset of predictors can be identified as the one with the highest posterior probability in the marginal posterior distribution of the hyperparameters. To highlight the merit of the procedure, an illustrative numerical example is given.
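
For context, the SSVS device the procedure modifies (originally due to George and McCulloch) places a two-component normal mixture on each coefficient, indexed by a binary hyperparameter; in LaTeX,

    \beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,\mathcal{N}(0,\,\tau_j^2)
        + \gamma_j\,\mathcal{N}(0,\,c_j^2\,\tau_j^2), \qquad \gamma_j \in \{0,1\},

where a small spike variance tau_j^2 effectively excludes x_j, and the promising subsets are those maximizing the marginal posterior p(gamma | y). The paper adapts this device to the truncated normal mixture representation of the t-link model.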

Multinomial Kernel Logistic Regression via Bound Optimization Approach

  • Shim, Joo-Yong; Hong, Dug-Hun; Kim, Dal-Ho; Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods, v.14 no.3, pp.507-516, 2007
  • Multinomial logistic regression is probably the most popular representative of probabilistic discriminative classifiers for multiclass classification problems. In this paper, a kernel variant of multinomial logistic regression is proposed by combining Newton's method with a bound optimization approach. This formulation allows highly efficient approximation methods to be applied, effectively overcoming the conceptual and numerical problems of standard multiclass kernel classifiers. We also provide an approximate cross-validation (ACV) method for choosing the hyperparameters that affect the performance of the proposed approach. Experimental results are then presented to indicate the performance of the proposed procedure.
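
The paper's bound-optimization solver and ACV criterion are not reproduced here, but the model class itself, multinomial logistic regression in an RBF kernel feature space, can be sketched by fitting scikit-learn's logistic regression on a precomputed Gram matrix. gamma and C below are the hyperparameters a method like ACV would choose; the values are arbitrary.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import rbf_kernel

    X, y = load_iris(return_X_y=True)

    gamma = 0.5                        # kernel hyperparameter (to be chosen, e.g., by ACV)
    K = rbf_kernel(X, X, gamma=gamma)  # n x n Gram matrix used as explicit features

    # lbfgs fits the softmax (multinomial) model directly; C is the regularization
    # hyperparameter, the second quantity a cross-validation method would tune.
    clf = LogisticRegression(C=1.0, max_iter=1000)
    clf.fit(K, y)

    # Prediction for new points uses kernel values against the training set:
    # K_test = rbf_kernel(X_test, X, gamma=gamma); clf.predict(K_test)
    print(clf.score(K, y))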