Title/Summary/Keyword: kernel trick


Kernel-Trick Regression and Classification

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods, v.22 no.2, pp.201-207, 2015
  • Support vector machine (SVM) is a well-known kernel-trick supervised learning tool. This study proposes a working scheme for kernel-trick regression and classification (KtRC) as an SVM alternative. KtRC fits the model on a number of random subsamples and selects the best model. Empirical examples and a simulation study indicate that KtRC's performance is comparable to SVM.
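
A minimal sketch of the subsample-and-select idea in this abstract, assuming kernel ridge regression as the kernel-trick fit and held-out mean squared error as the selection criterion (the paper's fitting and selection details may differ); the data and all parameter values are illustrative:

```python
# Sketch: fit a kernel-trick regression on several random subsamples
# and keep the model with the best held-out error (KtRC-style idea).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=500)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_model, best_err = None, np.inf
for _ in range(20):                                # number of random subsamples
    idx = rng.choice(len(X_tr), size=100, replace=False)
    model = KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1)
    model.fit(X_tr[idx], y_tr[idx])
    err = np.mean((model.predict(X_val) - y_val) ** 2)
    if err < best_err:
        best_model, best_err = model, err

print(f"validation MSE of selected model: {best_err:.4f}")
```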

On the Support Vector Machine with the kernel of the q-normal distribution

  • Joguchi, Hirofumi;Tanaka, Masaru
    • Proceedings of the IEEK Conference, 2002.07b, pp.983-986, 2002
  • Support Vector Machine (SVM) is a pattern recognition method that separates input data with a hyperplane. The method achieves high recognition capability through a technique known as the kernel trick, in which the radial basis function (RBF) kernel is usually used as the kernel function. In this paper we propose using the q-normal distribution as the kernel function instead of the conventional RBF kernel, and we compare the two types of kernel function.
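
The paper's exact kernel form is not given here; a plausible sketch uses the Tsallis q-exponential to define a q-Gaussian kernel and plugs it into an SVM as a precomputed Gram matrix. The q-Gaussian form, the function names, and all parameter values are assumptions for illustration:

```python
# Sketch: a q-Gaussian kernel via the Tsallis q-exponential, used with an
# SVM through a precomputed Gram matrix. The paper's "q-normal" kernel may
# be parameterized differently; this is one standard q-Gaussian form.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import euclidean_distances

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian_kernel(A, B, q=1.5, sigma=1.0):
    d2 = euclidean_distances(A, B, squared=True)
    return q_exp(-d2 / (2.0 * sigma**2), q)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # nonlinear labels

K = q_gaussian_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```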


The Kernel Trick for Content-Based Media Retrieval in Online Social Networks

  • Cha, Guang-Ho
    • Journal of Information Processing Systems, v.17 no.5, pp.1020-1033, 2021
  • Nowadays, online and mobile social network services (SNS) are very popular and widely used in daily life to instantly share, disseminate, and search information. In particular, SNS such as YouTube, Flickr, Facebook, and Amazon allow users to upload billions of images or videos and provide large amounts of multimedia information to users. Information retrieval in multimedia-rich SNS is a useful but challenging task. Content-based media retrieval (CBMR) is the process of obtaining the image or video objects relevant to a given query from a collection of information sources. However, CBMR suffers from the curse of dimensionality due to the inherently high-dimensional features of media data. This paper investigates the effectiveness of the kernel trick in CBMR, specifically kernel principal component analysis (KPCA) for feature extraction and dimensionality reduction. KPCA is a nonlinear extension of linear principal component analysis (LPCA) that discovers nonlinear embeddings using the kernel trick. The fundamental idea of KPCA is to map the input data into a high-dimensional feature space through a nonlinear kernel function and then compute the principal components in that mapped space. Using the Gaussian kernel in our experiments, we compute the principal components of an image dataset in the transformed space and use them as new feature dimensions for the dataset. Moreover, KPCA can be applied to many other domains beyond CBMR wherever LPCA has been used to extract features and a nonlinear extension would be effective. Results from extensive experiments demonstrate that the potential of KPCA is very encouraging compared with LPCA in CBMR.
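
A minimal sketch of the KPCA-versus-LPCA feature extraction step described above, with random vectors standing in for real media features; the component counts and kernel width are illustrative:

```python
# Sketch: kernel PCA with a Gaussian (RBF) kernel as a drop-in nonlinear
# replacement for linear PCA in feature extraction, as the paper studies.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))            # e.g. 128-dim media features

linear = PCA(n_components=16).fit_transform(X)
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=1.0 / 128).fit_transform(X)

print("LPCA features:", linear.shape)      # (300, 16)
print("KPCA features:", kpca.shape)        # (300, 16)
# The reduced features can then index and query the collection in 16
# dimensions instead of 128, mitigating the dimensionality curse.
```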

Semiparametric Kernel Poisson Regression for Longitudinal Count Data

  • Hwang, Chang-Ha;Shim, Joo-Yong
    • Communications for Statistical Applications and Methods, v.15 no.6, pp.1003-1011, 2008
  • Mixed-effect Poisson regression models are widely used for the analysis of correlated count data such as those found in longitudinal studies. In this paper, we consider kernel extensions with semiparametric fixed effects and parametric random effects. The estimation is carried out through the penalized likelihood method based on the kernel trick, and our focus is on efficient computation and effective hyperparameter selection. For the selection of hyperparameters, cross-validation techniques are employed. Examples illustrating the usage and features of the proposed method are provided.
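
A sketch of the kernel-trick core of such a model: penalized-likelihood kernel Poisson regression fitted by Newton-Raphson. The paper's semiparametric fixed effects and parametric random effects are omitted, and the toy data and hyperparameters are illustrative:

```python
# Sketch: penalized-likelihood kernel Poisson regression via Newton-Raphson.
# Model: log(mean) = K @ alpha; objective adds the penalty (lam/2) a'Ka.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(150, 1))
y = rng.poisson(np.exp(1.0 + np.sin(2 * X).ravel()))

K = rbf_kernel(X, gamma=2.0)
lam = 0.1                                   # penalty hyperparameter
alpha = np.zeros(len(X))

for _ in range(25):                         # Newton-Raphson iterations
    mu = np.exp(K @ alpha)                  # Poisson mean, log link
    grad = (mu - y) + lam * alpha           # gradient with K factored out
    hess = np.diag(mu) @ K + lam * np.eye(len(X))
    alpha -= np.linalg.solve(hess, grad)

print("fitted mean at x=0.5:", np.exp(rbf_kernel([[0.5]], X, gamma=2.0) @ alpha))
```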

Estimating Variance Function with Kernel Machine

  • Kim, Jong-Tae;Hwang, Chang-Ha;Park, Hye-Jung;Shim, Joo-Yong
    • Communications for Statistical Applications and Methods, v.16 no.2, pp.383-388, 2009
  • In this paper we propose a variance function estimation method based on the kernel trick for replicated data or data consisting of sample variances. The Newton-Raphson method is used to obtain the associated parameter vector. Furthermore, the generalized approximate cross-validation function is introduced to select the hyperparameters that affect the performance of the proposed variance function estimation method. Experimental results illustrating the performance of the proposed procedure are then presented.
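
The paper fits the variance function by penalized likelihood with Newton-Raphson and selects hyperparameters by GACV; the sketch below only illustrates the kernel-trick structure of the problem, using a simpler least-squares surrogate on log sample variances:

```python
# Sketch: estimate a variance function from replicated data by kernel ridge
# regression on log sample variances. This least-squares surrogate stands in
# for the paper's likelihood-based Newton-Raphson fit with GACV selection.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)[:, None]
true_sd = 0.5 + 0.4 * np.sin(2 * np.pi * x.ravel())
reps = rng.normal(scale=true_sd, size=(10, 80))     # 10 replicates per x
s2 = reps.var(axis=0, ddof=1)                       # sample variances

model = KernelRidge(kernel="rbf", gamma=10.0, alpha=0.05)
model.fit(x, np.log(s2))                            # fit on the log scale
sigma2_hat = np.exp(model.predict(x))               # back-transform

print("max abs error in sd:", np.abs(np.sqrt(sigma2_hat) - true_sd).max())
```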

Semiparametric kernel logistic regression with longitudinal data

  • Shim, Joo-Yong;Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society, v.23 no.2, pp.385-392, 2012
  • Logistic regression is a well-known binary classification method in the field of statistical learning. Mixed-effect regression models are widely used for the analysis of correlated data such as those found in longitudinal studies. We consider kernel extensions of logistic regression with semiparametric fixed effects and parametric random effects. The estimation is performed through the penalized likelihood method based on the kernel trick, and our focus is on efficient computation and effective hyperparameter selection. For the selection of optimal hyperparameters, cross-validation techniques are employed. Numerical results are then presented to indicate the performance of the proposed procedure.
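
A sketch of the kernel-trick core: penalized-likelihood kernel logistic regression fitted by Newton-Raphson, omitting the paper's semiparametric fixed effects, random effects, and longitudinal structure; the data and hyperparameters are illustrative:

```python
# Sketch: penalized-likelihood kernel logistic regression via Newton-Raphson.
# Model: logit(p) = K @ alpha, with ridge-type penalty (lam/2) a'Ka.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # nonlinear XOR-like labels

K = rbf_kernel(X, gamma=1.0)
lam = 0.1
alpha = np.zeros(len(X))

for _ in range(25):                                # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-K @ alpha))           # fitted probabilities
    grad = (p - y) + lam * alpha                   # gradient with K factored out
    W = p * (1.0 - p)                              # logistic weights
    hess = W[:, None] * K + lam * np.eye(len(X))
    alpha -= np.linalg.solve(hess, grad)

print("training accuracy:", np.mean((K @ alpha > 0) == (y == 1)))
```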

Arrow Diagrams for Kernel Principal Component Analysis

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods, v.20 no.3, pp.175-184, 2013
  • Kernel principal component analysis (PCA) maps observations in a nonlinear feature space onto a reduced-dimensional plane of principal components. The feature space need not be specified explicitly because the procedure uses the kernel trick. In this paper, we propose a graphical scheme to represent variables in kernel principal component analysis. In addition, we propose an index for individual variables that measures their importance in the principal component plane.
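
A hedged sketch of the idea: plot observations on the first two kernel principal components and overlay one arrow per input variable. Here each arrow is the correlation of the variable with the two component scores; the paper's arrow construction and importance index may differ in detail:

```python
# Sketch: variable arrows over a kernel PCA plane, using correlations of
# each input variable with the component scores as arrow coordinates.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import scale

iris = load_iris()
X = scale(iris.data)                              # standardize the variables
Z = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit_transform(X)

plt.scatter(Z[:, 0], Z[:, 1], s=10, alpha=0.4)
for j, name in enumerate(iris.feature_names):
    ax = np.corrcoef(X[:, j], Z[:, 0])[0, 1]      # correlation with KPC 1
    ay = np.corrcoef(X[:, j], Z[:, 1])[0, 1]      # correlation with KPC 2
    plt.arrow(0, 0, ax, ay, head_width=0.02, color="red")
    plt.annotate(name, (ax, ay))
plt.xlabel("KPC 1"); plt.ylabel("KPC 2")
plt.show()
```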

Mixed Effects Kernel Binomial Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.19 no.4, pp.1327-1334, 2008
  • Mixed-effect binomial regression models are widely used for the analysis of correlated count data in which the response is the result of a series of one of two possible disjoint outcomes. In this paper, we consider kernel extensions with nonparametric fixed effects and parametric random effects. The estimation is carried out through the penalized likelihood method based on the kernel trick, and our focus is on efficient computation and effective hyperparameter selection. For the selection of hyperparameters, cross-validation techniques are employed. Examples illustrating the usage and features of the proposed method are provided.
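
The hyperparameter-selection step that this line of work relies on can be sketched generically as cross-validation over a grid of penalty and kernel-width values; GridSearchCV and kernel ridge regression stand in here for the paper's model and CV technique:

```python
# Sketch: cross-validated selection of the penalty and RBF kernel width,
# shown with a generic kernel model; the grid and data are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.tanh(3 * X).ravel() + rng.normal(scale=0.1, size=200)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],      # penalty hyperparameter
        "gamma": [0.1, 0.5, 1.0, 5.0]}         # RBF kernel width
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print("selected hyperparameters:", search.best_params_)
```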


Ensemble approach for improving prediction in kernel regression and classification

  • Han, Sunwoo;Hwang, Seongyun;Lee, Seokho
    • Communications for Statistical Applications and Methods, v.23 no.4, pp.355-362, 2016
  • Ensemble methods often increase the prediction ability of various predictive models by combining multiple weak learners and reducing the variability of the final model. In this work, we demonstrate that ensemble methods also enhance prediction accuracy in kernel ridge regression and kernel logistic regression classification. We apply bagging and random forests to these two kernel-based predictive models and present the procedure by which bagging and random forests can be embedded in them. Our proposals are tested on numerous synthetic and real datasets and compared with plain kernel-based predictive models and their subsampling approach. Numerical studies demonstrate that the ensemble approach outperforms plain kernel-based predictive models.
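
A minimal sketch of the bagging half of the proposal: average kernel ridge regression fits over bootstrap resamples (the random-forest-style variant would additionally randomize over input variables); the data and hyperparameters are illustrative:

```python
# Sketch: bagged kernel ridge regression by averaging bootstrap fits.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)
X_test = np.linspace(-3, 3, 100)[:, None]

preds = []
for _ in range(50):                            # 50 bootstrap resamples
    idx = rng.integers(0, len(X), size=len(X)) # sample with replacement
    model = KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1).fit(X[idx], y[idx])
    preds.append(model.predict(X_test))

y_bagged = np.mean(preds, axis=0)              # ensemble prediction
print("bagged prediction at x=0:", y_bagged[50])
```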

SVM-Guided Biplot of Observations and Variables

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods, v.20 no.6, pp.491-498, 2013
  • We consider support vector machines (SVM) to predict $Y$ with $p$ numerical variables $X_1, \ldots, X_p$. This paper aims to build a biplot of the $p$ explanatory variables in which the first dimension indicates the direction of the SVM classification and/or regression fit. We use the geometric scheme of kernel principal component analysis, adapted to map the $n$ observations onto a two-dimensional projection plane of which one axis is determined a priori by an SVM model.
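
A hedged linear sketch of such a biplot: take the direction of a linear SVM fit as the first axis, the leading principal component orthogonal to it as the second, and draw the variables' loadings as arrows. The paper builds the plane via kernel PCA geometry, which this linear special case only approximates; the dataset and arrow scaling are illustrative:

```python
# Sketch: SVM-guided biplot. Axis 1 is the linear SVM direction; axis 2 is
# the leading PC of the component of X orthogonal to that direction.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import scale
from sklearn.svm import LinearSVC
from sklearn.decomposition import PCA

data = load_breast_cancer()
X, y = scale(data.data[:, :5]), data.target       # first 5 variables, scaled

w = LinearSVC(C=1.0, max_iter=5000).fit(X, y).coef_.ravel()
a1 = w / np.linalg.norm(w)                        # axis 1: SVM direction
resid = X - np.outer(X @ a1, a1)                  # remove that component
a2 = PCA(n_components=1).fit(resid).components_.ravel()  # axis 2

Z = X @ np.column_stack([a1, a2])                 # observation coordinates
plt.scatter(Z[:, 0], Z[:, 1], c=y, s=8, alpha=0.5)
for j, name in enumerate(data.feature_names[:5]):
    plt.arrow(0, 0, 3 * a1[j], 3 * a2[j], head_width=0.05, color="red")
    plt.annotate(name, (3 * a1[j], 3 * a2[j]))
plt.xlabel("SVM axis"); plt.ylabel("orthogonal PC")
plt.show()
```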