• Title/Summary/Keyword: Regression


Combining Ridge Regression and Latent Variable Regression

  • Kim, Jong-Duk
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.1
    • /
    • pp.51-61
    • /
    • 2007
  • Ridge regression (RR), principal component regression (PCR), and partial least squares regression (PLS) are among the popular regression methods for collinear data. While RR adds a small quantity called the ridge constant to the diagonal of X'X to stabilize the matrix inversion and the regression coefficients, PCR and PLS use latent variables derived from the original variables to circumvent the collinearity problem. One problem of PCR and PLS is that they are very sensitive to overfitting. A new regression method is presented by combining RR with PCR and with PLS, respectively, in a unified manner. It is intended to provide better predictive ability and improved stability for regression models. A real-world dataset from NIR spectroscopy is used to investigate the performance of the newly developed regression method.

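The ridge mechanism the abstract describes, adding a constant to the diagonal of X'X before inversion, can be sketched in a few lines of NumPy. This is an illustrative toy on synthetic collinear data, not the paper's combined RR/PCR/PLS method, and the ridge constant here is arbitrary:

```python
import numpy as np

def ridge_coefficients(X, y, ridge_constant=1.0):
    """Ridge regression: add a constant to the diagonal of X'X
    to stabilize the inversion under collinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge_constant * np.eye(p), X.T @ y)

# Collinear toy data: the second column nearly duplicates the first.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)

beta = ridge_coefficients(X, y, ridge_constant=0.1)
```

Because the two columns are nearly identical, only the sum of the two coefficients is well determined; the ridge term keeps the solve stable where plain least squares would be ill-conditioned.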

THE CENSORED REGRESSION QUANTILE ESTIMATORS FOR NONLINEAR REGRESSION MODEL

  • Park, Seung-Hoe
    • Journal of applied mathematics & informatics
    • /
    • v.13 no.1_2
    • /
    • pp.373-384
    • /
    • 2003
  • In this paper, we consider the asymptotic properties of regression quantile estimators for the nonlinear regression model when the dependent variables are subject to censoring, and propose sufficient conditions that ensure consistency and asymptotic normality of the regression quantile estimators in the censored nonlinear regression model. We also derive the asymptotic relative efficiency of the censored regression model with respect to the ordinary regression model.

Wage Determinants Analysis by Quantile Regression Tree

  • Chang, Young-Jae
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.2
    • /
    • pp.293-301
    • /
    • 2012
  • Quantile regression, proposed by Koenker and Bassett (1978), is a statistical technique that estimates conditional quantiles. The advantage of using quantile regression is its robustness to large outliers compared to ordinary least squares (OLS) regression. A regression tree approach has been applied to OLS problems to fit flexible models. Loh (2002) proposed the GUIDE algorithm, which has negligible selection bias and relatively low computational cost. Quantile regression can be regarded as an analogue of OLS, so it can also be applied to the GUIDE regression tree method. Chaudhuri and Loh (2002) proposed a nonparametric quantile regression method that blends key features of piecewise polynomial quantile regression and tree-structured regression based on adaptive recursive partitioning. Lee and Lee (2006) investigated wage determinants in the Korean labor market using the Korean Labor and Income Panel Study (KLIPS). Following Lee and Lee, we fit three kinds of quantile regression tree models to the KLIPS data at the quantiles 0.05, 0.2, 0.5, 0.8, and 0.95. Among the three models, the multiple linear piecewise quantile regression model forms the shortest tree structure, while the piecewise constant quantile regression model generally has a deeper tree structure with more terminal nodes. Age, gender, marital status, and education appear to be determinants of the wage level throughout the quantiles; in addition, education experience appears to be an important determinant of the wage level in the highly paid group.
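The check (pinball) loss underlying Koenker-Bassett quantile regression can be illustrated with a small self-contained sketch (a toy on synthetic data, not the paper's GUIDE tree method): over constant predictors, the minimizer of the empirical check loss is the corresponding sample quantile.

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Koenker-Bassett check function: asymmetric absolute loss whose
    minimizer over constants is the tau-th sample quantile."""
    u = y - q_hat
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(1)
y = rng.normal(size=2001)
tau = 0.8

# Minimize the empirical check loss over a grid of candidate constants.
grid = np.linspace(-3.0, 3.0, 1201)
best = grid[np.argmin([pinball_loss(y, g, tau) for g in grid])]
```

Replacing the constant `q_hat` with a parametric function of covariates turns this into full quantile regression.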

More on directional regression

  • Kim, Kyongwon;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.553-562
    • /
    • 2021
  • Directional regression (DR; Li and Wang, 2007) is well known as an exhaustive sufficient dimension reduction method and performs well in complex regression models with linear and nonlinear trends. However, DR has not been widely extended to date, so we extend DR to accommodate multivariate regression and large p-small n regression. We propose three versions of DR for multivariate regression and discuss how DR is applicable to the latter regression case. Numerical studies confirm that DR is robust to the number of clusters and to the choice between hierarchical-clustering and pooled DR.

Machine learning-based regression analysis for estimating Cerchar abrasivity index

  • Kwak, No-Sang;Ko, Tae Young
    • Geomechanics and Engineering
    • /
    • v.29 no.3
    • /
    • pp.219-228
    • /
    • 2022
  • The most widely used parameter to represent rock abrasiveness is the Cerchar abrasivity index (CAI). The CAI value can be applied to predict wear in TBM cutters. It has been extensively demonstrated that the CAI is affected significantly by the cementation degree, strength, and amount of abrasive minerals, i.e., the quartz content or equivalent quartz content in rocks. The relationship between the properties of rocks and the CAI is investigated in this study. A database comprising 223 observations that includes rock types, uniaxial compressive strengths, Brazilian tensile strengths, equivalent quartz contents, quartz contents, brittleness indices, and CAIs is constructed. A linear model is developed by selecting independent variables while considering multicollinearity after performing multiple regression analyses. Machine learning-based regression methods, including support vector regression, regression trees, k-nearest neighbors regression, random forest regression, and artificial neural network regression, are used in addition to multiple linear regression. The results show that the random forest regression model yields the best prediction performance.
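As a minimal illustration of why the flexible learners in the paper's comparison can beat a single linear fit on a nonlinear target, here is a hypothetical k-nearest-neighbors regressor against ordinary least squares on a synthetic sine curve (pure NumPy; this uses made-up data, not the paper's CAI database or its tuned models):

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k=5):
    """k-nearest-neighbors regression: predict the mean response of
    the k closest training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=300)  # nonlinear target

Xtr, ytr, Xte, yte = X[:250], y[:250], X[250:], y[250:]
knn_pred = knn_regress(Xtr, ytr, Xte, k=7)
lin_beta = np.linalg.lstsq(np.column_stack([np.ones(250), Xtr]), ytr, rcond=None)[0]
lin_pred = lin_beta[0] + lin_beta[1] * Xte[:, 0]

knn_mse = np.mean((knn_pred - yte) ** 2)
lin_mse = np.mean((lin_pred - yte) ** 2)
```

On this sine-shaped response the local averaging of kNN tracks the curvature that a single global line cannot, so its test MSE is far lower.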

A Study on the Power Comparison between Logistic Regression and Offset Poisson Regression for Binary Data

  • Kim, Dae-Youb;Park, Heung-Sun
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.4
    • /
    • pp.537-546
    • /
    • 2012
  • In this paper, Poisson regression with an offset and logistic regression for analyzing binary data are compared with respect to power via simulations. The Poisson distribution can be used as an approximation of the binomial distribution when n is large and p is small; we investigate whether the same conditions hold for the power of significance tests under logistic regression and offset Poisson regression. The result is that for rare events with a large offset size, offset Poisson regression has power similar to that of logistic regression, and it retains acceptable power even at a moderate prevalence rate. However, with a small offset size (< 10), offset Poisson regression should be used with caution for both rare and common events. These results provide guidelines for users who want to apply offset Poisson regression models to binary data.
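The binomial-to-Poisson approximation the abstract invokes (n large, p small) is easy to check numerically. This is a quick sketch of the classical approximation only, with arbitrarily chosen n and p, not the paper's power simulations:

```python
import numpy as np
from math import comb, exp, factorial

n, p = 1000, 0.002          # large n, small p: rare-event setting
lam = n * p                 # matching Poisson mean

# Compare the first few probability masses of Binomial(n, p) and Poisson(lam).
binom = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6)]
poisson = [exp(-lam) * lam**k / factorial(k) for k in range(6)]

max_gap = max(abs(b - q) for b, q in zip(binom, poisson))
```

The pointwise gap is on the order of n*p^2 (Le Cam's bound), which is why the approximation, and hence offset Poisson regression, degrades as the prevalence p grows.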

Biplots of Multivariate Data Guided by Linear and/or Logistic Regression

  • Huh, Myung-Hoe;Lee, Yonggoo
    • Communications for Statistical Applications and Methods
    • /
    • v.20 no.2
    • /
    • pp.129-136
    • /
    • 2013
  • Linear regression is the most basic statistical model for exploring the relationship between a numerical response variable and several explanatory variables. Logistic regression takes over the role of linear regression for a dichotomous response variable. In this paper, we propose a biplot-type display of multivariate data guided by linear regression and/or logistic regression. The figures show the directional flow of the response variable as well as the interrelationship of the explanatory variables.
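For reference, the row and column markers of a classical rank-2 biplot come from a truncated SVD of the standardized data matrix; the regression-guided display the paper proposes modifies the directions, but the base construction looks like this (a sketch on synthetic data, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.normal(size=(40, 4))
X = (Z - Z.mean(axis=0)) / Z.std(axis=0)  # standardized data matrix

# Classical biplot: X is approximated by G @ H.T from the rank-2 SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
G = U[:, :2] * s[:2]        # row (observation) markers to plot as points
H = Vt[:2].T                # column (variable) markers to plot as arrows

approx = G @ H.T            # rank-2 reconstruction of X
```

By the Eckart-Young theorem, the squared reconstruction error equals the sum of the squared discarded singular values, which is what makes the 2-D display faithful when the first two singular values dominate.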

Fuzzy c-Regression Using Weighted LS-SVM

  • Hwang, Chang-Ha
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2005.10a
    • /
    • pp.161-169
    • /
    • 2005
  • In this paper, we propose a fuzzy c-regression model based on the weighted least squares support vector machine (LS-SVM), which can be used to detect outliers in the switching regression model while simultaneously yielding estimates of the outputs together with a fuzzy c-partition of the data. It can be applied to nonlinear regression in which the regression function has no explicit form. We illustrate the new algorithm with examples that show how it can be used to detect outliers and to fit mixed data to nonlinear regression models.

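The alternation at the heart of fuzzy c-regression (the Hathaway-Bezdek scheme, on which LS-SVM variants build) can be sketched for a two-line switching regression. This toy uses plain weighted least squares for each cluster, not the weighted LS-SVM of the paper:

```python
import numpy as np

def fuzzy_c_regression(x, y, c=2, m=2.0, iters=60, seed=0):
    """Hathaway-Bezdek fuzzy c-regression: alternate weighted least
    squares line fits per cluster with fuzzy membership updates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    U = rng.dirichlet(np.ones(c), size=n)       # memberships; rows sum to 1
    betas = []
    for _ in range(iters):
        betas = []
        for j in range(c):
            w = U[:, j] ** m                    # fuzzified weights
            betas.append(np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y)))
        # Membership update from squared residuals to each cluster's line.
        E = np.column_stack([(y - X @ b) ** 2 for b in betas]) + 1e-12
        U = (1.0 / E) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return betas, U

# Switching-regression toy data: two crossing lines with small noise.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=200)
labels = rng.integers(0, 2, size=200)
y = np.where(labels == 0, 2.0 * x, -2.0 * x) + 0.05 * rng.normal(size=200)

betas, U = fuzzy_c_regression(x, y)
slopes = sorted(b[1] for b in betas)
```

Points that end up with low membership in every cluster (large residuals to all fitted lines) are the candidates that an outlier-detecting variant, such as the paper's, would flag.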

ROBUST FUZZY LINEAR REGRESSION BASED ON M-ESTIMATORS

  • Sohn, Bang-Yong
    • Journal of applied mathematics & informatics
    • /
    • v.18 no.1_2
    • /
    • pp.591-601
    • /
    • 2005
  • The results of fuzzy linear regression are very sensitive to irregular data. When such points exist in a data set, a fuzzy linear regression model can be incorrectly interpreted. The purpose of this paper is to detect irregular data and to propose robust fuzzy linear regression based on M-estimators with triangular fuzzy regression coefficients for crisp input-output data. A numerical example shows that irregular data can be detected by using residuals based on M-estimators, and that the proposed robust fuzzy linear regression is very resistant to such points.
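M-estimation itself (here for ordinary, non-fuzzy linear regression) is commonly computed by iteratively reweighted least squares with Huber's weight function. A minimal sketch with the conventional tuning constant, unrelated to the paper's fuzzy-coefficient model:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber M-estimation via iteratively reweighted least squares:
    residuals beyond delta robust-scale units are downweighted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / u)  # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Straight line contaminated with a handful of gross outliers.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x + 0.05 * rng.normal(size=100)
y[:5] += 10.0                                     # five irregular points
X = np.column_stack([np.ones(100), x])

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
```

The residuals from the robust fit expose the irregular points, which is exactly the detection idea the abstract describes, while the OLS slope is dragged far from the true value of 2.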

Interval Regression Models Using Variable Selection

  • Choi, Seung-Hoe
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.1
    • /
    • pp.125-134
    • /
    • 2006
  • This study confirms that, in the interval regression models proposed by Tanaka et al. (1987), the regression model for one endpoint of the interval outputs is not identical to that for the other endpoint, and constructs interval regression models using the best regression model given by variable selection. This paper also suggests a method that minimizes the sum of the lengths of the symmetric differences between observed and predicted interval outputs in order to estimate the interval regression coefficients of the proposed model. Examples show that the interval regression model proposed in this study is more accurate than that introduced by Inuiguchi et al. (2001).