• Title, Summary, Keyword: regression


Combining Ridge Regression and Latent Variable Regression

  • Kim, Jong-Duk
    • Journal of the Korean Data and Information Science Society / v.18 no.1 / pp.51-61 / 2007
  • Ridge regression (RR), principal component regression (PCR) and partial least squares regression (PLS) are popular regression methods for collinear data. While RR adds a small quantity called the ridge constant to the diagonal of X'X to stabilize the matrix inversion and the regression coefficients, PCR and PLS use latent variables derived from the original variables to circumvent the collinearity problem. One weakness of PCR and PLS is that they are very sensitive to overfitting. A new regression method is presented that combines RR with PCR and with PLS, respectively, in a unified manner. It is intended to provide better predictive ability and improved stability for regression models. A real-world data set from NIR spectroscopy is used to investigate the performance of the newly developed method. (A minimal sketch of the three building blocks follows this entry.)

  • PDF
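
The following is a minimal sketch of the three building blocks the paper combines, fit with scikit-learn on synthetic collinear data; the NIR data set and the combined RR/PCR/PLS estimator itself are not reproduced, and the component counts and ridge constant are illustrative assumptions.

```python
# Ridge, PCR and PLS on synthetic collinear data (scikit-learn sketch).
import numpy as np
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p))   # rank-3 signal -> collinear columns
X += 0.01 * rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, y)                       # adds the ridge constant to X'X
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)   # latent variables via PCA
pls = PLSRegression(n_components=3).fit(X, y)            # latent variables guided by y

for name, model in [("RR", ridge), ("PCR", pcr), ("PLS", pls)]:
    print(name, "R^2 =", round(model.score(X, y), 3))
```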

Wage Determinants Analysis by Quantile Regression Tree

  • Chang, Young-Jae
    • Communications for Statistical Applications and Methods / v.19 no.2 / pp.293-301 / 2012
  • Quantile regression, proposed by Koenker and Bassett (1978), is a statistical technique that estimates conditional quantiles. Its advantage over ordinary least squares (OLS) regression is robustness to large outliers. Regression tree approaches have been applied to OLS problems to fit flexible models, and Loh (2002) proposed the GUIDE algorithm, which has negligible selection bias and relatively low computational cost. Quantile regression can be regarded as an analogue of OLS, so it can also be applied within the GUIDE regression tree method. Chaudhuri and Loh (2002) proposed a nonparametric quantile regression method that blends key features of piecewise polynomial quantile regression and tree-structured regression based on adaptive recursive partitioning. Lee and Lee (2006) investigated wage determinants in the Korean labor market using the Korean Labor and Income Panel Study (KLIPS). Following Lee and Lee, we fit three kinds of quantile regression tree models to the KLIPS data at the quantiles 0.05, 0.2, 0.5, 0.8, and 0.95. Among the three models, the multiple linear piecewise quantile regression model forms the shortest tree, while the piecewise constant quantile regression model generally has a deeper tree with more terminal nodes. Age, gender, marital status, and education appear to determine the wage level throughout the quantiles; in addition, education experience appears to be an important determinant of the wage level in the highly paid group.
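
As a minimal illustration of the underlying tool, the sketch below fits plain linear quantile regression at the paper's five quantiles with statsmodels on synthetic data; the GUIDE tree construction is not reproduced, and the variable names are illustrative rather than the actual KLIPS variables.

```python
# Linear quantile regression at several quantiles (statsmodels); the tree step is omitted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(20, 60, n),
    "educ": rng.integers(9, 20, n),
})
# Heteroscedastic synthetic "wage": the spread grows with education.
df["log_wage"] = (1.0 + 0.02 * df["age"] + 0.08 * df["educ"]
                  + rng.normal(scale=0.1 + 0.02 * df["educ"].to_numpy(), size=n))

for q in (0.05, 0.2, 0.5, 0.8, 0.95):
    fit = smf.quantreg("log_wage ~ age + educ", df).fit(q=q)
    print(f"q={q}: educ coefficient = {fit.params['educ']:.3f}")
```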

Switching Regression Analysis via Fuzzy LS-SVM

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.17 no.2 / pp.609-617 / 2006
  • A new fuzzy c-regression algorithm for switching regression analysis is presented, which combines fuzzy c-means clustering and the least squares support vector machine (LS-SVM). The algorithm can detect outliers in switching regression models while simultaneously yielding estimates of the associated parameters together with a fuzzy c-partition of the data. It can be employed for model-free nonlinear regression, which does not assume an underlying form of the regression function. We illustrate the new approach with numerical examples that show how it can be used to fit switching regression models to almost all types of mixed data. (A plain LS-SVM regression sketch follows this entry.)

  • PDF
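
The fuzzy clustering step is specific to the paper, but the LS-SVM regression component can be sketched on its own: with an RBF kernel K and regularization parameter gamma, LS-SVM solves a single linear system for the dual coefficients alpha and bias b. This is a plain numpy sketch of that component, not the authors' fuzzy c-regression algorithm; the kernel width and gamma are arbitrary.

```python
# Plain LS-SVM regression with an RBF kernel (numpy sketch).
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # Dual system: [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
    return rbf_kernel(Xnew, X, sigma) @ alpha + b

X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=60)
b, alpha = lssvm_fit(X, y)
print(np.round(lssvm_predict(X[:3], X, b, alpha), 3))
```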

A Study on the Power Comparison between Logistic Regression and Offset Poisson Regression for Binary Data

  • Kim, Dae-Youb; Park, Heung-Sun
    • Communications for Statistical Applications and Methods / v.19 no.4 / pp.537-546 / 2012
  • In this paper, Poisson regression with an offset and logistic regression are compared, via simulation, with respect to their power for analyzing binary data. The Poisson distribution can be used as an approximation of the binomial distribution when n is large and p is small; we investigate whether the same conditions hold for the power of significance tests under logistic regression and offset Poisson regression. The result is that, for rare events with a large offset size, offset Poisson regression has power similar to logistic regression, and it retains acceptable power even at a moderate prevalence rate. With a small offset size (< 10), however, offset Poisson regression should be used with caution for both rare and common events. These results provide guidelines for users who want to apply offset Poisson regression models to binary data.
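
A minimal sketch of the two competing models on grouped binary data with statsmodels: the logistic model uses the binomial family on (events, non-events), while the offset Poisson model regresses event counts on the covariate with log(group size) as the offset. The simulated data below are illustrative only and are not the paper's simulation design.

```python
# Logistic regression vs. Poisson regression with offset on grouped binary data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
m = 200                                   # number of groups
n_i = rng.integers(20, 200, m)            # group sizes (offset)
x = rng.normal(size=m)
p = 1 / (1 + np.exp(-(-3.0 + 0.5 * x)))   # rare events
events = rng.binomial(n_i, p)

X = sm.add_constant(x)

# Logistic regression on (events, non-events) per group.
logit = sm.GLM(np.column_stack([events, n_i - events]), X,
               family=sm.families.Binomial()).fit()

# Poisson regression on counts with log(group size) as offset.
pois = sm.GLM(events, X, family=sm.families.Poisson(),
              offset=np.log(n_i)).fit()

print("logistic slope:", round(logit.params[1], 3),
      " offset-Poisson slope:", round(pois.params[1], 3))
```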

Simplicial Regression Depth with Censored and Truncated Data

  • Park, Jinho
    • Communications for Statistical Applications and Methods / v.10 no.1 / pp.167-175 / 2003
  • In this paper we develop a robust procedure to estimate regression coefficients in a linear model with censored and truncated data, based on simplicial regression depth. The simplicial depth of a point is defined as the proportion of data simplices containing it, and this notion can be extended to regression problems with censored and truncated data. Any line can be assigned a depth, and the deepest regression line is the one with maximum simplicial regression depth. We show how the proposed regression performs by analyzing AIDS incubation data.
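
For complete (uncensored) bivariate data, the simplicial depth of a point is just the proportion of triangles formed by data points that contain it. The brute-force sketch below computes that quantity; the regression-depth maximization and the censoring/truncation extension of the paper are not reproduced.

```python
# Simplicial depth of a point in the plane: proportion of data triangles containing it.
import numpy as np
from itertools import combinations

def cross2(u, v):
    # z-component of the 2-D cross product
    return u[0] * v[1] - u[1] * v[0]

def contains(tri, p):
    # Point-in-triangle test via consistent signs of cross products.
    a, b, c = tri
    s1 = cross2(b - a, p - a)
    s2 = cross2(c - b, p - b)
    s3 = cross2(a - c, p - c)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def simplicial_depth(point, data):
    triangles = list(combinations(range(len(data)), 3))
    hits = sum(contains(data[list(idx)], point) for idx in triangles)
    return hits / len(triangles)

rng = np.random.default_rng(4)
data = rng.normal(size=(30, 2))
print(round(simplicial_depth(np.zeros(2), data), 3))   # a near-center point has high depth
```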

Interval Regression Models Using Variable Selection

  • Choi, Seung-Hoe
    • Communications for Statistical Applications and Methods / v.13 no.1 / pp.125-134 / 2006
  • This study confirms that, in the interval regression models proposed by Tanaka et al. (1987), the regression model for one endpoint of the interval outputs is not identical to that for the other endpoint, and it constructs interval regression models using the best regression model obtained by variable selection. The paper also suggests estimating the interval regression coefficients of the proposed model by minimizing the sum of the lengths of the symmetric differences between observed and predicted interval outputs. Examples show that the interval regression model proposed in this study is more accurate than that introduced by Inuiguchi et al. (2001).
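
As a rough illustration of interval-output regression only (not the authors' symmetric-difference criterion or Tanaka's possibilistic formulation), one can fit separate least-squares models to the lower and upper endpoints and predict an interval for a new input:

```python
# Naive interval regression: separate least-squares fits for the two endpoints.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 100
X = rng.uniform(0, 10, size=(n, 2))
center = 1.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n)
half_width = 0.5 + 0.1 * X[:, 0] + rng.uniform(0, 0.1, n)
y_low, y_high = center - half_width, center + half_width

lo = LinearRegression().fit(X, y_low)     # model for the lower endpoint
hi = LinearRegression().fit(X, y_high)    # model for the upper endpoint

x_new = np.array([[5.0, 2.0]])
print("predicted interval:", (round(lo.predict(x_new)[0], 2),
                              round(hi.predict(x_new)[0], 2)))
```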

A Study on the Several Robust Regression Estimators

  • Kim, Jee-Yun; Roh, Kyung-Mi; Hwang, Jin-Soo
    • Journal of the Korean Data and Information Science Society / v.15 no.2 / pp.307-316 / 2004
  • Principal component regression (PCR) and partial least squares regression (PLSR) are the two most popular regression techniques in chemometrics, where the number of regressor variables typically greatly exceeds the number of observations, so the number of regressors must be reduced to avoid the identifiability problem. In this paper we compare PCR and PLSR combined with various robust regression methods, including regression depth estimation, and compare the efficiency, goodness of fit, and robustness of the estimators under several contamination schemes. (A sketch combining PCR with a robust regression step follows this entry.)

  • PDF
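
The sketch below combines PCR with an ordinary and a robust regression step on contaminated data. The regression-depth estimator studied in the paper is not available in scikit-learn, so a Huber M-estimator stands in as the robust step; the dimensions and contamination scheme are illustrative assumptions.

```python
# PCR with an ordinary vs. a robust regression step on contaminated responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n, p = 40, 100                      # far more regressors than observations
X = rng.normal(size=(n, 5)) @ rng.normal(size=(5, p))
y = X[:, 0] + 0.1 * rng.normal(size=n)
y[:4] += 10.0                       # contaminate a few responses

pcr_ols = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, y)
pcr_huber = make_pipeline(PCA(n_components=5), HuberRegressor()).fit(X, y)

clean = slice(4, None)              # evaluate on the uncontaminated part
for name, m in [("PCR+OLS", pcr_ols), ("PCR+Huber", pcr_huber)]:
    print(name, "R^2 on clean part:", round(m.score(X[clean], y[clean]), 3))
```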

Fuzzy c-Regression Using Weighted LS-SVM

  • Hwang, Chang-Ha
    • Proceedings of the Korean Data and Information Science Society Conference / pp.161-169 / 2005
  • In this paper we propose a fuzzy c-regression model based on the weighted least squares support vector machine (LS-SVM), which can detect outliers in switching regression models while simultaneously yielding estimates of the outputs together with a fuzzy c-partition of the data. It can be applied to nonlinear regression in which the regression function has no explicit form. We illustrate the new algorithm with examples that indicate how it can be used to detect outliers and fit mixed data to nonlinear regression models. (A bare-bones fuzzy c-regression loop follows this entry.)

  • PDF
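
The following is a bare-bones fuzzy c-regression loop in which plain weighted least squares stands in for the weighted LS-SVM of the paper: memberships are updated from squared regression residuals, and each cluster's line is refit with the memberships as weights. The two-line switching data, the fuzzifier m = 2, and the iteration count are illustrative assumptions.

```python
# Fuzzy c-regression with plain weighted least squares (two switching lines).
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(-2, 2, n)
labels = rng.integers(0, 2, n)
y = np.where(labels == 0, 1 + 2 * x, 3 - 1.5 * x) + 0.1 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

c, m = 2, 2.0                                   # number of clusters, fuzzifier
U = rng.dirichlet(np.ones(c), size=n)           # initial memberships (n x c); a few
for _ in range(50):                             # random restarts may help in practice
    # Weighted least-squares fit for each cluster.
    coefs = []
    for i in range(c):
        w = U[:, i] ** m
        W = X * w[:, None]
        coefs.append(np.linalg.solve(X.T @ W, X.T @ (w * y)))
    coefs = np.array(coefs)                     # c x 2 (intercept, slope)
    # Update memberships from squared residuals of each cluster's model.
    D = np.array([(y - X @ b) ** 2 for b in coefs]).T + 1e-12   # n x c
    U = D ** (-1 / (m - 1))
    U /= U.sum(axis=1, keepdims=True)

print(np.round(coefs, 2))   # roughly the two generating lines, in some order
```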

Biplots of Multivariate Data Guided by Linear and/or Logistic Regression

  • Huh, Myung-Hoe; Lee, Yonggoo
    • Communications for Statistical Applications and Methods / v.20 no.2 / pp.129-136 / 2013
  • Linear regression is the most basic statistical model for exploring the relationship between a numerical response variable and several explanatory variables. Logistic regression takes over this role when the response variable is dichotomous. In this paper, we propose a biplot-type display of multivariate data guided by linear regression and/or logistic regression. The figures show the directional flow of the response variable as well as the interrelationships among the explanatory variables.
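
The sketch below is not the biplot construction proposed by Huh and Lee; it is only a conventional PCA biplot with the observations shaded by the fitted values of a linear regression, to suggest how a response's directional flow can be overlaid on the variable loadings. All names and scaling factors are illustrative.

```python
# A PCA biplot with points shaded by the fitted values of a linear regression.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.0, 0.8]) + 0.3 * rng.normal(size=n)

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)                        # observation scores
L = pca.components_.T                       # variable loadings (p x 2)
yhat = LinearRegression().fit(X, y).predict(X)

plt.scatter(Z[:, 0], Z[:, 1], c=yhat, cmap="viridis", s=20)
for j in range(p):                          # loading arrows for the explanatory variables
    plt.arrow(0, 0, 3 * L[j, 0], 3 * L[j, 1], head_width=0.05, color="red")
    plt.text(3.2 * L[j, 0], 3.2 * L[j, 1], f"x{j + 1}")
plt.colorbar(label="fitted response")
plt.xlabel("PC1"); plt.ylabel("PC2")
plt.show()
```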

Improving Polynomial Regression Using Principal Components Regression With the Example of the Numerical Inversion of Probability Generating Function

  • Yang, Won Seok; Park, Hyun-Min
    • The Journal of the Korea Contents Association / v.15 no.1 / pp.475-481 / 2015
  • We use polynomial regression instead of linear regression when there is a nonlinear relation between the dependent variable and the independent variables. The performance of polynomial regression, however, may deteriorate because of the correlation introduced by the power terms of the independent variables. We present a polynomial regression model for the numerical inversion of a probability generating function (PGF) and show that this correlation degrades the estimation of the coefficients. We then apply principal components regression to the polynomial regression model and show that it dramatically improves the parameter estimation.
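
A toy illustration of the construction (not a reproduction of the paper's PGF-inversion experiment): the power terms of a degree-8 polynomial are highly correlated, and inserting standardization plus PCA before the least-squares step turns the model into a simple principal components regression. The degree, component count, and target function are assumptions for the sketch.

```python
# Polynomial regression vs. polynomial regression followed by PCA (principal
# components regression) on the power terms.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
x = rng.uniform(0.5, 1.5, size=(200, 1))
y = np.sin(4 * x[:, 0]) + 0.05 * rng.normal(size=200)

poly = make_pipeline(PolynomialFeatures(degree=8, include_bias=False),
                     LinearRegression())
poly_pcr = make_pipeline(PolynomialFeatures(degree=8, include_bias=False),
                         StandardScaler(), PCA(n_components=4),
                         LinearRegression())

for name, model in [("poly", poly), ("poly+PCR", poly_pcr)]:
    r2 = cross_val_score(model, x, y, cv=5).mean()   # cross-validated R^2
    print(name, "CV R^2 =", round(r2, 3))
```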