Title/Summary/Keyword: Nadaraya Watson estimator

Shifted Nadaraya Watson Estimator

  • Chung, Sung-S.
    • Communications for Statistical Applications and Methods / v.4 no.3 / pp.881-890 / 1997
  • The local linear estimator usually has more attractive properties than the Nadaraya-Watson estimator, but it performs poorly where data are sparse. Müller and Song proposed the shifted Nadaraya-Watson estimator, which handles data sparsity well. Through a simulation study, we show that the shifted Nadaraya-Watson estimator performs well not only in sparse regions but also in dense regions, and we suggest a boundary treatment for it.
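
For reference, a minimal sketch of the plain Nadaraya-Watson estimator that the shifted variant builds on, assuming NumPy arrays and a Gaussian kernel (the shift of Müller and Song, which recenters the kernel weights near sparse regions, is not reproduced here):

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, h):
    """Plain Nadaraya-Watson estimate of m(x) = E[Y | X = x]:
    a locally weighted average with Gaussian kernel and bandwidth h."""
    u = (x_eval[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)          # Gaussian kernel; constants cancel in the ratio
    return (w @ y) / w.sum(axis=1)     # weighted mean of the responses at each x_eval

# Example: m_hat = nadaraya_watson(np.linspace(0, 1, 50), x, y, h=0.1)
```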

Stationary Bootstrapping for the Nonparametric AR-ARCH Model

  • Shin, Dong Wan; Hwang, Eunju
    • Communications for Statistical Applications and Methods / v.22 no.5 / pp.463-473 / 2015
  • We consider a nonparametric AR(1) model with nonparametric ARCH(1) errors. In order to estimate the unknown function of the ARCH part, we apply the stationary bootstrap procedure, which is characterized by bootstrap blocks of geometrically distributed random length and has the advantage of capturing the dependence structure of the original data. The proposed method is composed of four steps: the first step estimates the AR part by typical kernel smoothing to calculate AR residuals, the second step estimates the ARCH part via the Nadaraya-Watson kernel from the AR residuals to compute ARCH residuals, the third step applies the stationary bootstrap procedure to the ARCH residuals, and the fourth step defines the stationary bootstrapped Nadaraya-Watson estimator for the ARCH function with the stationary bootstrapped residuals. We prove the asymptotic validity of the stationary bootstrap estimator for the unknown ARCH function by showing that it has the same limiting distribution as the Nadaraya-Watson estimator of the second step.
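
As an illustration of the resampling step (step 3), a hedged sketch of one stationary-bootstrap draw, assuming blocks with geometrically distributed lengths of mean 1/p that wrap around the series circularly:

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_bootstrap(resid, p):
    """One stationary-bootstrap resample of a residual series: blocks start
    at uniformly random positions, have geometrically distributed lengths
    with mean 1/p, and wrap around the end of the series circularly."""
    n = len(resid)
    out = np.empty(n)
    i = 0
    while i < n:
        start = rng.integers(n)                     # uniform random block start
        length = min(int(rng.geometric(p)), n - i)  # geometric block length, truncated at the end
        idx = np.arange(start, start + length)
        out[i:i + length] = np.take(resid, idx, mode="wrap")  # circular indexing
        i += length
    return out
```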

A Study on Bandwidth Selection Based on ASE for Nonparametric Regression Estimator

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society / v.30 no.1 / pp.21-30 / 2001
  • Suppose we observe data $(X_1, Y_1), \ldots, (X_n, Y_n)$ and use the Nadaraya-Watson regression estimator to estimate $m(x) = E(Y \mid X = x)$. In this article the bandwidth selection problem for the Nadaraya-Watson regression estimator is investigated. In particular, a cross-validation method based on the average squared error (ASE) is considered. Theoretical results include a central limit theorem that quantifies the convergence rate of the bandwidth selector.
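
A minimal sketch of leave-one-out cross-validation for the Nadaraya-Watson bandwidth, assuming NumPy arrays; this is the generic CV criterion, not the specific ASE-based selector analyzed in the paper:

```python
import numpy as np

def cv_bandwidth(x, y, grid):
    """Pick the bandwidth on `grid` that minimizes the leave-one-out
    cross-validation score for the Nadaraya-Watson estimator."""
    best_h, best_err = None, np.inf
    for h in grid:
        u = (x[:, None] - x[None, :]) / h
        w = np.exp(-0.5 * u ** 2)
        np.fill_diagonal(w, 0.0)          # leave each observation out of its own fit
        m_hat = (w @ y) / w.sum(axis=1)   # leave-one-out NW predictions
        err = np.mean((y - m_hat) ** 2)   # average squared prediction error
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```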

Nonparametric Estimation of Univariate Binary Regression Function

  • Jung, Shin Ae; Kang, Kee-Hoon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.236-241 / 2022
  • We consider methods of estimating a binary regression function using nonparametric kernel estimation when there is only one covariate. For this, the Nadaraya-Watson estimation method with single and double bandwidths is used. To choose a proper amount of smoothing, the cross-validation and plug-in methods are compared. In the real data analysis, the German credit data and heart disease data serve as case studies. We examine whether nonparametric estimation of the binary regression function succeeds with the smoothing parameter chosen by each of the two approaches, and compare their performance.
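
As a rough illustration of the plug-in idea compared here, a normal-reference rule of thumb for the bandwidth; this density-estimation heuristic is a simple stand-in, not the paper's actual plug-in selector:

```python
import numpy as np

def rule_of_thumb_h(x):
    """Normal-reference rule-of-thumb bandwidth:
    h = 1.06 * sd(x) * n^(-1/5)."""
    return 1.06 * np.std(x) * len(x) ** (-1 / 5)

# With 0/1 responses, the Nadaraya-Watson weighted average directly
# estimates p(x) = P(Y = 1 | X = x) and automatically stays in [0, 1].
```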

Discontinuous log-variance function estimation with log-residuals adjusted by an estimator of jump size

  • Hong, Hyeseon; Huh, Jib
    • The Korean Journal of Applied Statistics / v.30 no.2 / pp.259-269 / 2017
  • Because the variance is nonnegative, most nonparametric estimators of a discontinuous variance function have used Nadaraya-Watson estimation with squared residuals. Modifying Chen et al. (2009) and Yu and Jones (2004), Huh (2014, 2016a) proposed estimators of the log-variance function, rather than the variance function itself, using the local linear estimator, which has no boundary effect. Huh (2016b) estimated the variance function from squared residuals adjusted by the estimated jump size of the discontinuous variance function. In this paper, we propose an estimator of the discontinuous log-variance function using the local linear estimator with log-squared residuals adjusted by the estimated jump size of the log-variance function, as in Huh (2016b). Numerical work demonstrates the performance of the proposed method on simulated and real examples.
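
A hedged sketch of the log-variance route these papers take: fit a local linear line to the log-squared residuals and exponentiate the intercept, so the resulting variance estimate is automatically positive. The jump-size adjustment of the residuals is assumed to have been applied beforehand:

```python
import numpy as np

def local_linear(x_eval, x, z, h):
    """Local linear fit of z on x with a Gaussian kernel: at each
    evaluation point solve a weighted least-squares line and return
    its intercept."""
    out = np.empty(len(x_eval))
    for j, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        X = np.column_stack([np.ones_like(x), x - x0])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)
        out[j] = beta[0]                  # intercept = fit at x0
    return out

# Log-variance estimation from (jump-adjusted) residuals r:
# sigma2_hat = np.exp(local_linear(x_eval, x, np.log(r ** 2), h))
```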

Comparison study on kernel type estimators of discontinuous log-variance

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society / v.25 no.1 / pp.87-95 / 2014
  • In the regression model, Kang and Huh (2006) studied estimation of a discontinuous variance function using the Nadaraya-Watson estimator with squared residuals. Huh (2013) proposed a local linear estimator of the log-variance function, which can take any real value, based on the kernel-weighted local likelihood of the $\chi^2$-distribution. Chen et al. (2009) estimated a continuous variance function by a local linear fit to the log-squared residuals. In this paper, we estimate the discontinuous log-variance function itself, or its derivative, using the estimator of Chen et al. (2009). Numerical work investigates the performance of the estimators with simulated examples.
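
The identity behind the log-squared-residual estimators compared here, assuming Gaussian errors $\varepsilon = \sigma(x) Z$ with $Z \sim N(0,1)$:

$$\log \varepsilon^2 = \log \sigma^2(x) + \log Z^2, \qquad Z^2 \sim \chi^2_1, \qquad E[\log Z^2] = -(\gamma + \log 2) \approx -1.2704,$$

where $\gamma$ is the Euler-Mascheroni constant. The local fit to $\log \varepsilon^2$ therefore targets $\log \sigma^2(x)$ up to a known additive constant, and exponentiating the bias-corrected fit recovers $\sigma^2(x)$.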

Study on semi-supervised local constant regression estimation

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.23 no.3 / pp.579-585 / 2012
  • Many different semi-supervised learning algorithms have been proposed for use with unlabeled data. However, most of them focus on classification problems. In this paper we propose a semi-supervised regression algorithm called the semi-supervised local constant estimator (SSLCE), based on the local constant estimator (LCE), and reveal the asymptotic properties of SSLCE. We also show that the SSLCE has a faster convergence rate than that of the LCE when a well-chosen weighting factor is employed. Our experiment with synthetic data shows that the SSLCE can improve performance with unlabeled data, and we recommend its use with a proper amount of unlabeled data.
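
A purely hypothetical sketch of the semi-supervised local constant idea: here a pilot NW fit on the labeled data supplies pseudo-responses at the unlabeled covariates, which then enter a second local constant fit down-weighted by a factor `alpha`. This is an illustrative construction, not the paper's actual SSLCE weighting:

```python
import numpy as np

def nw(xe, x, y, h, w=None):
    """Gaussian-kernel local constant (NW) fit, optionally with per-point weights."""
    k = np.exp(-0.5 * ((xe[:, None] - x[None, :]) / h) ** 2)
    if w is not None:
        k = k * w
    return (k @ y) / k.sum(axis=1)

def sslce_sketch(x_eval, x_lab, y_lab, x_unlab, h, alpha):
    """Hypothetical semi-supervised local constant fit pooling labeled data
    with pilot-estimated pseudo-responses on the unlabeled covariates."""
    pseudo = nw(x_unlab, x_lab, y_lab, h)                # pilot fit as pseudo-labels
    x_all = np.concatenate([x_lab, x_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    w_all = np.concatenate([np.ones(len(x_lab)),
                            alpha * np.ones(len(x_unlab))])  # down-weight unlabeled part
    return nw(x_eval, x_all, y_all, h, w_all)
```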

Nonparametric estimation of the discontinuous variance function using adjusted residuals

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society / v.27 no.1 / pp.111-120 / 2016
  • Usually, the discontinuous variance function has been estimated nonparametrically using a kernel-type estimator with the data split at an estimated location of the change point. Kang et al. (2000) proposed a Gasser-Müller type kernel estimator of a discontinuous regression function using response observations adjusted by the estimated jump size at the change point, as in Müller (1992). The adjusted observations can be regarded as a random sample from a continuous regression function. In this paper, we estimate the variance function using a Nadaraya-Watson kernel-type estimator applied to squared residuals adjusted by the estimated jump size at the estimated change point of the discontinuous variance function, as Kang et al. (2000) did. The rate of convergence of the integrated squared error of the proposed variance estimator is derived, and numerical work demonstrates the improved performance of the method over the existing one with simulated examples.
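
A minimal sketch of the adjusted-residual idea described above, assuming the change-point location `tau_hat` and jump size `jump_hat` are supplied by a separate estimation step (both names are illustrative):

```python
import numpy as np

def adjusted_residual_variance(x_eval, x, r2, tau_hat, jump_hat, h):
    """Subtract the estimated jump `jump_hat` from the squared residuals r2
    beyond the estimated change point `tau_hat`, so the adjusted series
    behaves like a sample from a continuous variance function, then smooth
    it with the Nadaraya-Watson estimator."""
    r2_adj = np.where(x > tau_hat, r2 - jump_hat, r2)  # remove the jump
    u = (x_eval[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)                          # Gaussian kernel weights
    return (w @ r2_adj) / w.sum(axis=1)
```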

Smoothing parameter selection in semi-supervised learning

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.993-1000 / 2016
  • Semi-supervised learning makes it easy to use unlabeled data in supervised learning tasks such as classification. Applying semi-supervised learning to regression analysis, we propose two methods for better regression function estimation. The proposed methods allow different marginal densities of the independent variables and different smoothing parameters for the unlabeled and labeled data. We show that an overfitted (undersmoothed) pilot estimator should be used to achieve the fastest convergence rate, and that unlabeled data may help improve the convergence rate when the smoothing parameters are well estimated. We also find the conditions on the smoothing parameters that achieve the optimal convergence rate.