• Title/Summary/Keyword: inverse probability weighting


Estimating causal effect of multi-valued treatment from observational survival data

  • Kim, Bongseong; Kim, Ji-Hyun
    • Communications for Statistical Applications and Methods, v.27 no.6, pp.675-688, 2020
  • In survival analysis of observational data, the inverse probability weighting method and the Cox proportional hazards model are widely used to estimate the causal effect of a multi-valued treatment. In this paper, two kinds of weights for the inverse probability weighting method are examined. We explain why the stabilized weight is more appropriate when inverse probability weighting is applied with the generalized propensity score. We also emphasize that the marginal hazard ratio and the conditional hazard ratio should be distinguished when the hazard ratio under the Cox proportional hazards model is taken as the treatment effect. A simulation study based on real data is conducted to provide concrete numerical evidence.
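
  As a rough illustration of the stabilized weights discussed in this abstract, the following Python sketch builds both unstabilized and stabilized inverse probability weights from a generalized propensity score for a three-level treatment. The toy data generation and variable names are hypothetical, not taken from the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # toy observational data: a 3-level treatment whose assignment depends on two covariates
      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 2))
      logits = np.column_stack([np.zeros(n), 0.8 * X[:, 0], -0.5 * X[:, 1]])
      probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
      A = np.array([rng.choice(3, p=p) for p in probs])

      # generalized propensity score e(a | x) = P(A = a | X = x), via multinomial logistic regression
      gps_model = LogisticRegression(max_iter=1000).fit(X, A)
      gps = gps_model.predict_proba(X)[np.arange(n), A]        # e(A_i | X_i)

      # unstabilized weight 1 / e(A_i | X_i) versus stabilized weight P(A = A_i) / e(A_i | X_i)
      w_unstab = 1.0 / gps
      marginal = np.bincount(A, minlength=3) / n
      w_stab = marginal[A] / gps

      print("unstabilized: mean %.2f, max %.1f" % (w_unstab.mean(), w_unstab.max()))
      print("stabilized:   mean %.2f, max %.1f" % (w_stab.mean(), w_stab.max()))

  The stabilized weights have mean close to one and a much smaller maximum, which is the property the abstract points to when arguing for the stabilized version.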

Analysis of Nested Case-Control Study Designs: Revisiting the Inverse Probability Weighting Method

  • Kim, Ryung S.
    • Communications for Statistical Applications and Methods, v.20 no.6, pp.455-466, 2013
  • In nested case-control studies, the most common way to make inference under a proportional hazards model is the conditional logistic approach of Thomas (1977). Inclusion probability methods are more efficient than Thomas' conditional logistic approach; however, the epidemiology research community has not accepted them as a replacement for Thomas' method. This paper promotes the inverse probability weighting method originally proposed by Samuelsen (1997), in combination with an approximate jackknife standard error that can be easily computed using existing software. Simulation studies demonstrate that this approach yields valid type I error rates and greater power than the conditional logistic approach in nested case-control designs across various sample sizes and magnitudes of the hazard ratio. The method is also generalized to incorporate additional matching and the stratified Cox model. The proposed method is illustrated with data from a cohort of children with Wilms tumor to study the association between histological signatures and relapse.
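
  The weights promoted in this abstract are the inverse of each subject's probability of ever being included in the nested case-control sample. A minimal Python sketch of Samuelsen-style inclusion probabilities is given below; the function name and the simplified sampling scheme (no ties, no additional matching) are assumptions for illustration, not code from the paper.

      import numpy as np

      def samuelsen_inclusion_weights(time, is_case, m):
          # time: event/censoring times for the full cohort (numpy array)
          # is_case: boolean array marking the cases; m: controls sampled per case
          n = len(time)
          p = np.ones(n)                                   # cases are included with probability 1
          case_times = np.sort(time[is_case])
          for i in np.where(~is_case)[0]:
              t_j = case_times[case_times <= time[i]]      # case times at which subject i is at risk
              n_risk = np.array([(time >= t).sum() for t in t_j])
              # probability of never being drawn as one of the m controls at any of those times
              p[i] = 1.0 - np.prod(1.0 - m / (n_risk - 1.0))
          with np.errstate(divide="ignore"):               # subjects that can never be sampled get p = 0
              return 1.0 / p                               # weight = 1 / inclusion probability

  Only the weights of subjects who actually enter the nested case-control sample are used in the weighted Cox partial likelihood.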

A comparison study of inverse censoring probability weighting in censored regression (중도절단 회귀모형에서 역절단확률가중 방법 간의 비교연구)

  • Shin, Jungmin; Kim, Hyungwoo; Shin, Seung Jun
    • The Korean Journal of Applied Statistics, v.34 no.6, pp.957-968, 2021
  • Inverse censoring probability weighting (ICPW) is a popular technique in survival data analysis. In applications of the ICPW technique, such as censored regression, it is crucial to estimate the censoring probability accurately. A simulation study is undertaken in this article to see how the censoring probability estimate influences model performance in censored regression under the ICPW scheme. We compare three censoring probability estimators: the Kaplan-Meier (KM) estimator, the Cox proportional hazards model estimator, and the local KM estimator. For the local KM estimator, we propose reducing the predictor dimension to avoid the curse of dimensionality and consider two popular dimension reduction tools: principal component analysis and sliced inverse regression. We find that the Cox proportional hazards model estimator performs best as a censoring probability estimator in both mean and median censored regression.
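
  The sketch below illustrates the ICPW idea with the simplest of the three censoring-probability estimators compared in the paper, a plain Kaplan-Meier estimate of the censoring distribution. The simulated accelerated-failure-time data and variable names are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 500
      x = rng.normal(size=n)
      t_event = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.5, size=n))   # true model: log T = 1 + 0.5 x + error
      t_cens = rng.exponential(scale=8.0, size=n)
      y = np.minimum(t_event, t_cens)
      delta = (t_event <= t_cens).astype(float)                         # 1 = event observed

      # reverse Kaplan-Meier estimate of the censoring survival function G(t-)
      order = np.argsort(y)
      y_s, cens_s = y[order], 1.0 - delta[order]
      at_risk = n - np.arange(n)
      G_steps = np.cumprod(1.0 - cens_s / at_risk)
      G_minus = np.concatenate(([1.0], G_steps))[np.searchsorted(y_s, y, side="left")]

      # ICPW weights delta_i / G(Y_i-); censored observations get weight 0
      w = delta / G_minus

      # weighted least squares for the mean regression of log(Y) on x
      X = np.column_stack([np.ones(n), x])
      beta = np.linalg.lstsq(X * np.sqrt(w)[:, None], np.log(y) * np.sqrt(w), rcond=None)[0]
      print("ICPW estimate of (intercept, slope):", beta)               # should be near the true (1.0, 0.5)

  Replacing the reverse Kaplan-Meier step with a covariate-dependent estimator, such as a Cox model or a local KM estimator after dimension reduction, gives the alternatives compared in the paper.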

A simulation study for various propensity score weighting methods in clinical problematic situations (임상에서 발생할 수 있는 문제 상황에서의 성향 점수 가중치 방법에 대한 비교 모의실험 연구)

  • Siseong Jeong; Eun Jeong Min
    • The Korean Journal of Applied Statistics, v.36 no.5, pp.381-397, 2023
  • The most representative design used in clinical trials is randomization, which allows the treatment effect to be estimated accurately. However, comparisons between the treatment and control groups in an observational study without randomization are biased by various unadjusted differences, such as differences in patient characteristics. Propensity score weighting is a widely used method to address this problem, minimizing bias by adjusting for such confounders when assessing treatment effects. Inverse probability weighting, the most popular method, assigns each subject a weight proportional to the inverse of the conditional probability of receiving the observed treatment given the observed covariates. However, this method often suffers from extreme propensity scores, resulting in biased estimates and excessive variance. Several alternative methods, including trimming, overlap weights, and matching weights, have been proposed to mitigate these issues. In this paper, we conduct a simulation study to compare the performance of various propensity score weighting methods under diverse situations, such as limited overlap, a misspecified propensity score model, and treatment contrary to prediction. The simulation results show that overlap weights and matching weights consistently outperform inverse probability weighting and trimming in terms of bias, root mean squared error, and coverage probability.
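
  A minimal Python sketch of the four weighting schemes compared in this abstract is given below; the toy data, the weight-zeroing form of trimming, and the 0.1/0.9 trimming thresholds are illustrative assumptions rather than the paper's simulation design.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ps_weights(z, e, trim=(0.1, 0.9)):
          # z: binary treatment (0/1), e: estimated propensity score P(Z = 1 | X)
          ipw = z / e + (1 - z) / (1 - e)                                 # inverse probability (ATE) weights
          trimmed = np.where((e >= trim[0]) & (e <= trim[1]), ipw, 0.0)   # drop units with extreme scores
          overlap = z * (1 - e) + (1 - z) * e                             # overlap weights
          matching = np.minimum(e, 1 - e) / (z * e + (1 - z) * (1 - e))   # matching weights
          return ipw, trimmed, overlap, matching

      def weighted_effect(y, z, w):                                       # weighted difference in means
          return np.average(y[z == 1], weights=w[z == 1]) - np.average(y[z == 0], weights=w[z == 0])

      rng = np.random.default_rng(2)
      n = 3000
      x = rng.normal(size=(n, 2))
      e_true = 1 / (1 + np.exp(-(1.5 * x[:, 0] - 1.0 * x[:, 1])))         # limited overlap
      z = rng.binomial(1, e_true)
      y = 1.0 * z + x.sum(axis=1) + rng.normal(size=n)                    # constant true effect of 1

      e_hat = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)[:, 1]
      for name, w in zip(["IPW", "trimmed", "overlap", "matching"], ps_weights(z, e_hat)):
          print(name, round(weighted_effect(y, z, w), 3))

  Because the simulated treatment effect is constant, all four weights target the same value here; with heterogeneous effects they estimate different estimands (ATE for IPW, ATO for overlap weights, ATM for matching weights), which is part of why their finite-sample behavior differs.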

Overview of estimating the average treatment effect using dimension reduction methods (차원축소 방법을 이용한 평균처리효과 추정에 대한 개요)

  • Mijeong Kim
    • The Korean Journal of Applied Statistics, v.36 no.4, pp.323-335, 2023
  • In causal analysis of high-dimensional data, it is important to reduce the dimension of the covariates and transform them appropriately to control for confounders that affect the treatment and the potential outcomes. The augmented inverse probability weighting (AIPW) method is mainly used for estimating the average treatment effect (ATE). The AIPW estimator is obtained from an estimated propensity score and an estimated outcome model. The ATE estimator can be inconsistent or have a large asymptotic variance when the propensity score and outcome model are estimated by parametric methods that include all covariates, especially for high-dimensional data. For this reason, ATE estimation using appropriate dimension reduction methods and semiparametric models for high-dimensional data is attracting attention. Semiparametric methods or sparse sufficient dimension reduction methods can be used to reduce the dimension when estimating the propensity score and the outcome model. Recently, another method has been proposed that uses neither the propensity score nor outcome regression: after reducing the dimension of the covariates, the ATE can be estimated by matching. Among the studies on ATE estimation methods for high-dimensional data, four recently proposed approaches are introduced, and the interpretation of the estimated ATE is discussed.
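
  For reference, the AIPW estimator that these methods build on has the following form; the sketch below uses simple low-dimensional parametric working models in place of the dimension-reduction and semiparametric estimators discussed in the overview, and the function name is illustrative.

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      def aipw_ate(y, z, x):
          # y: outcome, z: binary treatment (0/1), x: covariate matrix
          e = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)[:, 1]   # propensity score
          m1 = LinearRegression().fit(x[z == 1], y[z == 1]).predict(x)             # outcome model, treated
          m0 = LinearRegression().fit(x[z == 0], y[z == 0]).predict(x)             # outcome model, control
          mu1 = np.mean(z * (y - m1) / e + m1)                 # augmented IPW mean under treatment
          mu0 = np.mean((1 - z) * (y - m0) / (1 - e) + m0)     # augmented IPW mean under control
          return mu1 - mu0

  The estimator is doubly robust: it remains consistent if either the propensity score model or the outcome model is correctly specified, which is why the choice of dimension reduction for the two working models is the central issue in the overview.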

Associations Between Heart Rate Variability and Symptom Severity in Patients With Somatic Symptom Disorder (신체 증상 장애 환자의 심박변이도와 증상 심각도의 연관성)

  • Eunhwan Kim; Hesun Kim; Jinsil Ham; Joonbeom Kim; Jooyoung Oh
    • Korean Journal of Psychosomatic Medicine, v.31 no.2, pp.108-117, 2023
  • Objectives : Somatic symptom disorder (SSD) is characterized by the manifestation of a variety of physical symptoms, but little is known about differences in autonomic nervous system activity according to symptom severity, especially within patient groups. In this study, we examined differences in heart rate variability (HRV) across levels of symptom severity in a group of SSD patients, using HRV as a representative marker of autonomic nervous system changes. Methods : Medical records were retrospectively reviewed for patients diagnosed with SSD based on DSM-5 criteria between September 18, 2020 and October 29, 2021. We applied inverse probability of treatment weighting (IPTW) to generate more homogeneous comparisons of HRV parameters by correcting for selection bias due to differences in sociodemographic and clinical characteristics between groups. Results : There were statistically significant correlations between somatic symptom severity and LF (nu), HF (nu), and LF/HF, as well as SD1/SD2 and Alpha1/Alpha2. After IPTW estimation, the mild-to-moderate group was weighted to 27 (53.0%) and the severe group to 24 (47.0%), and homogeneity was achieved in that the differences in demographic and clinical characteristics were no longer significant. The inverse probability weighted regression adjustment model showed that the severe group had significantly lower RMSSD (β=-0.70, p=0.003) and pNN20 (β=-1.04, p=0.019) in the time domain; higher LF (nu) (β=0.29, p<0.001), lower HF (nu) (β=-0.29, p<0.001), and higher LF/HF (β=1.41, p=0.001) in the frequency domain; and, in the nonlinear domain, significant differences in SampEn15 (β=-0.35, p=0.014), SD1/SD2 (β=-0.68, p<0.001), and Alpha1/Alpha2 (β=0.43, p=0.001). Conclusions : These results suggest that HRV parameters differ by SSD symptom severity across the time, frequency, and nonlinear domains, with specific parameters indicating significantly higher sympathetic nervous system activity and reduced parasympathetic capacity in SSD patients with severe symptoms.
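
  The IPTW step described in the Methods weights each patient by the inverse of the estimated probability of belonging to his or her severity group, and the resulting "homogeneity" is then typically checked with weighted standardized mean differences. The helper below is a hypothetical illustration of that balance check, not code from the study.

      import numpy as np

      def weighted_smd(x, g, w):
          # x: a covariate, g: severity group coded 0/1, w: IPTW weights
          m1 = np.average(x[g == 1], weights=w[g == 1])
          m0 = np.average(x[g == 0], weights=w[g == 0])
          v1 = np.average((x[g == 1] - m1) ** 2, weights=w[g == 1])
          v0 = np.average((x[g == 0] - m0) ** 2, weights=w[g == 0])
          return (m1 - m0) / np.sqrt((v1 + v0) / 2)   # |SMD| < 0.1 is a common rule of thumb for balance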

Performance study of propensity score methods against regression with covariate adjustment

  • Park, Jincheol
    • Journal of the Korean Data and Information Science Society, v.26 no.1, pp.217-227, 2015
  • In observational studies, handling confounders is a primary issue in measuring the treatment effect of interest. Historically, regression with covariate adjustment (covariate-adjusted regression) has been the typical approach to estimating the treatment effect while incorporating potential confounders into the model. However, since the introduction of the propensity score, covariate-adjusted regression has gradually been replaced in the medical literature by various balancing methods based on the propensity score. On the other hand, there is only a paucity of research assessing propensity score methods against covariate-adjusted regression. This paper examines the performance of propensity score methods in estimating the risk difference and compares them with covariate-adjusted regression in a Monte Carlo study. The study demonstrates that, in general, covariate-adjusted regression with a variable selection procedure outperforms propensity-score-based methods in terms of both bias and MSE, suggesting that the classical regression method, rather than propensity score methods, should be considered when performance is the primary concern.
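
  The sketch below shows the mechanics of such a Monte Carlo comparison for the risk difference, contrasting a covariate-adjusted regression (here a linear probability model) with simple propensity score IPW. The data-generating values are hypothetical, and the sketch makes no attempt to reproduce the paper's simulation design or findings.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      TRUE_RD = 0.10

      def one_replicate(n=1000):
          x = rng.normal(size=(n, 2))
          e = 1 / (1 + np.exp(-(0.6 * x[:, 0] - 0.4 * x[:, 1])))             # true propensity score
          z = rng.binomial(1, e)
          p = np.clip(0.3 + TRUE_RD * z + 0.05 * x[:, 0] + 0.05 * x[:, 1], 0.01, 0.99)
          y = rng.binomial(1, p)
          # covariate-adjusted regression: coefficient on z in a linear probability model
          rd_reg = np.linalg.lstsq(np.column_stack([np.ones(n), z, x]), y, rcond=None)[0][1]
          # propensity score IPW: difference of weighted outcome means
          e_hat = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)[:, 1]
          w = z / e_hat + (1 - z) / (1 - e_hat)
          rd_ipw = np.average(y[z == 1], weights=w[z == 1]) - np.average(y[z == 0], weights=w[z == 0])
          return rd_reg, rd_ipw

      est = np.array([one_replicate() for _ in range(200)])
      print("bias (regression, IPW):", np.round(est.mean(axis=0) - TRUE_RD, 4))
      print("MSE  (regression, IPW):", np.round(((est - TRUE_RD) ** 2).mean(axis=0), 5))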

Nonparametric estimation for interval censored competing risk data

  • Kim, Yang-Jin; Kwon, Do young
    • Journal of the Korean Data and Information Science Society, v.28 no.4, pp.947-955, 2017
  • Competing risk analysis is applied when subjects can experience more than one type of end point. Geskus (2011) showed that three types of estimators of the cumulative incidence function (CIF) are equivalent for left truncated and right censored data. We extend his approach to interval censored competing risk data by using a modified risk set and evaluate the performance of the estimators for several sample sizes. The estimators give very similar results. We also suggest a test statistic combining Sun's test for interval censored data and Gray's test for right censored data, and compare test sizes and powers across several settings. As a real data application, the suggested method is applied to a study that assessed the feasibility of an HIV vaccine in injecting drug users.

Association measure of doubly interval censored data using a Kendall's 𝜏 estimator

  • Kang, Seo-Hyun; Kim, Yang-Jin
    • Communications for Statistical Applications and Methods, v.28 no.2, pp.151-159, 2021
  • In this article, our interest is in estimating the association between consecutive gap times that are subject to interval censoring. Such data are referred to as doubly interval censored data (Sun, 2006). In the context of serial events, induced dependent censoring frequently occurs, resulting in biased estimates. In this study, our goal is to propose a Kendall's 𝜏-based association measure for doubly interval censored data. To adjust for the impact of induced dependent censoring, the inverse probability of censoring weighting (IPCW) technique is implemented. Furthermore, a multiple imputation technique is applied to recover failure times that are unknown owing to interval censoring. Simulation studies demonstrate that the suggested association estimator performs well with moderate sample sizes. The proposed method is applied to a dataset of children's dental records.
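
  To show where IPCW enters, the sketch below computes an inverse-probability-of-censoring weighted Kendall's tau for two consecutive gap times under plain right censoring of the total follow-up time, ignoring the interval censoring and multiple imputation handled in the paper; the function name and this simplified setting are assumptions for illustration only.

      import numpy as np
      from itertools import combinations

      def ipcw_kendall_tau(x, y, c):
          # x, y: consecutive gap times; c: common right-censoring time.
          # A pair of gap times is fully observed only when x + y <= c,
          # which induces dependent censoring of the second gap time y.
          t = x + y
          delta = (t <= c).astype(float)                        # 1 = both gap times observed
          obs = np.minimum(t, c)
          # reverse Kaplan-Meier estimate of the censoring survival function G(t-)
          order = np.argsort(obs)
          o_s, cens_s = obs[order], 1.0 - delta[order]
          at_risk = len(obs) - np.arange(len(obs))
          G_steps = np.cumprod(1.0 - cens_s / at_risk)
          G_minus = np.concatenate(([1.0], G_steps))[np.searchsorted(o_s, obs, side="left")]
          w = delta / G_minus                                   # IPCW weight; 0 for censored pairs
          num = den = 0.0
          for i, j in combinations(np.where(delta == 1)[0], 2):
              num += w[i] * w[j] * np.sign((x[i] - x[j]) * (y[i] - y[j]))
              den += w[i] * w[j]
          return num / den                                      # weighted concordance estimate of tau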

Application of Standardization for Causal Inference in Observational Studies: A Step-by-step Tutorial for Analysis Using R Software

  • Lee, Sangwon; Lee, Woojoo
    • Journal of Preventive Medicine and Public Health, v.55 no.2, pp.116-124, 2022
  • Epidemiological studies typically examine the causal effect of an exposure on a health outcome. Standardization is one of the most straightforward methods for estimating causal estimands. However, compared to inverse probability weighting, there is a lack of user-centric explanations of how to implement standardization to estimate causal estimands. This paper explains the standardization method using only basic R functions and shows how it is linked to the R package stdReg, which can be used to implement the same procedure. We provide a step-by-step tutorial for estimating causal risk differences, causal risk ratios, and causal odds ratios based on standardization, and we also discuss how to carry out subgroup analysis in detail.
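
  The core of the standardization (g-computation) procedure that the tutorial walks through can be summarized as follows; this is a Python sketch of the same logic, whereas the paper itself works through base R functions and the stdReg package.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def standardize(y, a, x):
          # y: binary outcome, a: binary exposure (0/1), x: covariate matrix
          model = LogisticRegression(max_iter=1000).fit(np.column_stack([a, x]), y)
          # predict every subject's risk with the exposure set to 1 and to 0, then average
          risk1 = model.predict_proba(np.column_stack([np.ones_like(a), x]))[:, 1].mean()
          risk0 = model.predict_proba(np.column_stack([np.zeros_like(a), x]))[:, 1].mean()
          rd = risk1 - risk0                                    # causal risk difference
          rr = risk1 / risk0                                    # causal risk ratio
          odds = (risk1 / (1 - risk1)) / (risk0 / (1 - risk0))  # causal odds ratio
          return rd, rr, odds

  For a subgroup analysis, the same fitted outcome model is used, but the predict-and-average steps are restricted to the rows belonging to the subgroup of interest.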