Interrater Reliability in the Content Analysis of Preparatory Information for Mechanically Ventilated Patients

  • Kim Hwa-Soon (Department of Nursing, College of Medicine, Inha University)
  • Published: 1998.12.05

Abstract

In nursing research in which data are collected through clinical observation, analysis of clinical records, or coding of interpersonal interactions in clinical settings, testing and reporting interrater reliability is essential to ensure trustworthy results. Procedures for establishing interrater reliability in such studies should follow two steps. The first step is to determine unitizing reliability, defined as the consistency with which two or more raters reviewing the same record identify the same data elements. Unitizing reliability has rarely been reported in previous studies, yet it should be tested as a precondition before proceeding to the next step. The second step is to determine interpretive reliability. Cohen's kappa is a preferable method for calculating the extent of agreement between observers or judges because it measures agreement beyond chance. Despite its usefulness, kappa can sometimes lead to paradoxical conclusions and can be difficult to interpret. These difficulties arise because kappa is affected in complex ways by the presence of bias between observers and by the true prevalence of particular categories. Therefore, percentage agreement should be reported together with kappa for its adequate interpretation. The presence of bias should be assessed with the bias index, and the effect of prevalence with the prevalence index. Researchers have typically reported only global reliability, which reflects the extent to which coders can consistently apply the whole coding system across all categories. Category-by-category reliability also needs to be reported, because it reveals whether some categories are harder to use than others.
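
As an illustration of the statistics discussed above, the sketch below computes percentage agreement, Cohen's kappa, the bias index, and the prevalence index from the cell counts of a 2x2 agreement table for two raters. This is a minimal sketch, not part of the original article; the function name, cell labels (a, b, c, d), and example counts are assumptions chosen for illustration, and the bias and prevalence indices follow the commonly used definitions (b - c)/n and (a - d)/n.

    # Minimal sketch: agreement statistics for two raters and one binary category.
    # Cell counts of the 2x2 table: a = both raters code "present",
    # d = both code "absent", b and c = the two kinds of disagreement.

    def agreement_statistics(a: int, b: int, c: int, d: int) -> dict:
        n = a + b + c + d
        p_o = (a + d) / n                                     # percentage (observed) agreement
        p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # agreement expected by chance
        kappa = (p_o - p_e) / (1 - p_e)                       # Cohen's kappa: beyond-chance agreement
        return {
            "percent_agreement": p_o,
            "kappa": kappa,
            "bias_index": (b - c) / n,        # asymmetry of disagreements between the raters
            "prevalence_index": (a - d) / n,  # imbalance in how often the category is coded
        }

    # Example of the prevalence effect: 90% raw agreement, yet kappa is only about 0.44
    # because one category is far more prevalent than the other (prevalence index = 0.80).
    print(agreement_statistics(a=85, b=5, c=5, d=5))

Reporting percentage agreement and the two indices alongside kappa, as recommended above, makes it clear when a low kappa reflects prevalence or bias effects rather than genuinely poor agreement.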
