Objective: We evaluated the inter-rater reliability (IRR) of assessing the quality of evidence (QoE) using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach.
Study design and setting: After completing two training exercises, participants worked independently as individual raters to assess the QoE of 16 outcomes. Raters first recorded their initial impression using a global rating and then graded the QoE following the GRADE approach. Subsequently, randomly paired raters submitted a consensus rating.
Results: For two individual raters, the IRR without the GRADE approach was 0.31 (95% confidence interval [95% CI] = 0.21-0.42) among Health Research Methodology students (n = 10) and 0.27 (95% CI = 0.19-0.37) among GRADE Working Group members (n = 15). The corresponding IRR of the GRADE approach in assessing the QoE was significantly higher: 0.66 (95% CI = 0.56-0.75) and 0.72 (95% CI = 0.61-0.79), respectively. The IRR increased further with three raters (0.80 [95% CI = 0.73-0.86] and 0.74 [95% CI = 0.65-0.81]) or four raters (0.84 [95% CI = 0.78-0.89] and 0.79 [95% CI = 0.71-0.85]). The IRR did not improve when the QoE was assessed through a consensus rating.
Conclusion: Our findings suggest that use of the GRADE approach by trained individuals improves the reliability of QoE assessments compared with intuitive judgments and that two individual raters can reliably assess the QoE using the GRADE system.