Objective: To investigate how analyses of interobserver and intraobserver variability were reported in clinical research studies published in 2005 in five high-impact cardiology journals.
Study design and setting: A cross-sectional study using a combined electronic and manual search identified 180 of 511 eligible articles that reported the assessment of observer variability. Sixty of these were randomly selected for detailed review.
Results: The proportions of the 60 studies reporting interobserver variability, intraobserver variability, or both were 27%, 17%, and 53%, respectively. In the reported methodological design of interobserver and intraobserver analyses, respectively, a specific protocol was described in 42% and 33%, observers were identified as independent in 31% and 17% and as blinded in 50% and 31%, and an a priori statistical plan was identified in only 33% and 36%. Pearson correlation was the most frequently reported measure for continuous variables; the methods of Bland and Altman were reported in 15% of interobserver and 14% of intraobserver analyses. For categorical variables, a kappa statistic was reported in 82% of interobserver and 80% of intraobserver analyses.
Conclusion: Reliability assessment is hampered by unclear and incomplete reporting of interobserver and intraobserver analyses. For continuous variables, the methods most frequently reported were inappropriate ones, such as Pearson correlation.
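As context for why correlation is considered inappropriate here, the Bland and Altman approach assesses agreement directly. For n paired readings $(x_{1i}, x_{2i})$ by two observers, the 95% limits of agreement are given by the standard formulation below (an illustrative sketch, not taken from the reviewed studies):

\[
\bar{d} \pm 1.96\, s_d, \qquad \bar{d} = \frac{1}{n}\sum_{i=1}^{n}\left(x_{1i} - x_{2i}\right),
\]

where $s_d$ is the standard deviation of the paired differences. Pearson's $r$, by contrast, measures linear association and can be close to 1 even when one observer systematically over-reads the other, so a high correlation does not imply agreement.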