Current guidelines recommend a multiparametric echocardiographic assessment of aortic regurgitation (AR). However, the absence of a hierarchical weighting of discordant parameters can contribute to interobserver variability. In the present study, we sought to quantify and reduce the interobserver variability of AR assessment. Seventeen level 3 readers graded the echocardiograms of 20 randomly selected patients with AR. The readers also assigned each parameter a usefulness score reflecting its influence on their choice of AR severity grade. A consensus strategy was subsequently formulated and validated against cardiac magnetic resonance imaging in a separate group of 80 patients. The readers were then provided with the consensus document and recalibrated using the same 20 cases. Agreement was assessed statistically using Randolph's free-marginal multirater kappa. At baseline, no uniform approach was used to combine the individual parameters, contributing to interobserver variability (overall kappa 0.5). A consensus strategy for categorizing AR severity was developed in which left ventricular volume took precedence over the other parameters and was used to differentiate chronic severe AR from less severe grades. Recalibration of the readers using this consensus strategy improved concordance (kappa increased to 0.7). The new strategy also improved accuracy relative to cardiac magnetic resonance imaging: in the separate validation group of 80 patients, the consensus document-based grading agreed fully with cardiac magnetic resonance-defined severity for severe AR. In conclusion, grading of chronic AR using a multiparametric approach shows suboptimal consistency between readers, and a left ventricular volume-based consensus document improved both concordance and accuracy.
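For reference, Randolph's free-marginal multirater kappa assumes the readers are not constrained to fixed category marginals, so chance agreement is simply 1/k. Writing N for the number of cases, n for the number of readers per case, k for the number of severity categories, and n_{ij} for the number of readers assigning case i to category j (notation introduced here for illustration), the statistic is

\[
\bar{P}_{o} = \frac{1}{N\,n(n-1)} \sum_{i=1}^{N} \sum_{j=1}^{k} n_{ij}\left(n_{ij}-1\right),
\qquad
\kappa_{\mathrm{free}} = \frac{\bar{P}_{o} - 1/k}{1 - 1/k},
\]

where \bar{P}_{o} is the observed pairwise agreement across all reader pairs and cases.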
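The hierarchical rule itself can be pictured as a gate on the severe category. The sketch below is illustrative only: the function name, the grade encoding, and the left ventricular volume cutoff are assumptions, because the abstract reports the precedence given to left ventricular volume but not the thresholds used in the consensus document.

from enum import IntEnum

class ARGrade(IntEnum):
    MILD = 1
    MODERATE = 2
    SEVERE = 3

# Placeholder cutoff for LV dilation (indexed end-diastolic volume, mL/m^2);
# the abstract does not report the actual threshold used in the consensus document.
LV_EDV_INDEX_CUTOFF = 95.0

def grade_chronic_ar(lv_edv_index: float, multiparametric_grade: ARGrade) -> ARGrade:
    """Hypothetical hierarchical grading: left ventricular volume takes
    precedence and gates the severe category, so chronic severe AR is
    assigned only when the ventricle is dilated."""
    if multiparametric_grade is ARGrade.SEVERE and lv_edv_index < LV_EDV_INDEX_CUTOFF:
        # A non-dilated ventricle argues against chronic severe AR,
        # so the otherwise-severe multiparametric read is downgraded.
        return ARGrade.MODERATE
    return multiparametric_grade

In this sketch, an otherwise severe echocardiographic read is downgraded when the ventricle has not remodeled, mirroring the precedence the consensus document gives to left ventricular volume over the other parameters.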