Objective To analyze the reproducibility and intra- and interobserver agreement of the IDEAL classification for distal radius fractures.

Methods This qualitative, analytical study evaluated 50 pairs of two-view radiographs from patients with distal radius fractures. Ten observers with different levels of orthopedic training assessed the radiographs in three separate evaluation rounds. Intraobserver and interobserver agreement were determined with Cohen's and Fleiss' kappa tests, respectively. Statistical calculations were performed in Excel and SPSS, version 26.0.

Results Cohen's kappa values for the intraobserver evaluation indicated poor to little agreement (-0.177 to 0.259), with statistical significance in only one instance. Fleiss' kappa values revealed little agreement among the resident group (0.277 to 0.383), with statistical significance; poor to little agreement among the general orthopedists (0.114 to 0.225), with statistical significance in most instances; and moderate agreement among the hand surgeons (0.449 to 0.533), with statistical significance.

Conclusion The interobserver agreement of the IDEAL classification ranged from poor to moderate and was influenced by the physicians' level of training. Intraobserver agreement levels ranged from poor to little, mostly without statistical significance.
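For readers unfamiliar with the agreement statistics, the following is a minimal sketch of how Cohen's and Fleiss' kappa are computed for data of this shape, assuming the IDEAL grades are coded as integer categories (the 1-4 coding and the random ratings below are purely illustrative; the study itself used Excel and SPSS, version 26.0, not this code).

    # Hypothetical sketch: Cohen's kappa (intraobserver) and Fleiss' kappa
    # (interobserver) on simulated IDEAL-style categorical ratings.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(0)

    # Intraobserver agreement: one observer grades the same 50 radiographs
    # in two separate rounds.
    round1 = rng.integers(1, 5, size=50)   # illustrative grades, coded 1-4
    round2 = rng.integers(1, 5, size=50)
    print("Cohen's kappa:", cohen_kappa_score(round1, round2))

    # Interobserver agreement: ten observers each grade the same 50 cases.
    ratings = rng.integers(1, 5, size=(50, 10))  # rows = cases, cols = raters
    table, _ = aggregate_raters(ratings)         # per-case category counts
    print("Fleiss' kappa:", fleiss_kappa(table))

With real ratings in place of the simulated arrays, values near 0 correspond to the poor-to-little agreement reported for the intraobserver evaluation, while values in the 0.41 to 0.60 range correspond to the moderate agreement seen among hand surgeons.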
Keywords: classification; radius fractures; reproducibility of results.