Background: The tendency to add items to resident performance rating forms has accelerated due to new ACGME competency requirements. This study addresses the relative merits of adding items versus increasing the number of observations. The specific questions addressed are (1) what is the reliability of single items used to assess resident performance, (2) what effect does adding items have on reliability, and (3) how many observations are required to obtain reliable resident performance ratings.
Methods: Surgeon ratings of resident performance were collected over 3 years. The rating instrument consisted of 3 single-item scales assessing clinical performance, professional behavior, and comparison with other house staff. Reliability analyses were performed separately for each year, and variance components were pooled across years to compute overall reliability coefficients.
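The pooling step described above can be sketched as follows. This is an illustrative sketch only: the variance-component values are hypothetical, the unweighted averaging across years is an assumption, and the reliability formula assumes a simple one-facet design (residents crossed with observations), none of which is specified in the abstract.

```python
# Hypothetical variance components (resident = true-score variance,
# residual = error variance) estimated separately for each of 3 years.
yearly_components = [
    {"resident": 0.42, "residual": 0.61},
    {"resident": 0.45, "residual": 0.58},
    {"resident": 0.40, "residual": 0.63},
]

def pooled(components, key):
    """Unweighted mean of a variance component across years (an assumption)."""
    return sum(c[key] for c in components) / len(components)

def g_coefficient(var_resident, var_residual, n_obs):
    """Reliability of a mean rating based on n_obs observations."""
    return var_resident / (var_resident + var_residual / n_obs)

vr = pooled(yearly_components, "resident")
ve = pooled(yearly_components, "residual")
for n in (1, 5, 10):
    print(n, round(g_coefficient(vr, ve, n), 3))
```

With these made-up numbers, reliability rises steeply as the number of observations per resident increases, which is the pattern the study examines.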
Results: Single-item resident performance rating scales were equivalent to multiple-item scales by conventional reliability standards. Increasing the number of rating items had little effect on reliability; increasing the number of observations had a much larger effect.
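The contrast in these results can be illustrated with a decision-study calculation from generalizability theory. All variance-component values below are hypothetical and chosen only to show the mechanism: when most error variance is tied to the occasion of observation rather than to the items, adding items shrinks only a small error term, while adding observations shrinks the dominant one.

```python
# Two-facet D-study sketch (hypothetical variance components): reliability
# of a mean rating as a function of the number of items and observations.
def g_coef(n_items, n_obs,
           var_p=0.40,     # resident (true-score) variance
           var_pi=0.05,    # resident x item interaction (small: items overlap)
           var_po=0.35,    # resident x observation interaction (large)
           var_err=0.20):  # residual error
    error = var_pi / n_items + var_po / n_obs + var_err / (n_items * n_obs)
    return var_p / (var_p + error)

print(round(g_coef(n_items=3, n_obs=1), 2))   # baseline -> 0.48
print(round(g_coef(n_items=9, n_obs=1), 2))   # triple the items -> 0.51
print(round(g_coef(n_items=3, n_obs=5), 2))   # five observations -> 0.8
```

Under these assumed components, tripling the items barely moves the coefficient, while five observations nearly reach conventional reliability standards, mirroring the pattern reported above.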
Conclusions: Program directors should focus on increasing the number of observations per resident to improve performance sampling and the reliability of assessment. Adding rating items has little effect on reliability and is unlikely to assess the new ACGME competencies adequately.