Background: It is reasonable to propose that competence is a multifaceted characteristic defined in part by some minimum level of knowledge and skill. In this study we examined the relationship between surgical faculty's judgment of clinical competence, as measured by a surgical resident objective structured clinical examination (OSCE), and the residents' objective performance on the skills being tested.
Methods: Fifty-six general surgery residents at all levels of training participated in a 30-station OSCE. At the completion of each station, the faculty proctor made several overall judgments regarding each resident's performance, including a global judgment of competent or not competent. The competence judgment was then applied to the objective percentage performance score in three different ways to construct standards for determining competence based solely on that objective score.
Results: The average mean competent score (MCS) across the stations was 61%, and the average mean noncompetent score (MNCS) was 38%. The difference between the MCS and MNCS was very consistent from station to station. We observed upper threshold scores above which a judgment of competent was always made and lower threshold scores below which a judgment of noncompetent was always made. Overall, the average mean and threshold scores for the competent and noncompetent groups were remarkably similar. For performance scores falling between the competent and noncompetent thresholds at a station, measures other than objective performance on the skills being evaluated determined the judgment of competent or not competent.
Conclusions: Empirically determined minimum acceptable standards for objective performance in clinical skills and knowledge appeared to have been applied subconsciously by the faculty evaluators in making the competence judgment in this study. Other factors appeared to become determinative when the objective performance score fell within a range of uncertainty.
Copyright 1999 Academic Press.