Objectives: To investigate the utilisation, reliability and validity of the clinical evaluation exercise (CEX) in otolaryngology training.
Design: Retrospective database analysis.
Setting: Online assessment database.
Participants: We analysed all CEXs submitted by north London core trainees (CT) and speciality trainees (ST) in otolaryngology from 2010 to 2013.
Main outcome measures: Internal consistency of the 7 CEX items, each rated as outstanding (O), satisfactory (S) or development required (D). Overall performance rating (pS) of 1-4, assessed against completion of training level. Receiver operating characteristic (ROC) analysis was used to describe CEX sensitivity and specificity. Overall score (cS), pS and the number of 'D'-rated items were used to investigate construct validity.
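As an illustration of the internal-consistency measure, the sketch below computes Cronbach's alpha for a matrix of CEX forms scored on the 7 items. The numeric coding of the O/S/D ratings (D = 0, S = 1, O = 2) and the toy data are assumptions for demonstration only, not the study's dataset.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_forms, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (7 for CEX)
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical coding: D = 0, S = 1, O = 2 across the 7 CEX items.
rng = np.random.default_rng(0)
forms = rng.integers(0, 3, size=(1160, 7))  # toy stand-in for the 1160 CEXs
# Independent random items give alpha near 0; the study reported 0.85.
print(f"Cronbach's alpha = {cronbach_alpha(forms):.2f}")
```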
Results: One thousand one hundred and sixty CEXs from 45 trainees were included. CEX showed good internal consistency (Cronbach's alpha = 0.85). CEX was highly sensitive (99%) but poorly specific (6%). cS and pS for ST were higher than for CT (99.1% ± 0.4 versus 96.6% ± 0.8 and 3.06 ± 0.05 versus 1.92 ± 0.04, respectively; P < 0.001). pS showed a significant stepwise increase from CT1 to ST6 (P < 0.001). In contrast, cS showed improvement only up to ST4 (P = 0.025). The most frequently utilised item, 'management and follow-up planning', was the best predictor of cS and pS (rs = +0.69 and +0.21, respectively).
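A minimal sketch of how the sensitivity/specificity and correlation figures above could be derived, assuming binary labels for completion of training level and per-form rating values. The variable names and toy data are hypothetical; the study's actual analysis pipeline is not described in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical data: 1 = at/above expected training level, 0 = below.
level_met = rng.integers(0, 2, size=200)
overall_rating = rng.integers(1, 5, size=200)     # pS on the 1-4 scale

# ROC: sensitivity is the true-positive rate, specificity is 1 - FPR.
fpr, tpr, thresholds = roc_curve(level_met, overall_rating)
for thr, sens, spec in zip(thresholds, tpr, 1 - fpr):
    print(f"threshold {thr}: sensitivity {sens:.2f}, specificity {spec:.2f}")

# Construct validity: Spearman correlation between an item and overall rating.
item_score = rng.integers(0, 3, size=200)         # e.g. D/S/O coded 0/1/2
rho, p = spearmanr(item_score, overall_rating)
print(f"Spearman rs = {rho:+.2f} (P = {p:.3f})")
```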
Conclusion: CEX reliably assesses clinical examination skills in early-years otolaryngology trainees, but not at higher levels of training. It has the potential to be used in a summative capacity when selecting trainees for ST positions. This would also encourage trainees to master all domains of otolaryngology clinical examination by the end of CT.