There is a dearth of validated instruments for assessing clinical teaching in veterinary education. This study describes the development and validation of the SFDP-Vet22, a veterinary adaptation of the Stanford Faculty Development Program 26 (SFDP-26) instrument, for student evaluation of veterinary clinical educators. Validity evidence was gathered in three categories: (a) content, (b) response process, and (c) internal structure. Content validity was supported by the educational theory and research underlying the SFDP-26. The process of adapting the SFDP-26 to the veterinary clinical education setting and piloting the SFDP-Vet22 supported response-process validity, although straightlining indicated that some students ([Formula: see text]) did not use the instrument as intended. Internal-structure validity was supported by an exploratory factor analysis, performed using principal axis factoring extraction and direct oblimin oblique rotation ([Formula: see text]) on Box-Cox-transformed data, which yielded a six-factor solution. Twenty of the 22 items loaded on the predicted factors. Cronbach's alpha for each factor was above .846, mean inter-item correlations ranged from .594 to .794, and mean item-total correlations ranged from .693 to .854. The six-factor solution explained 75.5% of the variance, indicating a robust model. The results indicated that the control of session, communication of goals, and self-directed learning factors were stable and consistently loaded as predicted, whereas the learning climate, evaluation, and feedback factors were unstable. This suggests that these constructs transfer, at least in part, from medical to veterinary education and supports the instrument's intended use: making low-stakes decisions about clinical educator performance and identifying educators' areas for potential growth.
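The following is a minimal sketch of the kind of analysis pipeline described above (Box-Cox transformation, exploratory factor analysis with principal axis factoring and direct oblimin rotation, and per-factor reliability statistics). It is not the authors' code: the column names, the 1-5 response scale, the random example data, and the grouping of items into a factor are all illustrative assumptions.

```python
# Sketch of the reported analysis steps, assuming Likert-style responses
# (strictly positive values) stored one item per column in a pandas DataFrame.
import numpy as np
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer


def boxcox_transform(df: pd.DataFrame) -> pd.DataFrame:
    """Apply a per-item Box-Cox transformation (requires positive data)."""
    return df.apply(lambda col: pd.Series(stats.boxcox(col.values)[0], index=col.index))


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items belonging to one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def mean_inter_item_corr(items: pd.DataFrame) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = items.corr().values
    off_diagonal = ~np.eye(corr.shape[0], dtype=bool)
    return corr[off_diagonal].mean()


def mean_item_total_corr(items: pd.DataFrame) -> float:
    """Mean correlation of each item with the total scale score."""
    total = items.sum(axis=1)
    return float(np.mean([items[c].corr(total) for c in items.columns]))


# Hypothetical data: 200 students rating 22 items on a 1-5 scale.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(200, 22)),
    columns=[f"item{i:02d}" for i in range(1, 23)],
).astype(float)

transformed = boxcox_transform(responses)

# Exploratory factor analysis: principal axis factoring, direct oblimin rotation,
# six factors retained.
efa = FactorAnalyzer(n_factors=6, method="principal", rotation="oblimin")
efa.fit(transformed)
loadings = pd.DataFrame(efa.loadings_, index=transformed.columns)
_, _, cumulative_variance = efa.get_factor_variance()
print(f"Variance explained by 6 factors: {cumulative_variance[-1]:.1%}")

# Reliability statistics for one illustrative factor grouping (items 1-4).
factor_items = transformed[[f"item{i:02d}" for i in range(1, 5)]]
print(
    f"alpha={cronbach_alpha(factor_items):.3f}, "
    f"mean inter-item r={mean_inter_item_corr(factor_items):.3f}, "
    f"mean item-total r={mean_item_total_corr(factor_items):.3f}"
)
```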
Keywords: evaluation of teaching; student evaluation of teaching; veterinary clinical teaching.