Assessing change in clinical teaching skills: are we up for the challenge?

Teach Learn Med. 2008 Oct-Dec;20(4):288-94. doi: 10.1080/10401330802384177.

Abstract

Background: The faculty development community has been challenged to assess program impact more rigorously and to move beyond traditional outcomes such as knowledge tests and self-ratings.

Purpose: The purpose was to (a) assess our ability to measure supervisors' feedback skills as demonstrated in a clinical setting and (b) compare the results with traditional outcome measures of faculty development interventions.

Methods: A pre-post study design was used. Resident and expert ratings of supervisors' demonstrated feedback skills were compared with traditional outcomes, including a knowledge test and participant self-evaluation.

Results: Pre-post knowledge increased significantly (pre = 61%, post = 85%; p < .001), as did participants' self-evaluation scores (pre = 4.13, post = 4.79; p < .001). Participants' self-evaluations were moderately to poorly correlated with resident ratings (pre r = .20, post r = .08) and expert ratings (pre r = .43, post r = -.52). Residents and experts would need to evaluate 110 and 200 participants, respectively, for changes in demonstrated feedback skills to reach statistical significance.

Conclusions: It is possible to measure feedback skills in a clinical setting. Although traditional outcome measures showed a significant effect, demonstrating change in the teaching behaviors used in practice will require larger-scale studies than are typically undertaken at present.
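
Note: the sample sizes cited in the Results (110 and 200 participants) are the kind of figures produced by a standard power calculation for a paired pre-post comparison. The sketch below shows how such numbers can be obtained in Python with statsmodels; the effect sizes, alpha, and power used here are illustrative assumptions, not values reported in the abstract.

    # Illustrative sketch only: sample size needed for a paired pre-post t-test.
    # The effect sizes (Cohen's d), alpha, and power are assumed for illustration;
    # they are not reported in the abstract.
    from statsmodels.stats.power import TTestPower

    power_analysis = TTestPower()  # one-sample / paired t-test power

    for rater, assumed_d in [("resident ratings", 0.27), ("expert ratings", 0.20)]:
        n = power_analysis.solve_power(effect_size=assumed_d, nobs=None,
                                       alpha=0.05, power=0.80,
                                       alternative="two-sided")
        print(f"{rater}: ~{n:.0f} participants needed (assumed d = {assumed_d})")

With these assumed effect sizes, the required samples come out near 110 and 200, illustrating why detecting change in observed teaching behavior demands far larger cohorts than typical faculty development programs enroll.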

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Education, Medical, Graduate / standards*
  • Educational Measurement
  • Feedback*
  • Humans
  • Program Evaluation
  • Research Design
  • Staff Development*
  • Teaching / standards*