Lasater clinical judgment rubric reliability for scoring clinical judgment after observing asynchronous simulation and feasibility/usability with learners

Nurse Educ Today. 2023 Jun;125:105769. doi: 10.1016/j.nedt.2023.105769. Epub 2023 Mar 6.

Abstract

Background: There is strong evidence supporting the use of the Lasater Clinical Judgment Rubric (LCJR) to score learners' clinical judgment during in-person simulation performance and in clinical experience reflections. However, a gap exists in using the LCJR to evaluate clinical judgment after learners observe asynchronous simulation.

Objective: We aimed to determine the reliability, feasibility, and usability of the LCJR for scoring learners' written reflections completed after observing expert-modeled asynchronous simulation videos.

Design/setting/participants: We used a one-group, descriptive design and sampled pre-licensure, junior-level bachelor's nursing learners from the Southwestern United States.

Methods: Participants observed eight expert-modeled asynchronous simulation videos over one semester and provided written responses to clinical judgment prompts, which we scored using the LCJR. We assessed reliability by measuring the internal consistency of the 11 clinical judgment prompts and the interrater reliability of two raters. We also investigated the feasibility and usability of the asynchronous simulation learning activity using descriptive statistics. Feasibility comprised the time learners spent completing written responses and the time raters spent evaluating them. Learners reported usability perceptions on an instructor-developed survey.
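For readers unfamiliar with these coefficients, the sketch below illustrates how interrater reliability (Cohen's kappa) and internal consistency (Cronbach's alpha) can be computed for this kind of design. All rating data here are hypothetical placeholders, not the study's data, and the paper does not specify its analysis software.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical LCJR scores (levels 1-4) assigned by two raters to the
# same written responses -- placeholder data, not the study's ratings.
rater1 = np.array([3, 4, 2, 3, 3, 4, 2, 1, 3, 4])
rater2 = np.array([3, 4, 2, 2, 3, 4, 3, 1, 3, 4])

# Interrater reliability: Cohen's kappa (chance-corrected agreement).
print(f"Cohen's kappa: {cohen_kappa_score(rater1, rater2):.2f}")

# Internal consistency: Cronbach's alpha across the 11 prompts,
# computed from an (n_learners x n_prompts) score matrix.
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(63, 11)).astype(float)  # placeholder

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1).sum()
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```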

Results: Sixty-three learners completed 504 written responses to clinical judgment prompts. Cohen's kappa ranged from 0.34 to 0.86, with a cumulative κ = 0.58. Gwet's AC ranged from 0.48 to 0.90, with a cumulative AC = 0.74. Cronbach's alpha ranged from 0.51 to 0.72. Learners spent an average of 28.32 ± 12.99 min per expert-modeled video observation, and raters spent an average of 4.85 ± 1.34 min evaluating each participant's written responses. Learners reported that the asynchronous learning activity was usable.
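As context for why the Gwet's AC values run higher than the corresponding kappa values: both coefficients correct observed agreement p_a by an estimate of chance agreement p_e, but they estimate p_e differently. A standard formulation (general background, assuming the AC1 variant; not taken from the paper) is:

```latex
\kappa = \frac{p_a - p_e}{1 - p_e},
\qquad
p_e^{(\kappa)} = \sum_{q=1}^{Q} p_{1q}\, p_{2q},
\qquad
p_e^{(\mathrm{AC1})} = \frac{1}{Q-1} \sum_{q=1}^{Q} \pi_q \, (1 - \pi_q),
\quad
\pi_q = \frac{p_{1q} + p_{2q}}{2},
```

where p_{rq} is rater r's marginal proportion for rating category q of the Q categories. Because the AC1 chance term shrinks when category prevalence is skewed, Gwet's coefficient is less prone than kappa to deflated values under high raw agreement, which is consistent with the AC range reported here sitting above the kappa range.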

Conclusions: Nurse educators can reliably use the LCJR to score learners' clinical judgment after they observe asynchronous expert-modeled simulation. Logistically, learners can complete the reflective learning activity, and faculty can use the LCJR to measure clinical judgment, within a feasible amount of time. Further, participants perceived the asynchronous learning activity as usable. Nurse educators should use this learning activity to evaluate and track observers' clinical judgment development.

Keywords: Asynchronous; Clinical judgment; Expert modeling; Feasibility; Measurement; Nursing; Observer; Reliability; Simulation; Time; Usability; Workload.

MeSH terms

  • Clinical Competence
  • Educational Measurement
  • Feasibility Studies
  • Humans
  • Judgment*
  • Reproducibility of Results
  • Students, Nursing*