Training benchmarks based on validated composite scores for the RobotiX robot-assisted surgery simulator on basic tasks

J Robot Surg. 2021 Feb;15(1):69-79. doi: 10.1007/s11701-020-01080-9. Epub 2020 Apr 20.

Abstract

The RobotiX robot-assisted virtual reality simulator aims to aid in the training of novice surgeons outside of the operating room. This study aimed to gather validity evidence on multiple levels for basic skills on the RobotiX simulator. Participants were assigned to a novice, laparoscopically experienced, or robotically experienced group based on their minimally invasive surgical experience. Two basic tasks were performed: wristed manipulation (Task 1) and vessel energy dissection (Task 2). Performance scores and a questionnaire on realism, didactic value, and usability were gathered (content). Composite scores (0-100), pass/fail values, and alternative benchmark scores were calculated. Twenty-seven novice, 21 laparoscopically experienced, and 13 robotically experienced participants were recruited. Content validity evidence was scored positively overall. Statistically significant differences between novice and robotically experienced participants (construct) were found for movements left (Task 1, p = 0.009), movements right (Task 1, p = 0.009; Task 2, p = 0.021), path length left (Task 1, p = 0.020), and time (Task 1, p = 0.040; Task 2, p < 0.001). Composite scores differed significantly between robotically experienced and novice participants for Task 1 (85.5 versus 77.1, p = 0.044) and Task 2 (80.6 versus 64.9, p = 0.001). The pass/fail scores with false-positive/false-negative percentages were 75/100 with 46%/9.1% (Task 1) and 71/100 with 39%/7.0% (Task 2). The calculated benchmark scores resulted in only a minority of novices passing multiple parameters. Validity evidence on multiple levels was assessed for two basic robot-assisted surgical simulation tasks, and the calculated benchmark scores can be used in future surgical simulation training.
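The relationship between a pass/fail cutoff on a composite score and the resulting false-positive/false-negative percentages can be sketched as follows. This is a minimal illustration of the general idea, not the study's actual scoring method or data: the scores, group sizes, and cutoff below are invented for demonstration only.

```python
# Illustrative sketch: how a pass/fail cutoff on a 0-100 composite score yields
# false-positive and false-negative percentages. All numbers are hypothetical,
# not data from the study.

def pass_fail_rates(novice_scores, expert_scores, cutoff):
    """False positives: novices who pass the cutoff.
    False negatives: experienced participants who fail it."""
    fp = sum(s >= cutoff for s in novice_scores) / len(novice_scores) * 100
    fn = sum(s < cutoff for s in expert_scores) / len(expert_scores) * 100
    return fp, fn

# Hypothetical composite scores (0-100) for two groups
novices = [60, 70, 78, 55, 80]
experts = [85, 90, 72, 88]

fp, fn = pass_fail_rates(novices, experts, cutoff=75)
print(f"false positives: {fp:.1f}%, false negatives: {fn:.1f}%")
# → false positives: 40.0%, false negatives: 25.0%
```

Raising the cutoff trades false positives for false negatives, which is why a benchmark is typically chosen to keep the false-negative rate low, as reflected in the study's reported 9.1% and 7.0% values.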

Keywords: Robot-assisted; Simulation; Surgical education; Validity evidence.

MeSH terms

  • Adult
  • Benchmarking / standards*
  • Clinical Competence / standards*
  • Humans
  • Laparoscopy / education*
  • Laparoscopy / methods
  • Robotic Surgical Procedures / education*
  • Robotic Surgical Procedures / methods
  • Simulation Training / methods*
  • Surgeons / education*
  • Surveys and Questionnaires
  • Task Performance and Analysis*
  • Virtual Reality*