Background: There are no valid and reliable tools for assessing competency in advanced laparoscopic surgery at a specialist level. Observational clinical human reliability analysis (OCHRA) may have the characteristics required of such a tool. The aim of this study was to evaluate the construct and concurrent validity of OCHRA for competency assessment at a specialist level.
Methods: Thirty-two video-recorded laparoscopic colorectal resections, performed by experts and delegates of the National Training Program in England, were evaluated. Each video was analysed using OCHRA by identifying errors enacted during surgery. The numbers of tissue-handling, instrument-misuse, and consequential errors were recorded using video-rating software. The times spent dissecting (D) and exposing (E) tissues were also measured and expressed as a ratio (D/E ratio). In addition, two independent expert surgeons globally assessed the competency demonstrated in each video (pass vs. fail). Logistic regression was used to predict the pass/fail outcome from these measures.
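As an illustration only, the sketch below shows how a logistic regression of this kind could be fitted, with the three error counts and the D/E ratio as predictors of the pass/fail judgement. The column names, example values, and choice of library (scikit-learn) are assumptions for demonstration; the abstract does not state which software or model settings the study used.

```python
# Minimal sketch (not the study's actual analysis): fit a logistic regression
# predicting pass/fail from four OCHRA-derived measures per video.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical per-video OCHRA measures for delegates (passed: 1 = pass, 0 = fail).
data = pd.DataFrame({
    "tissue_handling_errors":   [5, 12, 7, 15, 6, 11],
    "instrument_misuse_errors": [3, 6, 4, 5, 2, 7],
    "consequential_errors":     [2, 8, 4, 9, 3, 6],
    "d_e_ratio":                [0.9, 0.4, 0.7, 0.3, 0.8, 0.5],
    "passed":                   [1, 0, 1, 0, 1, 0],
})

X = data.drop(columns="passed")   # the four independent variables
y = data["passed"]                # global expert judgement (pass vs. fail)

# Fit the classifier; the predicted probabilities can then feed a ROC analysis.
model = LogisticRegression().fit(X, y)
pass_probability = model.predict_proba(X)[:, 1]
```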
Results: A total of 399 errors were identified. Total error counts differed significantly between the expert, pass, and fail groups (median counts for experts = 4, pass = 10, fail = 17; P < 0.001). When the pass and fail groups were compared excluding experts, a difference was found for tissue-handling errors (7 vs. 12; P = 0.005), but not for consequential errors (4 vs. 7; P = 0.059) or instrument-misuse errors (4 vs. 5; P = 0.320). The D/E ratio was significantly lower for delegates than for experts (0.6 vs. 1.0; P = 0.001). When all four independent variables were used to predict which delegates passed or failed, the area under the receiver operating characteristic curve was 0.867, sensitivity was 71.4%, and specificity was 90.9%.
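For readers unfamiliar with these metrics, the sketch below shows how the area under the ROC curve, sensitivity, and specificity could be computed from model-predicted probabilities. The data, the 0.5 probability cut-off, and the choice of "pass" as the positive class are assumptions for illustration and do not reproduce the study's figures.

```python
# Minimal sketch (hypothetical data, not the study's results): AUC plus
# sensitivity and specificity at an assumed 0.5 probability cut-off.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = [1, 0, 1, 0, 1, 0, 1, 0]                     # observed pass (1) / fail (0)
y_prob = [0.82, 0.31, 0.64, 0.45, 0.91, 0.22, 0.38, 0.15]  # predicted probabilities

auc = roc_auc_score(y_true, y_prob)

y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of actual passes correctly identified
specificity = tn / (tn + fp)   # proportion of actual fails correctly identified

print(f"AUC={auc:.3f}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```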
Conclusion: OCHRA is a valid tool for assessing competency at a specialist level in advanced laparoscopic surgery. It has the potential to be used for recertification and revalidation of specialists.