Background: The Personalized Advantage Index (PAI) shows promise as a method for identifying the most effective treatment for individual patients. Previous studies have demonstrated its utility in retrospective evaluations across various settings. In this study, we examined how different methodological choices in the predictive modeling underlying the PAI affect its evaluation.
Methods: Our approach involved a two-step procedure. First, we reviewed prior studies utilizing the PAI, evaluating each study with the Prediction model Risk Of Bias Assessment Tool (PROBAST). We specifically assessed whether the studies adhered to two standards of predictive modeling: refraining from leave-one-out cross-validation (LOO CV) and preventing data leakage. Second, we examined the impact of deviating from these standards in real data, applying both a traditional approach that violates these standards and an advanced approach that implements them to two large-scale datasets, PANIC-net (n = 261) and Protect-AD (n = 614).
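To make the two standards concrete, the following is a minimal sketch of a leakage-safe PAI estimation loop in Python with scikit-learn; all variable names, model choices, and the sign convention are illustrative assumptions, not the authors' actual script. It replaces LOO CV with k-fold CV and refits all preprocessing inside each training fold, so no information from held-out patients leaks into model fitting.

```python
# Hypothetical sketch of leakage-safe PAI estimation (illustrative data and
# model choices; the authors' published script may differ).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))             # baseline predictors (simulated)
treatment = rng.integers(0, 2, size=n)  # received treatment arm (0 or 1)
y = rng.normal(size=n)                  # post-treatment symptom severity

pai = np.empty(n)
# k-fold CV instead of leave-one-out; preprocessing lives inside the Pipeline
# and is therefore refit on each training fold only (no data leakage).
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    preds = {}
    for arm in (0, 1):
        model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
        arm_idx = train[treatment[train] == arm]  # training patients in this arm
        model.fit(X[arm_idx], y[arm_idx])
        preds[arm] = model.predict(X[test])       # predicted outcome under this arm
    # PAI = predicted outcome under the received arm minus the predicted
    # outcome under the alternative arm (sign convention is an assumption;
    # negative values favor the received arm when lower scores mean better).
    received = treatment[test]
    factual = np.where(received == 1, preds[1], preds[0])
    counterfactual = np.where(received == 1, preds[0], preds[1])
    pai[test] = factual - counterfactual
```

Wrapping the scaler and the regressor in a single Pipeline is what prevents leakage here: the scaler's means and variances are estimated from the training fold alone, never from the patients being predicted.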
Results: The PROBAST rating revealed a substantial risk of bias across studies, primarily due to inappropriate methodological choices. Most studies did not adhere to the examined prediction modeling standards: they employed LOO CV and allowed data leakage. The comparison between the traditional and the advanced approach revealed that ignoring these standards can lead to a systematic overestimation of the PAI's utility.
Conclusion: Our study cautions that violating predictive modeling standards may strongly influence the evaluation of the PAI's utility, potentially leading to false-positive results. To support an unbiased evaluation, which is crucial for potential clinical application, we provide a low-bias, openly accessible, and meticulously annotated script implementing the PAI.
Keywords: anxiety disorders; cognitive behavioral therapy; machine-learning; personalized advantage index; precision medicine; precision psychotherapy.