Background: Optimal cutoff values for test results involving continuous variables are often derived in a data-driven way. This approach, however, may lead to overly optimistic measures of diagnostic accuracy. We evaluated the magnitude of the bias in sensitivity and specificity associated with data-driven selection of cutoff values and examined potential solutions to reduce this bias.
Methods: Different sample sizes, distributions, and prevalences were used in a simulation study. We compared data-driven estimates of accuracy based on the Youden index with the true values and calculated the median bias. Three alternative approaches (assuming a specific distribution, leave-one-out, and a smoothed ROC curve) were examined for their ability to reduce this bias.
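To make the procedure concrete, the following is a minimal Python sketch of data-driven cutoff selection via the Youden index under one assumed binormal scenario; the distributions, parameters, and sample size here are illustrative and are not taken from the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def youden_cutoff(neg, pos):
    """Return the cutoff maximizing the Youden index J = sens + spec - 1,
    scanning all observed test values (the data-driven approach)."""
    cand = np.sort(np.concatenate([neg, pos]))
    sens = np.array([(pos >= c).mean() for c in cand])
    spec = np.array([(neg < c).mean() for c in cand])
    best = int(np.argmax(sens + spec - 1))
    return cand[best], sens[best], spec[best]

# Assumed binormal scenario: non-diseased ~ N(0,1), diseased ~ N(2,1),
# prevalence 50%, total sample size 40 (all values illustrative).
n, prev, mu_pos = 40, 0.5, 2.0
pos = rng.normal(mu_pos, 1.0, int(n * prev))
neg = rng.normal(0.0, 1.0, n - int(n * prev))

c, sens_hat, spec_hat = youden_cutoff(neg, pos)
# True accuracy at the chosen cutoff, known from the generating distributions
print(f"cutoff={c:.2f}  apparent sens/spec={sens_hat:.2f}/{spec_hat:.2f}")
print(f"true sens/spec at that cutoff: "
      f"{1 - norm.cdf(c, loc=mu_pos):.2f}/{norm.cdf(c):.2f}")
```

Repeating this over many simulated data sets and taking the median difference between the apparent and true values yields the median bias reported below.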
Results: The magnitude of the bias caused by data-driven optimization of cutoff values was inversely related to sample size. For example, if the true values for sensitivity and specificity are both 84%, the estimates in studies with a sample size of 40 will be approximately 90%; if the sample size increases to 200, the estimates will be approximately 86%. The distribution of the test results had little impact on the amount of bias when sample size was held constant. More robust methods of optimizing cutoff values were less prone to bias, but their performance deteriorated when the underlying assumptions were not met.
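As one example of the alternative approaches, the sketch below illustrates leave-one-out estimation under the same assumed binormal data: the cutoff is re-derived with each observation held out, and that observation is then classified at the refitted cutoff, so accuracy is not evaluated on the same data used to choose the cutoff. The data and parameters are again illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_cutoff(neg, pos):
    """Cutoff maximizing the Youden index over the observed values."""
    cand = np.sort(np.concatenate([neg, pos]))
    j = [(pos >= c).mean() + (neg < c).mean() - 1 for c in cand]
    return cand[int(np.argmax(j))]

# Assumed data, as in the earlier sketch (illustrative values only)
neg = rng.normal(0.0, 1.0, 20)
pos = rng.normal(2.0, 1.0, 20)

# Apparent (resubstitution) accuracy at the data-driven cutoff
c = best_cutoff(neg, pos)
print(f"apparent sens/spec: {(pos >= c).mean():.2f}/{(neg < c).mean():.2f}")

# Leave-one-out: refit the cutoff without each observation, then classify
# that observation at the refitted cutoff; aggregate the held-out calls.
sens_loo = np.mean([x >= best_cutoff(neg, np.delete(pos, i))
                    for i, x in enumerate(pos)])
spec_loo = np.mean([x < best_cutoff(np.delete(neg, i), pos)
                    for i, x in enumerate(neg)])
print(f"leave-one-out sens/spec: {sens_loo:.2f}/{spec_loo:.2f}")
```

The leave-one-out estimates are typically lower than the apparent ones, reflecting the removal of the optimism introduced by optimizing and evaluating on the same data.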
Conclusions: Data-driven selection of the optimal cutoff value can lead to overly optimistic estimates of sensitivity and specificity, especially in small studies. Alternative methods can reduce this bias, but finding robust estimates for cutoff values and accuracy requires considerable sample sizes.