Calibration is a key property that any predictive model must satisfy. Overcoming the limitations of many earlier approaches, a recent study proposed the calibration belt, a graphical tool that identifies the ranges of probability where a model for dichotomous outcomes miscalibrates. In this approach, the relation between the logit of the probabilities predicted by the model and the logit of the event rates observed in a sample is described by a polynomial function, whose coefficients are fitted and whose degree is selected through a series of likelihood-ratio tests. We propose a statistical test associated with the calibration belt and show how the algorithm that selects the polynomial degree affects the distribution of the test statistic. We derive the exact distribution of this statistic and confirm its validity through numerical simulation. Building on this distribution, we then reappraise the procedure for constructing the calibration belt and illustrate its application in a medical context.
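To make the degree-selection step concrete, the following is a minimal sketch (not the authors' implementation) of fitting polynomials of increasing degree in the logit of the predicted probabilities and choosing the degree by forward likelihood-ratio tests. The function names (`fit_poly_logit`, `select_degree`), the maximum degree, and the significance level are illustrative assumptions; the published procedure differs in details such as the starting degree and the exact stopping rule.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression model."""
    eta = X @ beta
    # log(1 + exp(eta)) computed stably via logaddexp
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def fit_poly_logit(g, y, degree):
    """Fit y ~ logistic(b0 + b1*g + ... + bm*g^m); return max log-likelihood.

    g is the logit of the model's predicted probabilities.
    """
    X = np.vander(g, degree + 1, increasing=True)  # columns: 1, g, ..., g^degree
    res = minimize(neg_loglik, np.zeros(degree + 1), args=(X, y), method="BFGS")
    return -res.fun

def select_degree(p, y, max_degree=4, alpha=0.05):
    """Forward degree selection via likelihood-ratio tests (illustrative).

    Starting from degree 1, increase the polynomial degree as long as the
    one-degree-of-freedom LR test rejects the smaller model at level alpha.
    """
    g = np.log(p / (1.0 - p))          # logit of predicted probabilities
    degree, ll = 1, fit_poly_logit(g, y, 1)
    for m in range(2, max_degree + 1):
        ll_new = fit_poly_logit(g, y, m)
        lr_stat = 2.0 * (ll_new - ll)  # LR statistic, ~ chi2(1) under the null
        if chi2.sf(lr_stat, df=1) < alpha:
            degree, ll = m, ll_new     # extra term is significant: keep it
        else:
            break                      # stop at the first non-significant term
    return degree
```

As the abstract notes, the fact that the degree is itself chosen by a sequence of such tests changes the null distribution of any statistic computed from the selected model, which is precisely the issue the exact distribution derived in the paper addresses.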
Keywords: calibration test; dichotomous outcome models; goodness-of-fit; logistic regression models.
Copyright © 2014 John Wiley & Sons, Ltd.