In case-control studies of screening to prevent cancer mortality, exposure is ideally defined as screening that takes place within the period prior to diagnosis during which the cancer is potentially detectable by the screening modality under study. This interval has been called the detectable preclinical period (DPP). Misspecifying the duration of the DPP can bias the results of such studies. This article quantifies the impact of incorrectly estimating the duration of the DPP, or of using the correct average DPP but failing to account for its variability. The authors developed a computer simulation model of disease incidence and mortality with and without screening. They then selected cases and controls from the generated population and compared their screening histories. The results indicate that underestimating the duration of the DPP generally leads to greater bias than does overestimating it, but in both instances the extent of the bias is modified by the length of the DPP relative to the average interscreening interval. In practice, to avoid a falsely low estimate of the effectiveness of a screening test in reducing mortality, the authors recommend that a high percentile of the DPP distribution be used when analyzing the results of case-control studies of screening.
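The sketch below illustrates the kind of simulation described above; it is not the authors' model. It assumes hypothetical parameter values, an exponentially distributed true DPP, screening at a fixed interval with a random per-person phase, and a simplified index date for cases (the clinical diagnosis date), and it then estimates the screening odds ratio under several assumed DPP durations.

```python
"""Minimal sketch (illustrative only): simulate a screened population,
sample cases (cancer deaths) and controls, and compute the screening
odds ratio under different assumed DPP durations."""
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative (assumed) parameters ---------------------------------
N = 200_000                    # population size
FOLLOW_UP = 20.0               # years of follow-up
SCREEN_INTERVAL = 2.0          # years between screens
CANCER_RISK = 0.02             # probability of developing the cancer
TRUE_MEAN_DPP = 3.0            # mean of the true (variable) DPP, years
FATALITY_CLINICAL = 0.5        # death probability if diagnosed clinically
FATALITY_SCREENED = 0.2        # death probability if screen-detected

# --- generate the population -------------------------------------------
has_cancer = rng.random(N) < CANCER_RISK
t_clinical = rng.uniform(5.0, FOLLOW_UP, N)    # symptomatic diagnosis time
true_dpp = rng.exponential(TRUE_MEAN_DPP, N)   # individual DPP durations
phase = rng.uniform(0.0, SCREEN_INTERVAL, N)   # each person's screening phase


def n_screens(start, stop, phase, interval=SCREEN_INTERVAL):
    """Number of scheduled screens (at phase + k*interval, k >= 0) in [start, stop)."""
    k_min = np.maximum(np.ceil((start - phase) / interval), 0)
    k_max = np.ceil((stop - phase) / interval) - 1
    return np.maximum(k_max - k_min + 1, 0)


# A cancer is screen-detected if any screen falls within its true DPP,
# i.e. within [t_clinical - true_dpp, t_clinical).
detected = has_cancer & (n_screens(t_clinical - true_dpp, t_clinical, phase) > 0)

# Screen detection lowers case fatality: a built-in true benefit of screening.
fatality = np.where(detected, FATALITY_SCREENED, FATALITY_CLINICAL)
died = has_cancer & (rng.random(N) < fatality)


def odds_ratio(assumed_dpp, controls_per_case=2):
    """Exposure = at least one screen within the assumed DPP before the index date."""
    case_idx = np.flatnonzero(died)
    ctrl_idx = rng.choice(np.flatnonzero(~has_cancer),
                          size=controls_per_case * case_idx.size)
    # Each control borrows the index (diagnosis) date of a case.
    ctrl_ref = np.repeat(t_clinical[case_idx], controls_per_case)

    exp_case = n_screens(t_clinical[case_idx] - assumed_dpp,
                         t_clinical[case_idx], phase[case_idx]) > 0
    exp_ctrl = n_screens(ctrl_ref - assumed_dpp, ctrl_ref, phase[ctrl_idx]) > 0

    a, b = exp_case.sum(), (~exp_case).sum()
    c, d = exp_ctrl.sum(), (~exp_ctrl).sum()
    return (a * d) / (b * c)


# Compare an underestimated DPP, the true mean, and a high percentile.
for assumed in (1.0, TRUE_MEAN_DPP, np.quantile(true_dpp, 0.9)):
    print(f"assumed DPP = {assumed:4.1f} y  ->  OR = {odds_ratio(assumed):.2f}")
```

Under these assumptions, an odds ratio below 1 reflects the protective effect built into the simulation, and varying the assumed DPP shows how misclassification of exposure moves the estimate; the specific values depend entirely on the illustrative parameters chosen here.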