It has recently been asserted that the nested case-control study design, in which case-control sets are sampled from cohort risk sets, can introduce bias ("study design bias") when exposures are lagged. This claim rests on a theoretical argument and an "empirical evaluation" argument. We examined both arguments and found them to be incorrect. We describe an appropriate empirical evaluation method for exploring the performance of nested case-control study designs and analysis methods within an existing cohort. This approach relies on simulating case-control outcomes from the risk sets of the cohort in which the case-control study is to be performed. Because it is based on the underlying cohort structure, the empirical evaluation provides an assessment tailored to the specific characteristics of the study under consideration. The methods are illustrated using samples from the Colorado Plateau uranium miners cohort.
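
To make the simulation step concrete, the following is a minimal sketch, not the authors' code, of how case-control outcomes might be simulated from cohort risk sets under an assumed excess-relative-risk model with a lagged exposure. The toy cohort, the variable names (entry, exit_, exp_rate), the model form RR = 1 + beta x lagged cumulative exposure, the 5-year lag, and the 1:3 control sampling are all illustrative assumptions, not details taken from the study.

```python
"""Illustrative sketch: simulate case-control outcomes from cohort risk sets
under an assumed excess-relative-risk model with a lagged exposure."""
import numpy as np

rng = np.random.default_rng(2024)

# --- Toy cohort: age at entry, age at exit, and a constant exposure rate ---
n = 500
entry = rng.uniform(20, 40, n)            # age at start of follow-up
exit_ = entry + rng.uniform(5, 40, n)     # age at end of follow-up
exp_rate = rng.gamma(2.0, 1.0, n)         # exposure accrued per year of follow-up


def lagged_cum_exposure(i, age, lag=5.0):
    """Cumulative exposure of subject i up to (age - lag),
    accrued at a constant rate between entry and exit."""
    exposed_years = np.clip(age - lag, entry[i], exit_[i]) - entry[i]
    return exp_rate[i] * exposed_years


# Ages at which risk sets are formed.  In an application these would be the
# observed case failure ages in the actual cohort; here they are simulated.
event_ages = np.sort(rng.uniform(40, 75, 60))

beta_true = 0.5    # assumed excess relative risk per unit lagged exposure
n_controls = 3     # controls sampled per case (1:3 matching)

matched_sets = []
for age in event_ages:
    # Risk set: subjects under follow-up at this age.
    at_risk = np.where((entry <= age) & (age < exit_))[0]
    if len(at_risk) <= n_controls:
        continue
    # Relative rate of each at-risk subject under the assumed model.
    rr = np.array([1.0 + beta_true * lagged_cum_exposure(i, age) for i in at_risk])
    # Simulate which member of the risk set becomes the case, with probability
    # proportional to its model-specified rate.
    case = rng.choice(at_risk, p=rr / rr.sum())
    # Sample controls at random from the remainder of the risk set.
    pool = at_risk[at_risk != case]
    controls = rng.choice(pool, size=n_controls, replace=False)
    matched_sets.append({"age": age, "case": int(case),
                         "controls": controls.tolist()})

print(f"{len(matched_sets)} simulated case-control sets")
```

In an application such as the uranium miners example, the event ages and exposure histories would come from the actual cohort, and the simulated matched sets (each case and its sampled controls, with lagged exposures evaluated at the matching age) would be analyzed, for example by conditional logistic regression, over repeated simulations to check whether the design and analysis method recover the known value of beta_true.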