Functional magnetic resonance imaging (fMRI) data are often analyzed with the general linear model, using a hypothesized neural model convolved with a hemodynamic response function. Mismatches between this hemodynamic model and the data can be induced by spatially varying delays or slice-timing differences. It is common practice to desensitize the analysis to such delays by incorporating the hemodynamic model plus its temporal derivative. The rationale often given is that additional variance will be captured and regressed out of the data. Though this is true, it ignores the potential for amplitude bias induced by small model mismatches due to, for example, variable hemodynamic delays, and it does not help "random effects" analyses, which typically do not account for first-level variance at all. The amplitude bias arises from using only the nonderivative portion of the model in the final test for significant amplitudes. We propose instead testing an amplitude value that is a function of both the nonderivative and the derivative terms of the model. Using simulations, we show that the proposed amplitude test does not suffer from delay-induced bias and that a model incorporating temporal derivatives provides a more natural test for amplitude differences. The proposed test is applied in a random-effects analysis of 100 subjects, where it reveals increased amplitudes in areas consistent with the task, with the largest increases in regions with greater hemodynamic delays.
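
One common way to form such a combined amplitude estimate, given here as a sketch under the assumption that $\beta_1$ and $\beta_2$ denote the fitted GLM coefficients of the hemodynamic regressor and its (appropriately scaled) temporal derivative, is the sign-corrected root sum of squares:
\[
\hat{A} = \operatorname{sign}(\beta_1)\,\sqrt{\beta_1^{2} + \beta_2^{2}} .
\]
This reduces to $\beta_1$ when there is no delay mismatch ($\beta_2 = 0$) and remains approximately insensitive to small hemodynamic delays, since a small temporal shift mainly transfers response energy from the nonderivative term into the derivative term.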