It is well known that maximizing the maximum LOD score over multiple parameter values or models (i.e., the method of mod scores, or MMLS) will inflate type I error, compared with assuming only one parameter value/model in the linkage analysis. On the other hand, a mod score often has greater power to detect linkage than does a LOD score (Z) calculated under an incorrect genetic model. Therefore, it is of interest to determine the actual magnitude of type I error in realistic genetic situations. Simulated data sets with no linkage were generated under three dominant and three recessive single-locus models, with reduced penetrance (f = .8, .5, and .2). Data sets were analyzed for linkage by (1) maximizing over penetrance only, (2) maximizing over "dominance model" (i.e., dominant versus recessive), and (3) maximizing over both penetrance and dominance model simultaneously. In (1), the resultant significance levels were approximately doubled compared with the baseline values that would have applied had one not maximized over penetrance (i.e., compared with a one-sided chi2(1)). In (2), significance levels were increased somewhat less, and in (3) they were increased approximately two- to threefold (but never more than fourfold) relative to the one-sided chi2(1). This means that, to maintain a given test size alpha, an investigator would need to increase the Z used as a test criterion by approximately 0.30 LOD units for analyses as in (1) or (2) and by approximately 0.60 LOD units for analyses as in (3). These guidelines, which are valid up to approximately Z = 3.0, are conservative for (1) and are very conservative for (2) and (3). By quantifying the increase in significance level (or, correspondingly, the required increase in Z), our findings will enable users to rationally weigh the advantages and disadvantages of mod scores.
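To make the stated correspondence between inflated significance levels and the LOD-criterion adjustment concrete, the following is a minimal back-of-envelope sketch (not part of the study's simulations). It assumes the standard asymptotic relation in which a LOD score Z corresponds to a one-sided chi2(1) statistic 2 ln(10) Z, so the nominal pointwise p-value is p(Z) = 0.5 P(chi2(1) >= 2 ln(10) Z); the function name lod_pvalue and the example Z values are illustrative choices, not from the paper. Because 10^0.30 is roughly 2 and 10^0.60 is roughly 4, raising the criterion by about 0.30 LOD units approximately halves p (offsetting a twofold inflation), and about 0.60 LOD units approximately quarters it.

```python
# Illustrative sketch, assuming the standard asymptotic null for a LOD score:
# p(Z) = 0.5 * P(chi2_1 >= 2*ln(10)*Z)  (one-sided chi2 with 1 df).
from math import log

from scipy.stats import chi2


def lod_pvalue(z: float) -> float:
    """Nominal one-sided chi2(1) p-value corresponding to a LOD score z."""
    return 0.5 * chi2.sf(2 * log(10) * z, df=1)


for z in (2.0, 3.0):
    print(f"Z = {z:.2f}: p ~ {lod_pvalue(z):.2e}")
    print(f"Z = {z + 0.30:.2f}: p ~ {lod_pvalue(z + 0.30):.2e}  (~ half of p at Z = {z:.1f})")
    print(f"Z = {z + 0.60:.2f}: p ~ {lod_pvalue(z + 0.60):.2e}  (~ quarter of p at Z = {z:.1f})")
```

This back-of-envelope check only motivates why the corrections take the form of fixed LOD-unit increments; the magnitudes of the actual inflation under maximization over penetrance and dominance model are those reported from the simulations above.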