It is well documented that studies reporting statistically significant results are more likely to be published than studies reporting nonsignificant results--a phenomenon called publication bias. Publication bias in meta-analytic reviews should be identified and, when possible, reduced. Ferguson and Brannick (2012) argued that including unpublished studies is ineffective, and possibly counterproductive, as a means of reducing publication bias in meta-analyses. We show how idiosyncratic choices on the part of Ferguson and Brannick led them to an erroneous conclusion. We demonstrate that their key finding--that publication bias was more likely when unpublished studies were included--may be an artifact of the way they assessed publication bias. We also point out how their lack of transparency about key choices, and the absence of information about critical features of their sample and procedures, impaired readers' ability to assess the validity of their claims. Furthermore, we demonstrate that many of their claims lack empirical support, even though these claims could have been tested empirically, and may therefore be misleading. In claiming that addressing publication bias introduces subjectivity and bias into meta-analysis, Ferguson and Brannick ignored a large body of evidence showing that including unpublished studies that meet a meta-analysis's inclusion criteria decreases, rather than increases, publication bias. Rather than excluding unpublished studies, we recommend that meta-analysts code study characteristics related to methodological quality (e.g., experimental vs. nonexperimental design) and test whether these characteristics influence the meta-analytic results.
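
As a concrete illustration of this recommendation, the sketch below shows one common way to test a coded study characteristic as a moderator: a fixed-effect meta-regression in which effect sizes are regressed on the moderator, with each study weighted by its precision. The effect sizes, variances, and moderator coding here are hypothetical, and the weighted-least-squares shortcut is only an approximation of what dedicated meta-analysis software (e.g., the metafor package in R) computes.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (e.g., Fisher's z) and sampling variances.
effect = np.array([0.42, 0.35, 0.51, 0.18, 0.22, 0.15, 0.30, 0.12])
variance = np.array([0.010, 0.020, 0.015, 0.008, 0.012, 0.025, 0.018, 0.009])
# Coded study characteristic: 1 = experimental design, 0 = nonexperimental.
experimental = np.array([1, 1, 1, 0, 0, 0, 1, 0])

# Fixed-effect meta-regression: regress effect sizes on the moderator,
# weighting each study by its precision (inverse sampling variance).
X = sm.add_constant(experimental)  # column of ones + moderator
fit = sm.WLS(effect, X, weights=1.0 / variance).fit()

print(fit.params[0])   # estimated mean effect for nonexperimental studies
print(fit.params[1])   # difference in mean effect for experimental studies
print(fit.pvalues[1])  # test of whether study design moderates the effect
```

If the moderator is reliably associated with effect sizes, the meta-analyst can report results separately by methodological quality rather than discarding studies at the outset.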