At their best, systematic reviews should provide the least biased summaries of the effects of healthcare interventions. However, authors can introduce bias, both intended and unintended, into systematic reviews. Results presented as odds ratios are often misinterpreted by readers as relative risks, so the effect of the intervention is overestimated. Authors may analyse trials separately after mistaking differences in baseline risk for differences in the effect of an intervention, while genuine differences in effect between trials that have been pooled may go undetected. In this article I discuss how a systematic review should work and how it can go wrong.
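
To see why reading an odds ratio as if it were a relative risk exaggerates a treatment effect, consider a rough numerical sketch (not taken from the article; the trial counts and function names below are hypothetical, chosen only to show that the two measures diverge when the outcome is common):

```python
# A minimal sketch, with hypothetical counts, of how an odds ratio
# overstates the effect if a reader interprets it as a relative risk.

def relative_risk(events_t, n_t, events_c, n_c):
    """Risk of the event in the treated group divided by risk in the control group."""
    return (events_t / n_t) / (events_c / n_c)

def odds_ratio(events_t, n_t, events_c, n_c):
    """Odds of the event in the treated group divided by odds in the control group."""
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    return odds_t / odds_c

# Hypothetical trial: 20/100 events with treatment, 40/100 with control.
rr = relative_risk(20, 100, 40, 100)  # 0.500 -> risk is halved
orr = odds_ratio(20, 100, 40, 100)    # 0.375 -> misread as a relative risk,
                                      #          suggests a 62.5% reduction
print(f"Relative risk: {rr:.3f}, odds ratio: {orr:.3f}")
```

In this hypothetical example the treatment halves the risk (relative risk 0.5), but the odds ratio of 0.375, if misread as a relative risk, suggests a reduction of nearly two thirds. The gap widens as the outcome becomes more common.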