Background: The process of systematically reviewing research evidence is useful for collecting, assessing and summarizing results from multiple studies designed to answer the same clinical question. The term "systematic" implies that the process, besides being organized and complete, is transparent and fully reported, allowing other independent researchers to replicate the results and therefore reach the same conclusions. Hundreds of new systematic reviews are indexed every year, and this growing number increases the likelihood of finding multiple, discordant reviews on the same question.
Objectives: To clarify the impact of multiple and discordant systematic reviews, we designed a research program aimed at determining: (a) how often different systematic reviews address the same subject; (b) how often different systematic reviews on the same topic yield different results or conclusions; (c) which methodological or interpretative characteristics can explain the differences in results or conclusions.
Methods: This paper outlines the methods used to explore the frequency and the causes of discordance among multiple systematic reviews on the same topic. These methods were then applied to selected medical fields as case studies.
Conclusion: This aim is particularly relevant for both clinicians and policy makers. Judgments about evidence and recommendations in health care are complex and often rely on discordant results, especially when no empirical guidance is available.