Objectives: To propose and test a simple instrument, based on seven study design criteria, for distinguishing effectiveness (pragmatic) from efficacy (explanatory) trials during the conduct of systematic reviews.
Design: Currently, no validated definition of effectiveness studies exists. We therefore asked the directors of 12 Evidence-based Practice Centers (EPCs) to select six studies each: four that they considered examples of effectiveness trials and two that they considered efficacy studies. We then applied our proposed criteria to the selected studies, treating them as if they had been identified by a gold standard, to test the construct validity of the instrument.
Results: Because our rationale was to identify effectiveness studies reliably with minimal false positives (i.e., with high specificity), a cutoff of six criteria produced the most desirable balance between sensitivity and specificity, yielding a specificity of 0.83 and a sensitivity of 0.72.
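To illustrate how such a cutoff operates, the minimal Python sketch below scores hypothetical studies against seven yes/no criteria, labels a study as an effectiveness trial when at least six criteria are met, and computes sensitivity and specificity against reference labels. The study data, reference labels, and function names are illustrative assumptions, not the actual criteria or results of this study.

```python
from typing import Sequence

CUTOFF = 6  # studies meeting at least 6 of the 7 criteria are classed as effectiveness trials


def classify(criteria_met: Sequence[bool], cutoff: int = CUTOFF) -> bool:
    """Return True (effectiveness/pragmatic) if at least `cutoff` criteria are met."""
    return sum(criteria_met) >= cutoff


def sensitivity_specificity(predicted: Sequence[bool], reference: Sequence[bool]) -> tuple[float, float]:
    """Compute sensitivity and specificity of predicted labels against a reference standard.

    True denotes an effectiveness study, False an efficacy study.
    """
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))
    fn = sum((not p) and r for p, r in zip(predicted, reference))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity


# Hypothetical studies scored on the seven criteria (values are illustrative only).
studies = [
    [True, True, True, True, True, True, False],    # 6 of 7 met -> classified as effectiveness
    [True, True, True, True, True, True, True],     # 7 of 7 met -> classified as effectiveness
    [True, False, True, False, True, False, True],  # 4 of 7 met -> classified as efficacy
]
reference_labels = [True, True, False]  # hypothetical gold-standard labels

predicted_labels = [classify(s) for s in studies]
print(sensitivity_specificity(predicted_labels, reference_labels))
```

In practice, the cutoff trades sensitivity against specificity: raising it toward seven criteria reduces false positives at the cost of missing some effectiveness trials, which is why a cutoff of six was preferred here.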
Conclusions: When applied in a standardized manner, our proposed criteria provide a simple and valid tool to distinguish effectiveness from efficacy studies. The applicability of systematic reviews can improve when analysts place greater emphasis on the generalizability of included studies. Clinicians can also use our criteria to assess the external validity of individual studies, given an appropriate population of interest.