Objective: The aim was to develop, test, and apply an index to assess the completeness of reporting in a cohort of conference abstracts of observational studies.
Study design and setting: Using rigorous methods, we reduced 245 items generated by literature review to 48 candidate items. In a random sample of 30 conference abstracts of rituximab for non-Hodgkin lymphoma, we developed an item impact score by combining a survey of abstract stakeholders with the prevalence of each of the 48 items. We retained 14 independent items representing completeness of reporting, the CORE-14. Two raters determined the reliability of the instrument. We then applied the CORE-14 to another 78 abstracts to determine the prevalence of each feature.
Results: Our survey response rate was 83.9% (47/56). Interrater reliability of the CORE-14 instrument was 0.56 (95% CI: 0.25, 0.77), which improved to 0.72 (95% CI: 0.49, 0.86) when scores from the two raters were averaged. Applying the CORE-14 to an additional set of 78 abstracts, six items occurred in ≥85% and four items occurred in ≤40% of abstracts.
Conclusion: Opportunities to improve conference abstract reporting exist. This scale could guide future conference abstract submissions and aid individuals who draw on conference abstract data to inform clinical practice, systematic reviews, guidelines, or policy.