Importance: Effect sizes and confidence intervals (CIs) are critical for the interpretation of the results for any outcome of interest.
Objective: To evaluate the frequency of reporting effect sizes and CIs in the results of analytical studies.
Design, setting, and participants: Descriptive review of analytical studies published from January 2012 to December 2015 in JAMA Otolaryngology-Head & Neck Surgery.
Methods: A random sample of 121 articles was reviewed; descriptive studies were excluded from the analysis. Seven independent reviewers participated in the evaluation, with 2 reviewers assigned per article. The review process was standardized for each article: the Methods and Results sections were reviewed for the outcomes of interest, and descriptive statistics were calculated and reported for each outcome.
Main outcomes and measures: Primary outcomes of interest included the presence of an effect size and its associated CI. Secondary outcomes of interest included a priori descriptions of statistical methodology, power analysis, and expected effect size.
Results: There were 107 articles included for analysis. The majority of the articles were retrospective cohort studies (n = 36 [36%]), followed by cross-sectional studies (n = 18 [17%]). A total of 58 articles (55%) reported an effect size for an outcome of interest. The most common effect size was difference of means, followed by odds ratio and correlation coefficient, reported 17 (16%), 15 (13%), and 12 (11%) times, respectively. Confidence intervals were reported with 29 of these effect sizes (27%), and 9 articles (8%) included an interpretation of the CI. A description of the statistical methodology was provided in 97 articles (91%), whereas 5 (5%) provided an a priori power analysis and 8 (7%) provided a description of the expected effect size.
Conclusions and relevance: Improving results reporting is necessary to enhance the reader's ability to interpret the results of any given study. This can be achieved by increasing the reporting of effect sizes and CIs rather than relying on P values alone to convey both statistical significance and clinical importance.
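As a purely illustrative aside (not drawn from the reviewed articles), the sketch below shows the kind of reporting the conclusion advocates: an effect size expressed as a difference of means together with its 95% CI, computed here with a Welch t interval on hypothetical data.

```python
# Illustrative sketch with hypothetical data: report an effect size
# (difference of means) with its 95% CI rather than a bare P value.
import numpy as np
from scipy import stats

treated = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.9])  # hypothetical outcome scores
control = np.array([10.4, 11.1, 12.0, 10.8, 11.5, 10.9])

diff = treated.mean() - control.mean()                     # effect size: difference of means
var_t = treated.var(ddof=1) / len(treated)
var_c = control.var(ddof=1) / len(control)
se = np.sqrt(var_t + var_c)                                # standard error of the difference

# Welch-Satterthwaite degrees of freedom (unequal variances)
df = (var_t + var_c) ** 2 / (
    var_t ** 2 / (len(treated) - 1) + var_c ** 2 / (len(control) - 1)
)

t_crit = stats.t.ppf(0.975, df)                            # two-sided 95% critical value
lo, hi = diff - t_crit * se, diff + t_crit * se
print(f"Mean difference = {diff:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
```

Reporting the interval alongside the point estimate lets a reader judge both the plausible range of the effect and whether that range is clinically meaningful, which a P value alone cannot convey.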