Background: Patient-reported outcomes (PROs) assess health, disease, and treatment from the patient perspective. The large number of PRO questionnaires and lack of standardization in scoring and scaling make it difficult for patients and clinicians to interpret PRO scores for use in practice.
Objectives: We investigated PRO score presentation approaches to promote patient and clinician understanding and use. We addressed (1) individual patients' PRO scores used for monitoring and management (individual-level data) and (2) PRO results from research studies comparing treatment options (group-level data). As in previous research, we conducted the study in a cancer treatment setting.
Methods: We conducted a 3-part mixed-methods study. In Part 1, we conducted in-person semistructured interviews with 50 survivors and 20 clinicians to assess which aspects of current data display formats were helpful or confusing. In Part 2, work groups composed of Part 1 participant volunteers partnered with the research team to develop improved data presentation formats, which were then preliminarily evaluated via in-person interviews with 39 survivors and 40 clinicians. Part 3 tested the formats that emerged from Part 2 using a broad-based online survey of cancer survivors (n = 1256), cancer clinicians (n = 608), and PRO researchers (not cancer specific; n = 747) recruited via email lists of stakeholder groups and snowball sampling, plus in-person interviews with 20 survivors and 25 clinicians. Across Parts 1 through 3, we recruited in-person interviewees from a mid-Atlantic consortium of academic and community health systems. We purposively sampled survivors based on education, cancer type, and clinical setting; we selected clinicians based on specialty and clinical setting. A 9-member Stakeholder Advisory Board informed all aspects of study design, conduct, and reporting.
Results: The Part 1 findings supported presenting line graphs of scores over time for individual-level data; the group-level findings suggested that clinicians value statistical information (eg, P values, confidence limits) but that patients find this information confusing. Therefore, in Parts 2 and 3, we addressed group-level data presentation to patients separately from clinicians. Part 2 identified formats to test in Part 3. In Part 3, for individual-level data, interpretation accuracy and clarity ratings were better for line graphs scaled so that higher scores always indicate better outcomes than for line graphs in which higher scores indicate “more” of the outcome (better for function, worse for symptoms). Clarity ratings and overall preferences supported adding a threshold line to flag possibly concerning scores. For presentation of group-level data to patients, interpretation accuracy, clarity ratings, and overall preferences supported presenting proportions as pie charts (vs bar graphs or icon arrays); interpretation accuracy supported “better” line graphs (compared with “more” or normed line graphs), and clarity ratings supported “better” over “more” line graphs. For presentation of group-level data to clinicians, interpretation accuracy and clarity did not differ significantly between pie charts and bar graphs; interpretation accuracy and clarity ratings supported “better” over normed line graphs (with no difference between “better” and “more”). Clarity ratings supported including some indication of statistically significant differences between groups (eg, an asterisk).
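To make the favored individual-level format concrete, the sketch below plots PRO scores over time as a line graph with a dashed threshold line flagging possibly concerning scores. This is a minimal illustration only, assuming Python with matplotlib; the domain name, visit labels, score values, and the 50-point cutoff are invented for the example and are not data or thresholds from the study.

```python
import matplotlib.pyplot as plt

# Hypothetical individual-level PRO scores across four visits, rescaled so
# that higher always means a better outcome (the "better" scaling that
# performed best in Part 3); values are illustrative, not study data.
visits = ["Jan", "Apr", "Jul", "Oct"]
scores = [72, 65, 48, 55]  # hypothetical 0-100 scores, higher = better

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(visits, scores, marker="o", label="Physical function (hypothetical)")

# Dashed threshold line marking possibly concerning scores, per the Part 3
# clarity ratings and preferences; the 50-point cutoff is an assumption.
ax.axhline(50, linestyle="--", color="red",
           label="Possibly concerning below this line")

ax.set_ylim(0, 100)
ax.set_xlabel("Clinic visit")
ax.set_ylabel("Score (higher = better)")
ax.set_title("PRO scores over time")
ax.legend(loc="lower left")
plt.tight_layout()
plt.show()
```

With this scaling, a reader can apply one rule to every domain (up is good, and points below the dashed line may warrant attention), which is the interpretive simplification the Part 3 results favored.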
Conclusions: Interpretation accuracy, clarity ratings, and preferences differ among PRO presentation formats. We will use these results in a modified Delphi consensus process to develop recommendations for PRO data presentation.
Copyright © 2018. Johns Hopkins University. All Rights Reserved.