Introduction: The interpretation and use of qualitative feedback from participants have immense value for program evaluation. Relying solely on quantitative data risks losing the lived patient experience by forcing patients' outcomes to fit our predefined objectives.
Objectives: Using large language models (LLMs), program directors may begin to make expedient use of rich, qualitative feedback.
Methods: This study demonstrates the feasibility of using LLMs to evaluate patient responses (n = 82) to Empowered Relief, a skill-based pain education class. We used a dual-method analytical approach, combining LLM-assisted analysis with a supporting manual thematic review; a sketch of what such an LLM-assisted step might look like follows.
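As a minimal sketch only, the Python snippet below shows one way de-identified free-text responses might be assembled into a single thematic-analysis prompt for ChatGPT. The function name build_thematic_prompt, the prompt wording, and the placeholder responses are illustrative assumptions; the abstract does not specify the study's actual prompting workflow.

```python
# Illustrative sketch (not the study's protocol): build one thematic-analysis
# prompt from a list of de-identified free-text responses, ready to paste into
# ChatGPT or send through an LLM API of choice.

# Placeholder entries only; the real study analyzed 82 patient responses.
responses = [
    "Example de-identified response 1",
    "Example de-identified response 2",
    # ... remaining responses
]

def build_thematic_prompt(texts: list[str]) -> str:
    """Number each response and wrap them in an inductive thematic-analysis request."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(texts))
    return (
        "You are assisting with a qualitative program evaluation. "
        "Perform an inductive thematic analysis of the patient responses below. "
        "Return the major themes, a brief definition of each, and representative quotes.\n\n"
        f"Responses:\n{numbered}"
    )

if __name__ == "__main__":
    print(build_thematic_prompt(responses))
```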
Results: The thematic analysis of qualitative data using ChatGPT yielded 7 major themes: (1) Use of Specific Audiofile; (2) Mindset; (3) Technique; (4) Community and Space; (5) Knowledge; (6) Tools and Approaches; and (7) Self-awareness.
Conclusion: Findings from the LLM-derived analysis provided rich and unexpected information, valuable to the program and to the field of pain psychology, by using patients' own words to guide program evaluation. Program directors may benefit from evaluating treatment outcomes on this broader scale rather than focusing solely on improvements in disability. These insights could only be uncovered with open-ended data, and although additional insights might emerge with the help of a qualitative research team, ChatGPT offered an ergonomic solution.
Keywords: Generative artificial intelligence (GenAI); Large language models (LLM); Pain psychology; Qualitative analysis; Thematic analysis.
Copyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of The International Association for the Study of Pain.