Objective: This study aimed to evaluate the readability and presentation suitability of ChatGPT's responses to common patient questions, as well as its potential to improve readability on request. Methods: We initially analyzed 30 ChatGPT responses related to knee osteoarthritis (OA), generated on March 20, 2023, using readability and presentation-suitability metrics. We then assessed the impact of providing ChatGPT with detailed versus simplified instructions for the same responses, focusing on readability improvement. Results: The readability scores of the knee OA responses significantly exceeded the recommended sixth-grade reading level (p < .001). While the presentation of information was rated as "adequate," the content lacked high-quality, reliable details. After the intervention, the readability of the knee OA responses improved slightly; however, there was no significant difference in readability between the detailed- and simplified-instruction groups. Conclusions: Although ChatGPT provides informative responses, they are often difficult to read and of insufficient quality. Its current capabilities do not effectively simplify medical information for the general public, and further technological advances are needed to improve its user-friendliness and practical utility.
Keywords: ChatGPT; artificial intelligence; conversational agent; online medical information; readability.
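The abstract does not name the readability metric used, but grade-level comparisons of this kind are commonly made with the Flesch-Kincaid Grade Level formula. As a hypothetical illustration (the specific function and counts below are not from the study), the score can be computed from raw word, sentence, and syllable counts:

```python
def fk_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative counts (not study data): a 100-word passage with 5 sentences
# and 150 syllables scores roughly a 10th-grade reading level, well above
# the recommended sixth-grade threshold.
print(round(fk_grade(100, 5, 150), 2))  # → 9.91
```

A score of 6.0 or below would correspond to the recommended sixth-grade reading level against which the study's responses were judged.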