How good is ChatGPT at answering patients' questions related to early detection of oral (mouth) cancer?

Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 Aug;138(2):269-278. doi: 10.1016/j.oooo.2024.04.010. Epub 2024 Apr 19.

Abstract

Objectives: To examine the quality, reliability, readability, and usefulness of ChatGPT in promoting oral cancer early detection.

Study design: A total of 108 patient-oriented questions about oral cancer early detection were compiled from expert panels, professional societies, and web-based tools. Questions were categorized into 4 topic domains, and ChatGPT 3.5 was asked each question independently. ChatGPT answers were evaluated for quality, reliability, readability, understandability, actionability, and usefulness. Two experienced reviewers independently assessed each response.

Results: Questions related to clinical appearance constituted 36.1% (n = 39) of the total questions. ChatGPT provided "very useful" responses to the majority of questions (75%; n = 81). The mean Global Quality Score was 4.24 ± 1.3 out of 5. The mean reliability score was 23.17 ± 9.87 out of 25. The mean understandability score was 76.6% ± 25.9%, while the mean actionability score was 47.3% ± 18.9%. The mean FKS reading ease score was 38.4 ± 29.9, while the mean SMOG index readability score was 11.65 ± 8.4. No misleading information was identified among ChatGPT responses.

Conclusion: ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns remain about the readability and actionability of the information it provides.

MeSH terms

  • Comprehension
  • Early Detection of Cancer*
  • Humans
  • Internet
  • Mouth Neoplasms* / diagnosis
  • Patient Education as Topic
  • Reproducibility of Results
  • Surveys and Questionnaires