Artificial intelligence (AI) chatbots are becoming a popular source of information, but there are limited data on the quality of the information they provide about urological malignancies. Our objective was to characterize the quality of information and detect misinformation about prostate, bladder, kidney, and testicular cancers from four AI chatbots: ChatGPT, Perplexity, Chat Sonic, and Microsoft Bing AI. We took the top five search queries related to prostate, bladder, kidney, and testicular cancers according to Google Trends from January 2021 to January 2023 and input them into the AI chatbots. Responses were evaluated for quality, understandability, actionability, misinformation, and readability using published instruments. AI chatbot responses had moderate to high information quality (median DISCERN score 4 out of 5, range 2-5) and lacked misinformation. Understandability was moderate (median Patient Education Material Assessment Tool for Printable Materials [PEMAT-P] understandability 66.7%, range 44.4-90.9%) and actionability was moderate to poor (median PEMAT-P actionability 40%, range 0-40%). The responses were written at a fairly difficult reading level. AI chatbots produce information that is generally accurate and of moderate to high quality in response to the top urological malignancy-related search queries, but the responses lack clear, actionable instructions and exceed the reading level recommended for consumer health information. PATIENT SUMMARY: Artificial intelligence chatbots produce information that is generally accurate and of moderately high quality in response to popular Google searches about urological cancers. However, their responses are fairly difficult to read, are moderately hard to understand, and lack clear instructions for users to act on.
Keywords: Artificial intelligence; Misinformation; Urological malignancies.
Copyright © 2023 European Association of Urology. Published by Elsevier B.V. All rights reserved.