Background: Due to the rapid evolution of generative artificial intelligence (AI) and its implications for patient education, there is a pressing need to evaluate AI responses to patients' medical questions. This study assessed the quality and usability of responses from two prominent AI platforms to common patient-centric hand and wrist surgery questions.
Methods: Twelve commonly encountered hand and wrist surgery patient questions were input twice into both Gemini and ChatGPT, generating 48 responses. Each response underwent a content analysis, followed by assessment of quality and usability with three scoring tools: DISCERN, the Suitability Assessment of Materials (SAM) and the AI Response Metric (AIRM). Statistical analyses compared the features and scores of the outputs when stratified by platform, question type and response order.
Results: Responses earned mean overall scores of 55.7 ('good'), 57.2% ('adequate') and 4.4 for DISCERN, SAM and AIRM, respectively. No responses provided citations. Wrist question responses had significantly higher DISCERN (p < 0.01) and AIRM (p = 0.02) scores than hand question responses. Second responses had significantly higher AIRM scores (p < 0.01) but similar DISCERN (p = 0.76) and SAM (p = 0.11) scores compared with first responses. Gemini's DISCERN (p = 0.04) and SAM (p < 0.01) scores were significantly higher than ChatGPT's corresponding metrics.
Conclusions: Although responses were generally 'good' and 'adequate', quality varied with the platform used, the type of question and the response order. Given the diversity of publicly available AI platforms, it is important to understand the quality and usability of the information patients may encounter when searching for answers to common hand and wrist surgery questions.
Level of Evidence: Level IV (Therapeutic).
Keywords: Artificial intelligence; Hand; Patient; Quality; Responses.