Objective: To assess how large language models (LLMs) can support the generation of clearer and more personalized medication directions to enhance e-prescribing.
Patients and methods: We established patient-centered guidelines for adequate, acceptable, and personalized directions to enhance e-prescription. A dataset of 104 outpatient scenarios, covering a range of medications, administration routes, and patient conditions, was developed following the Brazilian national e-prescribing standard. Three prompts were submitted to a closed-source LLM: the first consisted of a generic command, the second was calibrated for content enhancement and personalization, and the third requested bias mitigation. The third prompt was also submitted to an open-source LLM. Outputs were assessed using automated metrics and human evaluation. We conducted the study between March 1, 2024, and September 10, 2024.
Results: Adequacy scores for the closed-source LLM's output showed that the third prompt outperformed the first and second ones. Full and partial acceptability was achieved in 94.3% of the texts generated with the third prompt. Personalization was rated highly, especially with the second and third prompts. The two LLMs showed similar adequacy results. Lack of scientific evidence and factual errors were infrequent and not associated with any particular prompt or LLM. The frequency of hallucinations differed between the two LLMs and concerned prescriptions issued upon symptom manifestation and medications requiring dosage adjustment or involving intermittent use. Gender bias was found in the closed-source LLM's output for the first and second prompts, whereas the third prompt yielded bias-free output. The open-source LLM's output was bias-free.
Conclusion: This study demonstrates the potential of LLMs to support the generation of prescription directions and to improve communication between health professionals and patients within the e-prescribing system.