Objective: Letters of reference (LORs) play an important role in postgraduate residency applications. Human-written LORs have been shown to carry implicit gender bias, such as the use of more agentic versus communal words for men and more frequent doubt-raisers and references to appearance and personal life for women. This bias can result in inequitable access to residency opportunities for women. Given the gendered language often unconsciously inserted into human-written LORs, we sought to determine whether LORs generated by artificial intelligence exhibit similar gender bias.
Study design: Observational study.
Setting: Multicenter academic collaboration.
Methods: Prompts describing identical men and women applying for Otolaryngology residency positions were created and provided to ChatGPT to generate LORs. These letters were analyzed using a gender-bias calculator which assesses the proportion of male- versus female-associated words.
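The calculator itself is not reproduced in the abstract; a minimal sketch of this kind of word-list-based analysis, using short hypothetical word lists rather than the calculator's actual lexicon, might look as follows:

```python
# Sketch of a word-list-based gender-bias score (illustrative only).
# The word lists below are short hypothetical examples, not the actual lexicon
# used by the gender-bias calculator described in the study.
import re

MALE_ASSOCIATED = {"ambitious", "confident", "decisive", "independent", "leader"}
FEMALE_ASSOCIATED = {"compassionate", "warm", "nurturing", "supportive", "pleasant"}

def gender_bias_score(letter_text: str) -> dict:
    """Count gendered words and report the percentage that are male-associated."""
    words = re.findall(r"[a-z]+", letter_text.lower())
    male = sum(w in MALE_ASSOCIATED for w in words)
    female = sum(w in FEMALE_ASSOCIATED for w in words)
    total = male + female
    pct_male = 100 * male / total if total else 0.0
    return {"male": male, "female": female, "percent_male_associated": pct_male}

# A letter containing more male-associated than female-associated terms
# yields a percent_male_associated above 50.
print(gender_bias_score("She is a confident, decisive leader and a warm colleague."))
```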
Results: Regardless of gender, school, research, or other activities, all LORs generated by ChatGPT showed a bias toward male-associated words. There was no significant difference in the percentage of male-associated words between letters written for women and those written for men (39.15% vs 37.85%, P = .77). Significant differences in gender bias were found across each of the other discrete variables examined (school, research, and other activities).
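The abstract does not specify which statistical test produced the reported P value; one plausible form of the comparison, sketched here with made-up per-letter percentages, is an independent-samples t-test:

```python
# Illustrative comparison of per-letter male-associated word percentages.
# The values below are placeholders, not the study's data, and the actual
# test used may differ from this sketch.
from scipy import stats

pct_male_words_women = [39.0, 41.2, 37.5, 40.1, 38.9]  # hypothetical letters for women
pct_male_words_men = [38.0, 36.9, 39.4, 37.2, 37.8]    # hypothetical letters for men

t_stat, p_value = stats.ttest_ind(pct_male_words_women, pct_male_words_men)
print(f"t = {t_stat:.2f}, P = {p_value:.2f}")  # P > .05 indicates no significant difference
```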
Conclusion: While all ChatGPT-generated LORs showed a male bias in the language used, there was no difference in gender bias between letters produced using traditionally masculine versus feminine names and pronouns. Other variables did, however, induce gendered language. ChatGPT is a promising tool for LOR drafting, but users must be aware of the potential biases introduced or propagated through these technologies.
Keywords: Otolaryngology–Head and Neck Surgery; artificial intelligence; gender bias.