Integrating Randomness in Large Language Models: A Linear Congruential Generator Approach for Generating Clinically Relevant Content

A Bouras - arXiv preprint arXiv:2407.03582, 2024 - arxiv.org
Generating diverse, high-quality outputs from language models is crucial for applications in education and content creation. Achieving true randomness and avoiding repetition remains a significant challenge. This study uses the Linear Congruential Generator (LCG) method for systematic fact selection, combined with AI-powered content generation. We ensured unique combinations of gastrointestinal physiology and pathology facts across multiple rounds, integrating these facts into prompts for GPT-4o to create clinically relevant, vignette-style outputs. Over 14 rounds, 98 unique outputs were generated, demonstrating the LCG's effectiveness in producing diverse and high-quality content. This method addresses key issues of randomness and repetition, enhancing the quality and efficiency of language model-generated content for various applications.
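
The abstract describes using an LCG to select facts so that no prompt combination repeats across rounds. The following Python sketch illustrates that general idea only; it is not the authors' implementation, and the LCG constants, fact pools, and prompt template are illustrative assumptions. Because an LCG is deterministic, the same seed reproduces the same sequence of fact pairs, which is what makes the selection systematic rather than ad hoc.

    # Minimal sketch (assumed, not the paper's code): an LCG picks
    # non-repeating (physiology, pathology) fact pairs for prompt building.

    # Classic LCG recurrence: x_{n+1} = (a * x_n + c) mod m
    A, C, M = 1103515245, 12345, 2**31  # glibc-style constants (assumed)

    def lcg(seed):
        """Yield an endless stream of pseudo-random integers."""
        x = seed
        while True:
            x = (A * x + C) % M
            yield x

    # Hypothetical fact pools standing in for the gastrointestinal fact lists.
    physiology_facts = [f"physiology fact {i}" for i in range(7)]
    pathology_facts = [f"pathology fact {i}" for i in range(7)]

    def unique_fact_pairs(seed, n_pairs):
        """Select n_pairs distinct (physiology, pathology) fact pairs via the LCG."""
        stream = lcg(seed)
        seen = set()
        pairs = []
        while len(pairs) < n_pairs:
            i = next(stream) % len(physiology_facts)
            j = next(stream) % len(pathology_facts)
            if (i, j) not in seen:  # skip repeats so every combination is unique
                seen.add((i, j))
                pairs.append((physiology_facts[i], pathology_facts[j]))
        return pairs

    # Each pair would then be embedded in a prompt for the language model,
    # e.g. asking GPT-4o for a vignette-style item built on both facts.
    for phys, path in unique_fact_pairs(seed=42, n_pairs=7):
        prompt = (f"Write a clinical vignette that integrates the following facts: "
                  f"{phys}; {path}.")
        # response = call_gpt4o(prompt)  # placeholder for the actual API call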