Simulated misuse of large language models and clinical credit systems

NPJ Digit Med. 2024 Nov 11;7(1):317. doi: 10.1038/s41746-024-01306-2.

Abstract

In the future, large language models (LLMs) may enhance the delivery of healthcare, but they also carry risks of misuse. These models could be trained to allocate resources according to unjust criteria drawn from multimodal data, including financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective or systemic benefit over the protection of individual rights, and could thereby facilitate AI-driven social credit systems.

Publication types

  • Review