A dataset for evaluating clinical research claims in large language models

Sci Data. 2025 Jan 16;12(1):86. doi: 10.1038/s41597-025-04417-x.

Abstract

Large language models (LLMs) have the potential to enhance the verification of health claims. However, issues with hallucination and with the comprehension of logical statements mean that these models must be closely scrutinized in healthcare applications. We introduce CliniFact, a scientific claim dataset created from hypothesis-testing results in clinical research, covering 992 unique interventions across 22 disease categories. The dataset uses study arms and interventions, primary outcome measures, and results from clinical trials to derive and label clinical research claims. These claims are then linked to supporting information describing clinical trial results in scientific publications. CliniFact contains 1,970 instances from 992 unique clinical trials linked to 1,540 unique publications. When evaluated against CliniFact, discriminative models such as BioBERT (80.2% accuracy) outperformed generative counterparts such as Llama3-70B (53.6% accuracy; p < 0.001). Our results demonstrate the potential of CliniFact as a benchmark for evaluating LLM performance in clinical research claim verification.
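
The reported significance of the gap between the two accuracies can be illustrated with a simple calculation. The sketch below assumes a two-proportion z-test over the 1,970 benchmark instances (the abstract does not state which statistical test the authors used); the figures are taken directly from the abstract, and the code is illustrative rather than the authors' evaluation pipeline.

    # Hypothetical sketch: compare the two reported accuracies on CliniFact
    # with a two-proportion z-test. Not the authors' code; the choice of test
    # is an assumption, and the counts follow the abstract's figures.
    from math import sqrt
    from scipy.stats import norm

    n = 1970                 # benchmark instances in CliniFact
    acc_biobert = 0.802      # discriminative model (BioBERT) accuracy
    acc_llama3 = 0.536       # generative model (Llama3-70B) accuracy

    # Pooled proportion under the null hypothesis of equal accuracy;
    # with the same n for both models this is just the mean accuracy.
    p_pool = (acc_biobert + acc_llama3) / 2
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (acc_biobert - acc_llama3) / se
    p_value = 2 * norm.sf(abs(z))   # two-sided p-value

    print(f"z = {z:.1f}, p = {p_value:.1e}")   # p falls far below 0.001

If the two models' predictions on the same instances were available, a paired test such as McNemar's test would be the more standard choice; the unpaired z-test above is only a rough check that an accuracy gap of this size on roughly 2,000 instances is highly significant.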

Publication types

  • Dataset

MeSH terms

  • Biomedical Research*
  • Clinical Trials as Topic
  • Humans
  • Language