Objective: This study aimed to assess the reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria.
Data sources: A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms.
Review methods: Two reviewers independently assessed each included study for adherence to the 65-point TRIPOD-AI criteria, classifying each item as "Yes," "No," or "NA" (not applicable). The proportion of studies satisfying each TRIPOD-AI criterion was then calculated. The level of evidence for each study was also rated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached.
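The per-criterion calculation described above can be sketched in a few lines of code. The function below is a hypothetical illustration (the study does not publish its analysis code): it computes the proportion of studies rated "Yes" for one checklist item, excluding "NA" ratings from the denominator, which is one common convention for handling not-applicable items.

```python
from collections import Counter

def criterion_adherence(ratings):
    """Proportion of studies rated 'Yes' for one TRIPOD-AI item.

    'NA' (not applicable) ratings are excluded from the denominator;
    returns None when no study was rated applicable for the item.
    """
    counts = Counter(ratings)
    applicable = counts["Yes"] + counts["No"]
    return counts["Yes"] / applicable if applicable else None

# Hypothetical consensus ratings for one checklist item across six studies
ratings = ["Yes", "No", "Yes", "NA", "Yes", "No"]
print(f"Adherence: {criterion_adherence(ratings):.0%}")  # Adherence: 60%
```

Whether "NA" items are dropped from the denominator or counted as compliant is a judgment call that affects the reported percentages, which is one reason explicit reporting of the scoring convention matters.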
Results: Reporting of ML algorithms in head and neck oncology requires improvement, specifically more comprehensive descriptions of datasets, standardized reporting of model performance, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary to standardize ML research reporting in head and neck oncology.
Conclusion: Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and build patient and clinician trust, ML developers should provide open access to models, code, and source data. Open access enables iterative progress through community critique, improving model accuracy and mitigating bias.
Level of evidence: NA. Laryngoscope, 2024.
Keywords: TRIPOD-AI; artificial intelligence; head and neck oncology; machine learning.
© 2024 The American Laryngological, Rhinological and Otological Society, Inc.