Importance: Artificial intelligence (AI) tools are being rapidly integrated into cancer care. Understanding stakeholder views on the ethical issues associated with the implementation of AI in oncology is critical to its optimal deployment.
Objective: To evaluate oncologists' views on the ethical domains of AI use in clinical care, including familiarity, predictions, explainability (the ability to explain how a result was determined), bias, deference, and responsibilities.
Design, setting, and participants: This cross-sectional, population-based survey study was conducted from November 15, 2022, to July 31, 2023, among 204 US-based oncologists identified using the National Plan & Provider Enumeration System.
Main outcomes and measures: The primary outcome was agreement or disagreement with the statement that patients need to provide informed consent for AI model use during cancer treatment decisions.
Results: Of 387 surveys, 204 were completed (response rate, 52.7%). Participants represented 37 states; 120 (63.7%) identified as male, 128 (62.7%) as non-Hispanic White, and 60 (29.4%) were from academic practices. A total of 95 participants (46.6%) had received some education on AI use in health care, and 45.3% (92 of 203) reported familiarity with clinical decision models. Most participants (84.8% [173 of 204]) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic; 23.0% (47 of 204) stated they also needed to be explainable by patients. Patient consent for AI model use during treatment decisions was supported by 81.4% of participants (166 of 204). When presented with a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8% [75 of 204]); respondents from academic settings were more likely than those from other settings to let the patient decide (odds ratio, 2.56; 95% CI, 1.19-5.51). Most respondents (90.7% [185 of 204]) reported that AI developers were responsible for the medicolegal problems associated with AI use; nearly half agreed that this responsibility was shared by physicians (47.1% [96 of 204]) or hospitals (43.1% [88 of 204]). Finally, most respondents (76.5% [156 of 204]) agreed that oncologists should protect patients from biased AI tools, but only 27.9% (57 of 204) were confident in their ability to identify poorly representative AI models.
Conclusions and relevance: In this cross-sectional survey study, few oncologists reported that patients needed to understand AI models, but most agreed that patients should consent to their use, and many tasked patients with choosing between physician-recommended and AI-recommended treatment regimens. These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions, as well as of decisional responsibility when problems related to AI use arise.