Supporting AI-Explainability by Analyzing Feature Subsets in a Machine Learning Model

Stud Health Technol Inform. 2022 May 25:294:109-113. doi: 10.3233/SHTI220406.

Abstract

Machine learning algorithms are becoming increasingly prevalent in medicine, as they can recognize patterns in complex medical data. Especially in this sensitive area, the active use of largely black-box models is controversial. We aim to show how an aggregated, systematic feature analysis of such models can be beneficial in the medical context. To this end, we introduce a grouped version of permutation importance analysis for evaluating the influence of entire feature subsets in a machine learning model. In this way, the contribution of expert-defined feature subgroups to the decision-making process can be assessed. Based on these results, new hypotheses can be formulated and examined.
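The abstract only names the method, not its exact procedure, so the following is a minimal sketch of the general idea of grouped permutation importance in Python: all columns of an expert-defined feature subset are permuted jointly and the resulting drop in model score is recorded. The scikit-learn breast-cancer data, the group definitions, and the grouped_permutation_importance helper are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def grouped_permutation_importance(model, X, y, groups, n_repeats=10, random_state=0):
    """Permute all columns of each feature subset jointly and measure the score drop."""
    rng = np.random.default_rng(random_state)
    baseline = model.score(X, y)
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # One row permutation applied to the whole subset, so the
            # within-group structure is preserved while the link to y is broken.
            perm = rng.permutation(len(X))
            X_perm[:, cols] = X[perm][:, cols]
            drops.append(baseline - model.score(X_perm, y))
        importances[name] = (np.mean(drops), np.std(drops))
    return importances


# Hypothetical example: group the 30 breast-cancer features by measurement type.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

groups = {  # expert-defined subsets (column index ranges of the sklearn dataset)
    "mean values": list(range(0, 10)),
    "standard errors": list(range(10, 20)),
    "worst values": list(range(20, 30)),
}

for name, (mean_drop, std_drop) in grouped_permutation_importance(model, X_test, y_test, groups).items():
    print(f"{name}: {mean_drop:.3f} +/- {std_drop:.3f}")

Permuting a subset with a single shared row permutation keeps the correlations within the group intact, so the measured score drop reflects the influence of the entire expert-defined subset rather than of its features in isolation.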

Keywords: explainable AI; grouped variable analysis; permutation importance.

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Machine Learning*