Assessment and Optimization of Explainable Machine Learning Models Applied to Transcriptomic Data

Genomics Proteomics Bioinformatics. 2022 Oct;20(5):899-911. doi: 10.1016/j.gpb.2022.07.003. Epub 2022 Aug 3.

Abstract

Explainable artificial intelligence aims to interpret how machine learning models make decisions, and many model explainers have been developed in the computer vision field. However, the applicability of these model explainers to biological data remains poorly understood. In this study, we comprehensively evaluated multiple explainers by interpreting pre-trained models that predict tissue types from transcriptomic data and by identifying, for each sample, the top contributing genes with the greatest impact on model prediction. To improve the reproducibility and interpretability of results generated by model explainers, we proposed a series of optimization strategies for each explainer on two model architectures: a multilayer perceptron (MLP) and a convolutional neural network (CNN). We observed three groups of explainer and model architecture combinations with high reproducibility. Group II, which contains three model explainers on aggregated MLP models, identified top contributing genes in different tissues that exhibited tissue-specific manifestation and were potential cancer biomarkers. In summary, our work provides novel insights and guidance for exploring biological mechanisms using explainable machine learning models.
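The per-sample attribution idea described above can be sketched with a minimal gradient-times-input example. This is a hypothetical illustration, not the paper's implementation (the study evaluates several published explainers on trained MLP and CNN models): for a linear layer, the gradient of a class logit with respect to gene j is simply that gene's weight, so the attribution for gene j is weight × expression, and the genes with the largest absolute attribution are reported as the sample's top contributing genes. All names and values below are invented for illustration.

```python
# Minimal sketch of per-sample gene attribution via gradient x input.
# For a linear logit, logit_c = sum_j W[c][j] * x[j] + b[c], the gradient
# of logit_c w.r.t. gene j is W[c][j], so the gradient-times-input
# attribution is W[c][j] * x[j]. Genes are ranked by |attribution|.

def top_contributing_genes(weights, expression, gene_names, k=3):
    """Rank genes by |w_j * x_j| for one class's weight row (hypothetical helper)."""
    scores = [(abs(w * x), name)
              for w, x, name in zip(weights, expression, gene_names)]
    scores.sort(reverse=True)
    return [name for _, name in scores[:k]]

# Toy example: five "genes" and a weight row for one tissue class.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"]
w_row = [0.9, -0.1, 0.4, 0.0, -0.7]   # hypothetical learned weights
x     = [2.0,  5.0, 1.0, 3.0,  1.5]   # hypothetical expression values
print(top_contributing_genes(w_row, x, genes, k=2))  # -> ['GENE_A', 'GENE_E']
```

For the deep models used in the paper, the same ranking is applied to attribution scores produced by the evaluated explainers rather than to raw weights; the linear case above just makes the gradient analytic and the ranking step transparent.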

Keywords: Gene expression; Machine learning; Marker gene; Model interpretability; Omics data mining.

MeSH terms

  • Artificial Intelligence*
  • Machine Learning
  • Neural Networks, Computer
  • Reproducibility of Results
  • Transcriptome*