Machine learning models in trusted research environments - understanding operational risks

Int J Popul Data Sci. 2023 Dec 14;8(1):2165. doi: 10.23889/ijpds.v8i1.2165. eCollection 2023.

Abstract

Introduction: Trusted research environments (TREs) provide secure access to very sensitive data for research. All TREs operate manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amounts of data; if these data are personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.

Objectives: As part of a series on ML disclosure risk in TREs, this article is intended to introduce TRE managers to the conceptual problems and work being done to address them.

Methods: We demonstrate how ML models present a qualitatively different type of disclosure risk, compared to traditional statistical outputs. These arise from both the nature and the scale of ML modelling.

Results: We show that a large number of issues remain unresolved, although progress is being made in many areas. We identify where uncertainty remains, as well as the remedial responses available to TREs.

Conclusions: At this stage, disclosure checking of ML models is very much a specialist activity. However, TRE managers need a basic awareness of the potential risks in ML models to enable them to make sensible decisions on using TREs for ML model development.

Keywords: artificial intelligence; confidentiality; data enclave; machine learning; output checking; trusted research environment.

MeSH terms

  • Disclosure*
  • Machine Learning*