The destructive and life-threatening nature of flood events calls for fast and accurate methods to predict dynamic flood behaviour. Data-driven surrogate models have been developed to quickly predict flood inundation, but their accuracy depends on the flood information available for model training and validation. Flood observations are rarely available at high spatial and temporal resolution, so computationally expensive high-resolution hydrodynamic (high-fidelity) models are often used to generate training data by simulating selected flood events. Given finite resources, only a limited number of events can be simulated with a high-fidelity model, yet there is no established approach for selecting representative and informative flood events to ensure that the surrogate model is robustly trained. In this study, a novel systematic approach for selecting flood events for the training of surrogate flood inundation models is introduced. The approach generates a large set of candidate events using a computationally efficient low-resolution hydrodynamic (low-fidelity) model and then selects training events based on the simulated spatio-temporal inundation depths of the candidate events. The approach is used to train surrogate models to predict flood inundation in three distinct case studies with different boundary conditions and topographies. The resulting surrogate models perform robustly on new, unseen events (RMSE < 0.23 m), matching the accuracy achieved when all available candidate events are used for training. The proposed event selection approach therefore reduces the computational cost of generating training data by up to 97%, as far fewer high-fidelity model simulations are needed. Although this study focuses on surrogate models for the prediction of flood inundation dynamics, the new approach could readily be applied to the development of surrogate models in other fields.
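The abstract does not specify how training events are chosen from the low-fidelity candidate set. As one plausible illustration only, candidate events could be clustered on their spatio-temporal inundation-depth fields and the medoid of each cluster sent to the high-fidelity model. The sketch below assumes this clustering interpretation; the function and parameter names (select_training_events, depth_fields, n_train) are hypothetical, not the authors' method.

```python
# Illustrative sketch only: the paper does not detail its selection algorithm.
# Each candidate event is summarised by its low-fidelity spatio-temporal
# inundation-depth field (flattened to a feature vector); candidates are
# clustered, and the event closest to each cluster centre (the medoid) is
# selected for high-fidelity simulation. All names here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def select_training_events(depth_fields: np.ndarray, n_train: int,
                           seed: int = 0) -> np.ndarray:
    """Pick n_train representative events from candidate low-fidelity runs.

    depth_fields: array of shape (n_events, n_timesteps, n_cells) holding
                  simulated inundation depths [m] for each candidate event.
    Returns the indices of the selected events.
    """
    n_events = depth_fields.shape[0]
    features = depth_fields.reshape(n_events, -1)  # flatten space-time field

    km = KMeans(n_clusters=n_train, n_init=10, random_state=seed)
    labels = km.fit_predict(features)

    selected = []
    for k in range(n_train):
        members = np.flatnonzero(labels == k)
        # medoid: the member event nearest to the cluster centre
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k],
                               axis=1)
        selected.append(members[np.argmin(dists)])
    return np.asarray(selected)

# Example: 100 candidate events on a 24-step, 500-cell grid; keeping 3 events
# means ~97% fewer high-fidelity simulations than running all 100 candidates.
rng = np.random.default_rng(42)
candidates = rng.random((100, 24, 500))
print(select_training_events(candidates, n_train=3))
```

Under this reading, the design choice is that the low-fidelity fields are cheap enough to compute for every candidate, so representativeness can be judged in depth space before any expensive high-fidelity run is committed.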