Strategies such as ensemble learning and averaging techniques aim to reduce the variance of single deep neural networks. This study focuses on ensemble averaging techniques that fuse the results of differently initialized and trained networks. Using micrograph cell segmentation as an application example, various ensembles were formed during network training using the following methods: (a) random seeds, (b) L1-norm pruning, (c) variable numbers of training examples, and (d) a combination of (b) and (c). Furthermore, several averaging methods in common use were evaluated in this study: the mean, the median, the location parameter of an alpha-stable distribution fit to the histograms of class membership probabilities (CMPs), and a majority vote of the ensemble members. The performance of these methods is demonstrated and evaluated on a micrograph cell segmentation use case, employing a state-of-the-art deep convolutional neural network (DCNN) based on the common VGG architecture. The study demonstrates that, for this data set, the choice of ensemble averaging method has only a marginal influence on the evaluation metrics (accuracy and Dice coefficient) used to measure segmentation performance. Nevertheless, for practical applications, a simple and fast estimate of the mean of the distribution is highly competitive with the more sophisticated representation of the CMP distributions by an alpha-stable distribution, and therefore appears to be the most suitable ensemble averaging method for this application.
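To illustrate how such CMP fusion can be carried out, the following is a minimal NumPy sketch (not the authors' implementation) of the mean, median, and majority-vote combination rules applied to an ensemble's per-pixel class membership probabilities; the function name combine_ensemble and the array layout are assumptions made for this example. The alpha-stable variant would additionally fit a stable distribution (e.g., scipy.stats.levy_stable) to the per-pixel CMP samples and keep its location parameter, which is noted in a comment but omitted for brevity.

```python
import numpy as np

def combine_ensemble(cmps: np.ndarray, method: str = "mean") -> np.ndarray:
    """Fuse per-member class membership probabilities (CMPs) into one label map.

    cmps: array of shape (n_members, n_classes, H, W) holding each ensemble
    member's softmax output for a single image.
    Returns the fused segmentation as an (H, W) array of class indices.
    """
    if method == "mean":
        fused = cmps.mean(axis=0)        # per-pixel, per-class mean CMP
    elif method == "median":
        fused = np.median(cmps, axis=0)  # per-pixel, per-class median CMP
    elif method == "majority":
        votes = cmps.argmax(axis=1)      # each member's label map, (n_members, H, W)
        n_classes = cmps.shape[1]
        counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
        return counts.argmax(axis=0)     # label with the most votes per pixel
    else:
        raise ValueError(f"Unknown combination method: {method}")
    # The alpha-stable variant would instead fit, e.g., scipy.stats.levy_stable
    # to each pixel's CMP samples and keep the location parameter; omitted here
    # because the per-pixel fit is computationally expensive.
    return fused.argmax(axis=0)          # final segmentation labels
```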
Keywords: Alpha-stable function; Cell segmentation; Combine ensembles; Deep neural networks.