Deep learning techniques have been successfully applied to automatically segment and quantify cell types in images acquired from both confocal and light sheet fluorescence microscopy. However, training deep networks requires a massive amount of manually labeled training data, which is very time-consuming to produce. In this paper, we demonstrate an adversarial adaptation method that transfers deep network knowledge for microscopy segmentation from one imaging modality (e.g., confocal) to a new imaging modality (e.g., light sheet) for which little or no labeled training data is available. Promising segmentation results show that the proposed transfer learning approach is an effective way to rapidly develop segmentation solutions for new imaging methods.
Keywords: Generative adversarial networks; Microscopy segmentation; Transfer learning.
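The abstract describes an adversarial adaptation scheme for transferring a segmentation network across imaging modalities. The sketch below illustrates one common form of this idea (ADDA-style feature-level adaptation): a domain discriminator learns to distinguish source (e.g., confocal) features from target (e.g., light sheet) features, while the target encoder is trained to fool it. All class names, network shapes, and hyperparameters are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of adversarial feature adaptation for cross-modality
# microscopy segmentation; not the paper's exact method or architecture.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Small convolutional encoder producing feature maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class DomainDiscriminator(nn.Module):
    """Predicts whether features come from the source or target modality."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, feats):
        return self.net(feats)


def adapt_step(src_enc, tgt_enc, disc, src_imgs, tgt_imgs,
               opt_disc, opt_tgt, bce=nn.BCEWithLogitsLoss()):
    """One adversarial update: the discriminator learns to separate source
    from target features; the target encoder learns to fool it."""
    # --- Discriminator update (source = 1, target = 0) ---
    with torch.no_grad():
        src_feats = src_enc(src_imgs)        # source encoder stays frozen
    tgt_feats = tgt_enc(tgt_imgs).detach()
    d_loss = bce(disc(src_feats), torch.ones(src_imgs.size(0), 1)) + \
             bce(disc(tgt_feats), torch.zeros(tgt_imgs.size(0), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # --- Target encoder update (inverted labels to fool the discriminator) ---
    g_loss = bce(disc(tgt_enc(tgt_imgs)), torch.ones(tgt_imgs.size(0), 1))
    opt_tgt.zero_grad(); g_loss.backward(); opt_tgt.step()
    return d_loss.item(), g_loss.item()
```

In this setup, a segmentation head trained on labeled source-modality features would be reused unchanged at test time: once the target encoder's features are aligned with the source feature distribution, the same head can segment target-modality images without target labels, which matches the transfer learning goal stated in the abstract.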