Light microscopy is a practical tool for advancing biomedical research and diagnostics, offering invaluable insights into the cellular and subcellular structures of living organisms. However, diffraction and optical imperfections inherently limit image quality. In recent years, there has been growing interest in applying deep learning techniques to overcome these limitations in light microscopy imaging; nonetheless, the resulting reconstructions often suffer from undesirable artefacts and hallucinations. Here, we introduce a deep learning-based approach that incorporates the fundamental physics of light propagation in microscopy into the loss function, employing a conditioned diffusion model within a physics-informed architecture. To mitigate the issue of limited available data, we train on synthetic datasets. Our results demonstrate consistent enhancements in image quality and substantial reductions in artefacts compared with state-of-the-art methods. The presented technique is intuitive and accessible, and yields higher-quality microscopy images for biomedical studies.
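To make the idea of a physics-informed loss concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation. It assumes a simplified forward optical model in which the observed image is the clean image convolved with the microscope's point-spread function (PSF), a standard DDPM-style noising process, and an illustrative weighting parameter `lam`; the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F


def physics_informed_diffusion_loss(denoiser, x0, y_obs, psf, t, alpha_bars, lam=0.1):
    """Sketch: conditioned diffusion loss plus a physics-consistency term based on
    a simplified optical forward model (clean image convolved with the PSF).

    Assumed shapes: x0 and y_obs are (B, 1, H, W); psf is (1, 1, k, k) with k odd;
    t is a (B,) tensor of integer timesteps; alpha_bars holds the cumulative
    noise-schedule products.
    """
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)

    # Forward diffusion: noise the clean image to timestep t.
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # Conditioned denoiser predicts the noise given the diffraction-limited observation.
    eps_pred = denoiser(x_t, t, y_obs)
    diffusion_term = F.mse_loss(eps_pred, noise)

    # Estimate of the clean image implied by the predicted noise.
    x0_pred = (x_t - (1.0 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()

    # Physics term: re-blurring the estimate with the PSF should reproduce the observation.
    reblurred = F.conv2d(x0_pred, psf, padding=psf.shape[-1] // 2)
    physics_term = F.mse_loss(reblurred, y_obs)

    return diffusion_term + lam * physics_term
```

The physics term penalises reconstructions that are inconsistent with how light propagates through the optical system, which is one way such a constraint can suppress hallucinated detail; the exact form of the forward model and its weighting here are assumptions for illustration only.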