Bowel sounds, which reflect gastrointestinal peristalsis, are essential for diagnosing and monitoring gastrointestinal conditions. However, the lack of an effective, non-invasive method for assessing digestion through auscultation has left clinicians dependent on time-consuming, laborious manual analysis. This study introduces a deep learning-based method that automates and improves bowel sound recognition. Our approach combines the Branchformer architecture, which pairs self-attention with convolutional gating for robust feature extraction, with a self-supervised pre-training strategy. Specifically, the Branchformer model processes self-attention and convolutional gating multi-layer perceptron (cgMLP) branches in parallel to capture both global and local dependencies in audio signals, enabling effective characterization of complex bowel sound patterns. The self-supervised pre-training stage learns general acoustic representations from a large corpus of unlabeled audio, after which the model is fine-tuned on a limited set of labeled bowel sound data to optimize recognition performance for the target task. Experiments on public bowel sound datasets show that the proposed method outperforms existing baseline models, particularly under data-limited conditions, confirming the effectiveness of the self-supervised pre-training strategy. This work provides an efficient, automated solution for clinical bowel sound monitoring, facilitating the early diagnosis and treatment of gastrointestinal disorders.
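To make the parallel-branch idea concrete, the following is a minimal, illustrative PyTorch sketch of a Branchformer-style encoder layer. It is not the authors' implementation: the model dimension, number of heads, cgMLP hidden size and kernel size, the simplified gating, and the averaging-based branch merge are all assumptions chosen for brevity.

```python
# Minimal sketch of a Branchformer-style layer (illustrative; hyperparameters
# and the merge strategy are assumptions, not the authors' configuration).
import torch
import torch.nn as nn


class ConvGatingMLP(nn.Module):
    """Simplified convolutional gating MLP branch: models local dependencies."""

    def __init__(self, d_model, d_hidden, kernel_size=31):
        super().__init__()
        self.proj_in = nn.Linear(d_model, d_hidden * 2)
        # Depthwise convolution gates one half of the hidden features.
        self.depthwise_conv = nn.Conv1d(
            d_hidden, d_hidden, kernel_size,
            padding=kernel_size // 2, groups=d_hidden,
        )
        self.proj_out = nn.Linear(d_hidden, d_model)

    def forward(self, x):                        # x: (batch, time, d_model)
        a, b = self.proj_in(x).chunk(2, dim=-1)
        b = self.depthwise_conv(b.transpose(1, 2)).transpose(1, 2)
        return self.proj_out(a * torch.sigmoid(b))   # gated combination


class BranchformerLayer(nn.Module):
    """Two parallel branches: global self-attention + local conv gating MLP."""

    def __init__(self, d_model=256, n_heads=4, d_hidden=1024):
        super().__init__()
        self.norm_attn = nn.LayerNorm(d_model)
        self.norm_cgmlp = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cgmlp = ConvGatingMLP(d_model, d_hidden)

    def forward(self, x):
        # Global branch: multi-head self-attention over the whole sequence.
        g = self.norm_attn(x)
        g, _ = self.attn(g, g, g, need_weights=False)
        # Local branch: convolutional gating MLP over neighboring frames.
        l = self.cgmlp(self.norm_cgmlp(x))
        # Merge branches with a residual connection (simple averaging here;
        # concatenation followed by a projection is another common choice).
        return x + 0.5 * (g + l)


# Toy usage: a batch of 2 spectrogram-like sequences, 100 frames, 256 features.
if __name__ == "__main__":
    layer = BranchformerLayer()
    out = layer(torch.randn(2, 100, 256))
    print(out.shape)  # torch.Size([2, 100, 256])
```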