Oil spills are a significant source of marine pollution and cause severe harm to marine ecosystems, so accurate marine oil spill detection (MOSD) from synthetic aperture radar (SAR) images is crucial for protecting the environment. However, oil spill targets in SAR images are small and resemble other objects known as "look-alikes", and traditional semantic segmentation networks for MOSD may lose critical information during downsampling. Hence, we propose a shape-aware and adaptive strip self-attention guided progressive network (SAGPNet) for MOSD. First, we adopt a progressive strategy to reduce the loss of detailed information. Second, we improve the traditional U-Net by redesigning its encoder unit: specifically, we propose a shape-aware and multi-scale feature extraction module and an adaptive strip self-attention module (ASSAM). These modifications allow the model to extract shape, multi-scale, and global information during encoding, addressing the challenges posed by small targets and look-alikes. Third, we use the ASSAM to extract global features from the final encoding layer of the earlier stage of the progressive network to guide the encoding features of the subsequent stage; this helps the model recognize the overall shape of the oil spill, preserve crucial contextual information, and further mitigate the information loss caused by downsampling. Finally, we design a joint loss to address the pixel imbalance between oil spills and other targets. We evaluate SAGPNet on three public oil spill detection datasets. The experimental results show superior performance compared with other state-of-the-art methods, highlighting the effectiveness of SAGPNet in addressing the challenges of MOSD.
Keywords: Oil spill detection; Progressive network; Semantic segmentation; Small targets.
Copyright © 2024 Elsevier Ltd. All rights reserved.
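To make the two most concrete components of the abstract tangible, the following is a minimal PyTorch sketch, not the paper's actual implementation: it assumes ASSAM applies self-attention along horizontal and vertical strips and fuses them with a learnable weight, and that the joint loss combines cross-entropy with a Dice term to counter the pixel imbalance between oil-spill and background pixels. The names StripSelfAttention and joint_loss, and all internal details, are illustrative assumptions.

```python
# Hypothetical sketch only; the paper's ASSAM and joint loss may differ in detail.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripSelfAttention(nn.Module):
    """Self-attention restricted to horizontal and vertical strips,
    fused with a learnable (adaptive) weight and a residual connection."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # channels must be divisible by heads for multi-head attention.
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # adaptive fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Horizontal strips: attend along the width within each row.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        rows = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Vertical strips: attend along the height within each column.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        cols = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)
        a = torch.sigmoid(self.alpha)
        return x + a * rows + (1.0 - a) * cols


def joint_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Cross-entropy plus Dice loss; the Dice term counteracts the pixel
    imbalance between small oil-spill regions and the background."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return ce + dice
```

The scalar fusion weight is only one way to realize "adaptive" fusion; it could equally be predicted per sample by a small gating branch on the input features, which may be closer to the paper's intent.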