Single underwater images often suffer from a limited field of view and degraded visual perception due to scattering and absorption. Numerous image stitching techniques have been proposed to provide a wider viewing range, but the resulting stitched images may exhibit unsightly irregular boundaries. Unlike natural landscapes, underwater scenes lack reliable high-fidelity references, which complicates the reproducibility of deep learning-based methods and leads to unpredictable distortions in cross-domain applications. To address these challenges, we propose an Underwater Wide-field Image Rectangling and Enhancement (UWIRE) framework comprising two procedures, i.e., the R-procedure and the E-procedure, both of which operate in self-coordinated modes and require only a single underwater stitched image as input. The R-procedure rectangles the irregular boundaries of stitched images through initial shape resizing and mesh-based image-preservation warping. Instead of local linear constraints, we use complementary boundary-structure-content optimization to ensure a natural appearance with minimal distortion. The E-procedure enhances the rectangled image through parameter-adaptive correction, balancing the information distribution across channels. We further propose an attentive weight-guided fusion method that balances color restoration, contrast enhancement, and texture sharpening in a complementary manner. Comprehensive experiments demonstrate the superior performance of our UWIRE framework over state-of-the-art image rectangling and enhancement methods, in both quantitative and qualitative evaluations.
Keywords: Complementary mechanism; Content-aware rectangling; Stitched image reconstruction; Underwater image enhancement.
Copyright © 2024 Elsevier Ltd. All rights reserved.