In intelligent transportation systems, accurate vehicle target recognition within road scenarios is crucial for achieving intelligent traffic management. To address the challenges posed by complex environments and severe vehicle occlusion in such scenarios, this paper proposes a novel vehicle-detection method, YOLO-BOS. First, to bolster the feature-extraction capabilities of the backbone network, we propose a novel Bi-level Routing Spatial Attention (BRSA) mechanism, which selectively filters features based on task requirements and adjusts the importance of spatial locations to more accurately enhance relevant features. Second, we incorporate Omni-directional Dynamic Convolution (ODConv) into the head network, which can simultaneously learn complementary attention across the four dimensions of the kernel space, thereby facilitating the capture of multifaceted features from the input data. Lastly, we introduce Shape-IOU, a new loss function that significantly enhances the accuracy and robustness of detection results for vehicles of varying sizes. Experimental evaluations conducted on the UA-DETRAC dataset demonstrate that our model achieves improvements of 4.7 and 4.4 percentage points in mAP@0.5 and mAP@0.5:0.95, respectively, compared to the baseline model. Furthermore, comparative experiments on the SODA10M dataset corroborate the superiority of our method in terms of precision and accuracy.
Keywords: BRSA; ODConv; Shape-IOU; YOLO-BOS; vehicle detection.