Coronavirus disease (COVID-19) has had a profound impact on all of humanity, and people's disregard for COVID-19 regulations has accelerated the disease's spread. In this study, we apply the state-of-the-art YOLOv4 (You Only Look Once, version 4) object detection model to real-time 25 fps, 1920 × 1080 video streamed live from a camera-equipped Unmanned Aerial Vehicle (UAV) quadcopter to monitor compliance with social distancing within a 35 m range. The model identifies and quantifies instances of social distancing with 82% accuracy and low latency, processing real-time streams at 25-30 ms per frame. Our model is built on the CSPDarknet-53 backbone, trained on the MS COCO dataset, with additional layers that capture feature maps from different stages. The model's neck consists of PANet, which aggregates features from various CSPDarknet-53 layers. CSPDarknet-53's 53 convolutional layers are followed by 53 further layers in the model head, giving a total of 106 fully convolutional layers in the design. This architecture is integrated with the YOLOv3 detection head, resulting in the YOLOv4 model used by our detector. The model distinguishes humans from other objects in each frame, and the resulting detections were used to evaluate drone footage and count social-distancing violations in real time. Our findings show that the model reliably detected social-distancing violations in real time with an average accuracy of 82%.
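The following is a minimal sketch of the kind of pipeline described above: running a YOLOv4 detector on a live video stream, keeping only "person" detections, and counting pairs of people closer than a distance threshold. The file names (yolov4.cfg, yolov4.weights), the pixel-distance threshold, and the use of OpenCV's DNN module are assumptions for illustration, not the paper's exact implementation or calibration.

```python
import itertools
import cv2
import numpy as np

# Hypothetical config/weight file names from the public Darknet YOLOv4 release.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

# Assumed pixel threshold approximating ~2 m at the drone's altitude; the paper
# would calibrate this from camera height and field of view.
MIN_PIXEL_DISTANCE = 120

def count_violations(frame):
    """Detect people in a frame and count pairs closer than the threshold."""
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    # Keep only the COCO "person" class (id 0) and take bounding-box centres.
    centres = [
        (x + w / 2.0, y + h / 2.0)
        for cid, (x, y, w, h) in zip(np.array(class_ids).flatten(), boxes)
        if cid == 0
    ]
    violations = 0
    for (x1, y1), (x2, y2) in itertools.combinations(centres, 2):
        if np.hypot(x1 - x2, y1 - y2) < MIN_PIXEL_DISTANCE:
            violations += 1
    return violations

# Example: process a live stream frame by frame (0 = local webcam; a drone
# feed would instead use its RTSP/RTMP stream URL).
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    print("violations in frame:", count_violations(frame))
cap.release()
```

Measuring distances between bounding-box centres in image space is the simplest approach; a ground-plane homography would give more accurate metric distances from the overhead UAV view.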