Autonomous navigation for agricultural robots poses significant challenges, largely because crop row images vary substantially with weather conditions and crop growth stages. The detection algorithm must also have a low processing time to be suitable for real-time applications. To address these requirements, we propose a crop row detection algorithm with the following steps: first, a projective transformation is applied to transform the camera view, and color-based segmentation is used to separate crop and weed from the background; second, a clustering algorithm differentiates crop pixels from weed pixels; finally, a robust line-fitting approach detects the crop rows. The proposed algorithm is evaluated across a diverse range of scenarios, and its performance is compared against four existing solutions. It achieves an overall intersection over union (IoU) of 0.73 and remains robust in challenging scenarios with heavy weed growth. Experiments on real-time video featuring challenging scenarios show that the proposed algorithm attains a detection accuracy of over 90% and is a viable option for real-time implementation. With its high accuracy and low inference time, the proposed method offers a practical solution for autonomous navigation of agricultural robots in a crop field without damaging the crop, and can serve as a foundation for future research.
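The sketch below illustrates the kind of pipeline summarized above: a bird's-eye projective transform, color-based vegetation segmentation, clustering of vegetation pixels, and robust per-cluster line fitting. It is a minimal illustration, not the authors' implementation; the excess-green index, K-means clustering, perspective corner points, threshold, and assumed number of visible rows are all placeholder choices for demonstration.

```python
# Minimal sketch of a crop-row detection pipeline (assumptions: ExG segmentation,
# K-means clustering, fitLine for line fitting; parameters are illustrative).
import cv2
import numpy as np


def detect_crop_rows(frame_bgr, n_rows=3):
    h, w = frame_bgr.shape[:2]

    # 1) Projective (bird's-eye) transform of the camera view.
    #    The source quadrilateral is a hypothetical calibration.
    src = np.float32([[w * 0.2, h * 0.4], [w * 0.8, h * 0.4], [w, h], [0, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    top_view = cv2.warpPerspective(frame_bgr, H, (w, h))

    # 2) Color-based segmentation: excess-green index separates vegetation
    #    (crop + weed) from the soil background.
    b, g, r = cv2.split(top_view.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b
    veg_mask = (exg > 0.1).astype(np.uint8)  # illustrative threshold

    # 3) Cluster vegetation pixels by horizontal position so that dense,
    #    elongated clusters (rows) can be told apart from scattered weeds.
    ys, xs = np.nonzero(veg_mask)
    if len(xs) < n_rows:
        return []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(xs.astype(np.float32).reshape(-1, 1), n_rows,
                              None, criteria, 3, cv2.KMEANS_PP_CENTERS)

    # 4) Robust line fitting per cluster (L1 distance reduces outlier impact).
    lines = []
    for k in range(n_rows):
        pts = np.column_stack((xs[labels.ravel() == k],
                               ys[labels.ravel() == k])).astype(np.float32)
        if len(pts) < 2:
            continue
        vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L1, 0, 0.01, 0.01).ravel()
        lines.append((vx, vy, x0, y0))  # direction vector and point on the line
    return lines
```

In a real deployment the perspective transform would come from a camera calibration, the segmentation threshold would be tuned per field, and the number of rows would be estimated rather than fixed.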
Keywords: agricultural robot; crop row detection; precision farming; real-time application; unsupervised learning.
Copyright © 2024 Khan, Rahi, Rajendran, Al Hasan and Anwar.