| Summary: | Drones are widely used in precision agriculture to capture high-resolution images of crops, giving farmers detailed insights into crop health, growth patterns, nutrient deficiencies, and pest infestations. Although several machine learning and deep learning models have been proposed for plant stress and disease detection, their accuracy and computational time still require improvement, particularly when training data are limited. This paper addresses these challenges through a comparative analysis of three state-of-the-art deep learning object detection models, YOLOv8, RetinaNet, and Faster R-CNN, together with their variants, to identify the best-performing model. The models are evaluated on a real-world dataset from potato farms containing images of healthy and stressed plants, where stress results from both biotic and abiotic factors. Evaluation is performed under limited-data conditions using the original dataset of 360 images and under expanded conditions using an augmented dataset of 1,560 images. The results show that the YOLOv8 variants outperform the other models, achieving higher mAP@50 values and lower inference times on both the original and augmented datasets. The YOLOv8 variants achieve mAP@50 values ranging from 0.798 to 0.861 with inference times of 11.8 ms to 134.3 ms, the RetinaNet variants achieve mAP@50 values ranging from 0.587 to 0.628 with inference times of 118.7 ms to 158.8 ms, and the Faster R-CNN variants achieve mAP@50 values ranging from 0.587 to 0.628 with inference times of 265 ms to 288 ms. These findings highlight YOLOv8's robustness, speed, and suitability for real-time aerial crop monitoring, particularly in data-constrained environments.
|
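To make the evaluation protocol concrete, the following is a minimal sketch of how one might fine-tune and validate a YOLOv8 variant with the Ultralytics API and read out the two metrics used in the comparison, mAP@50 and per-image inference time. The dataset YAML path `potato_stress.yaml`, the model size, and the training settings are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code): fine-tune and evaluate a YOLOv8 model
# on a custom potato-crop dataset, reporting mAP@50 and per-image inference time.
# "potato_stress.yaml" and "yolov8n.pt" are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumed variant; the paper compares several YOLOv8 sizes

# Fine-tune on the (original or augmented) dataset before evaluation.
model.train(data="potato_stress.yaml", epochs=100, imgsz=640)

# Validate and read out the metrics reported in the study.
metrics = model.val(data="potato_stress.yaml")
print(f"mAP@50: {metrics.box.map50:.3f}")
print(f"Inference time per image: {metrics.speed['inference']:.1f} ms")
```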