A deep learning system for single and overall weight estimation of melons using unmanned aerial vehicle images

Aharon Kalantar - Department of Industrial Engineering & Management, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel; Institute of Agricultural Engineering, Agricultural Research Organization, Volcani Center, Rishon-LeZion, Israel

Yael Edan - Department of Industrial Engineering & Management, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel

Amit Gur - Newe Ya'ar, Agricultural Research Organization, Volcani Center, Ramat Ishay, Israel

Iftach Klapp - Institute of Agricultural Engineering, Agricultural Research Organization, Volcani Center, Rishon-LeZion, Israel

 

Generation of yield maps enables agronomic decisions related to resource management and marketing, leading to improved production and breeding processes. Estimating melon yield before harvest at single-melon resolution is a labor-intensive task, requiring a detailed account of accumulated yield and overall yield distribution, as well as detailed measurements of melon size and location. This study presents an algorithmic pipeline for detecting melons and estimating their yield from top-view color images acquired by a digital camera mounted on an unmanned aerial vehicle. The yield estimate provides both the number of melons and the weight of each melon. The system comprises three main stages: melon detection, geometric feature extraction, and individual melon weight estimation. Melon detection was based on the RetinaNet deep convolutional neural network, with transfer learning used during training to successfully detect small objects in high-resolution images. The detection process achieved an average precision of 0.92 and an F1 score above 0.9 across a variety of agricultural environments. For each detected melon, geometric features were extracted using the Chan–Vese active contour algorithm and a principal component analysis (PCA) ellipse-fitting method. A regression model relating the ellipse features to melon weight is presented; its adjusted coefficient of determination (R²Adj) was 0.94. The mean absolute percentage error for single-melon weight estimation was 16%, and the analysis showed that it could be reduced to 12% with more accurate geometric feature extraction. Overall yield, obtained by summing the estimated weights of all melons in the field, underestimated the actual total yield by only 3%.
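
As an illustration of the feature-extraction and evaluation stages described above, the following Python sketch fits an ellipse to a single melon's binary segmentation mask via PCA and derives the geometric features (axis lengths and projected area) that a weight regression model could use. It is a minimal sketch under assumed inputs (a per-melon binary mask such as the Chan–Vese output and a known ground sampling distance), not the authors' implementation; the mape helper simply reproduces the error metric reported above.

# Minimal, illustrative sketch (not the paper's code): PCA ellipse fitting on a
# per-melon binary mask, plus the MAPE metric used to evaluate weight estimates.
import numpy as np

def ellipse_features(mask: np.ndarray, gsd_cm: float) -> dict:
    """Fit an ellipse to the foreground pixels of a single-melon mask via PCA.

    mask   : 2-D array, nonzero where the melon is (e.g., a Chan-Vese segmentation).
    gsd_cm : ground sampling distance in cm per pixel (assumed known from the flight).
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                    # center the blob at the origin
    cov = np.cov(pts, rowvar=False)            # 2x2 covariance of pixel coordinates
    eigvals, _ = np.linalg.eigh(cov)           # principal-axis variances, ascending
    # For a solid ellipse, the variance along a principal axis is (semi-axis)^2 / 4,
    # so each semi-axis is 2*sqrt(eigenvalue); the GSD converts pixels to cm.
    semi_minor, semi_major = 2.0 * np.sqrt(eigvals) * gsd_cm
    return {
        "major_axis_cm": 2.0 * semi_major,
        "minor_axis_cm": 2.0 * semi_minor,
        "area_cm2": np.pi * semi_major * semi_minor,   # projected ellipse area
    }

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error of estimated melon weights."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

In such a setup, the extracted features (e.g., major and minor axis lengths and projected area) would serve as the regressors of a model fitted to ground-truth melon weights; the specific feature set and regression form used in the study are described in the full paper.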
