Automating the thinning of Medjool date bunches is essential for reducing production labor and improving fruit quality. Thinning automation requires motion planning based on feature extraction from a segmented fruit bunch and its components. Previous research with focused bunch images attained high success in segmenting bunch components but less success in correctly associating the two components (a rachis and spikelets) that form a single bunch. The current study presents an algorithm for improved component segmentation and association in the presence of occlusions, integrating deep neural networks, traditional methods built on bunch geometry, and active vision. Following segmentation with Mask R-CNN, the segmented component images are converted to binary images with a Savitzky–Golay filter and an adapted Otsu threshold. Bunch orientation is then calculated from lines found in the binary image with the Hough transform, and this orientation is used to associate a rachis with its spikelets. If no suitable rachis is found, the bunch orientation is used to select a better viewpoint. The method was tested on two databases of bunches in an orchard, one with focused images and one with non-focused images. In all images, the spikelets were correctly identified [intersection over union (IoU) 0.5: F1 0.9]. The average orientation errors were 18.15° (SD 12.77°) and 16.44° (SD 11.07°) for the focused and non-focused databases, respectively. For correct rachis selection, precision was very high when orientation was incorporated, and when active vision was additionally incorporated, recall (and therefore F1) was also high (IoU 0.5, orientation only: precision 0.94, recall 0.44, F1 0.60; with active vision: precision 0.96, recall 0.61, F1 0.74). The developed method yields highly accurate identification of fruit bunches and their spikelets and rachis, making it suitable for integration into a thinning automation system.
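The orientation step described above lends itself to a concrete illustration. Below is a minimal Python sketch of one plausible implementation, assuming OpenCV and SciPy: the Savitzky–Golay smoothing is applied row-wise, a standard Otsu threshold stands in for the study's adapted variant, and the filter window, Hough parameters, and use of the median segment angle are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import cv2
from scipy.signal import savgol_filter

def bunch_orientation(component_gray: np.ndarray):
    """Estimate bunch orientation (degrees) from a segmented-component
    grayscale image. Hypothetical sketch: smooth -> Otsu binarization ->
    probabilistic Hough lines -> median line angle."""
    # Row-wise Savitzky-Golay smoothing; window and order are illustrative,
    # not the paper's adapted parameters.
    smoothed = savgol_filter(component_gray.astype(float),
                             window_length=11, polyorder=3, axis=1)
    smoothed = np.clip(smoothed, 0, 255).astype(np.uint8)

    # Standard Otsu threshold as a stand-in for the adapted variant.
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Detect line segments with the probabilistic Hough transform.
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=10)
    if lines is None:
        # No usable lines: in the described pipeline, this is where a
        # better viewpoint would be requested (active vision).
        return None

    # Summarize orientation as the median angle of the detected segments.
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    return float(np.median(angles))
```

The returned angle could then drive the rachis-spikelet association check, with a `None` result triggering viewpoint re-selection, mirroring the decision logic the abstract describes.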