Author: |
Divyanth LG; Department of Biological Systems Engineering, Washington State University, Pullman, WA 99164, USA.; Department of Agricultural and Food Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India., Marzougui A; Department of Biological Systems Engineering, Washington State University, Pullman, WA 99164, USA., González-Bernal MJ; The Institute for Sustainable Agriculture, Spanish National Research Council, 14001 Cordova, Spain., McGee RJ; Grain Legume Genetics and Physiology Research Unit, US Department of Agriculture-Agricultural Research Service (USDA-ARS), Pullman, WA 99164, USA., Rubiales D; The Institute for Sustainable Agriculture, Spanish National Research Council, 14001 Cordova, Spain., Sankaran S; Department of Biological Systems Engineering, Washington State University, Pullman, WA 99164, USA. |
Abstract: |
Aphanomyces root rot (ARR) is a devastating disease of pea. Plants are prone to infection at any growth stage, and no chemical or cultural controls exist, so the development of resistant pea cultivars is important. Phenomics technologies that support the selection of resistant cultivars through phenotyping can be valuable. One such approach couples imaging technologies with deep learning algorithms, which are considered efficient for assessing disease resistance across large numbers of plant genotypes. In this study, resistance to ARR was evaluated through a convolutional neural network (CNN)-based assessment of pea root images. The proposed model, DeepARRNet, was designed to classify pea root images into three classes based on ARR severity scores, namely, resistant, intermediate, and susceptible. The dataset consisted of 1581 pea root images with a skewed class distribution; hence, three effective data-balancing techniques were identified to address the prevalent problem of imbalanced datasets. Random oversampling with image transformations, generative adversarial network (GAN)-based image synthesis, and a class-weighted loss function were implemented during the training process. The results indicated that the classification F1-score was 0.92 ± 0.03 when GAN-synthesized images were added, 0.91 ± 0.04 for random oversampling, and 0.88 ± 0.05 when the class-weighted loss function was implemented, all higher than the 0.83 ± 0.03 obtained when the imbalanced dataset was used without these techniques. The systematic approaches evaluated in this study can be applied to other image-based phenotyping datasets and can aid the development of deep learning models with improved performance. |
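For readers unfamiliar with these balancing strategies, the following minimal PyTorch sketch illustrates two of them: a class-weighted loss and random oversampling. It is not the authors' implementation; the class counts, transforms, and variable names are illustrative assumptions, and the GAN-based synthesis step is omitted for brevity.

# Minimal sketch of two data-balancing techniques for a skewed
# three-class (resistant / intermediate / susceptible) image dataset.
# The per-class counts below are hypothetical; the paper does not
# report the exact class split.
import torch
from torch import nn
from torch.utils.data import WeightedRandomSampler

class_counts = torch.tensor([900.0, 450.0, 231.0])  # assumed counts

# 1) Class-weighted loss: weight each class inversely to its frequency
#    so that errors on minority classes contribute more to the loss.
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=class_weights)

# 2) Random oversampling: sample minority-class images more often.
#    `labels` stands in for the per-image class indices of the
#    training set.
labels = torch.cat(
    [torch.full((int(n),), i) for i, n in enumerate(class_counts)]
)
sample_weights = class_weights[labels]
sampler = WeightedRandomSampler(
    sample_weights, num_samples=len(labels), replacement=True
)
# The sampler would then be passed to a DataLoader, typically combined
# with random image transformations (e.g., flips, rotations) so that
# repeated minority-class samples are not exact duplicates:
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)

In practice, the oversampling route is usually paired with augmentation transforms as noted in the comments, whereas the weighted loss leaves the data pipeline unchanged and only rescales gradient contributions, which is why the two techniques can be evaluated independently as in the study.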