Images, features, or feature distributions? A comparison of inputs for training convolutional neural networks to classify lentil and field pea milling fractions
Author: | Linda S. McDonald, Sahand Assadzadeh, Joe Panozzo |
---|---|
Year of publication: | 2021 |
Subject: | Artificial neural network, machine vision, multispectral image, pattern recognition, convolutional neural network, feature (computer vision), standard deviation, curse of dimensionality, artificial intelligence, agronomy and crop science, food science, control and systems engineering |
Source: | Biosystems Engineering. 208:16-27 |
ISSN: | 1537-5110 |
DOI: | 10.1016/j.biosystemseng.2021.05.011 |
Description: | Lentil and field pea are each commonly marketed as split and dehulled products. For plant-breeding programmes, genetic improvement in split-yield is a targeted trait. However, the standard laboratory method for assessing split-yield requires milled grain to be manually sorted into split and dehulled fractions. This process is time-consuming and limits the number of germplasm lines that can be evaluated. A machine vision approach, based on artificial neural networks, was proposed to classify split and dehulled fractions from multispectral images of grains. Three neural networks were trained on different inputs derived from the images: (1) a convolutional network trained on the full images, (2) a convolutional network trained on distributions of image-features, and (3) a fully connected network trained on mean and standard deviation values of image-features. The accuracy and training times were compared to determine the trade-offs between training networks with smaller inputs for computational efficiency and full-image inputs for accuracy. The networks with reduced input-data dimensionality completed training and prediction in half the time of the image-based network. The convolutional network based on the distributions of image-features achieved a validation accuracy of 88.1%. On average, this was 1.6% greater than the image-based convolutional network and 4.6% greater than the fully connected network based on simple (mean and standard deviation) features. Feature-distributions extracted from the multispectral images captured the diversity of image data required to differentiate milling categories, leading to gains in computational efficiency over the image-based network without loss of network generality. |
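The three input representations compared in the abstract (full images, per-band feature distributions, and simple mean/standard-deviation summaries) can be sketched with NumPy. This is an illustrative sketch only: the band count, image size, and histogram bin count below are assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multispectral grain image: 7 bands of 64x64 pixels
# (band count and resolution are illustrative assumptions).
image = rng.random((7, 64, 64))

# Input (1): the full image, fed directly to a convolutional network.
full_input = image                                   # shape (7, 64, 64)

# Input (2): per-band feature distributions, i.e. normalised intensity
# histograms (bin count is an assumption). Each row sums to 1.
n_bins = 32
dist_input = np.stack([
    np.histogram(band, bins=n_bins, range=(0.0, 1.0))[0] / band.size
    for band in image
])                                                   # shape (7, 32)

# Input (3): simple summary features, the per-band mean and standard
# deviation, for the fully connected network.
simple_input = np.stack(
    [image.mean(axis=(1, 2)), image.std(axis=(1, 2))], axis=1
)                                                    # shape (7, 2)

print(full_input.shape, dist_input.shape, simple_input.shape)
```

The dimensionality reduction is substantial: the histogram input has 7 × 32 values and the summary input 7 × 2, versus 7 × 64 × 64 for the full image, which is the source of the reported halving of training and prediction time.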
Database: | OpenAIRE |
External link: |