Description: |
Computer vision (CV) applications are becoming a crucial factor in the growth of developed economies around the world. Their widespread use has created a growing demand for accurate models. The subfields of CV therefore focus on improving existing models and developing new methods and algorithms to meet the demands of different sectors. At the same time, research on ensemble learning provides effective tools for increasing model accuracy. Nevertheless, there is a significant gap in research that exploits data representation and model features together. This led us to develop KeepNMax, an ensemble of image channels and epochs that keeps the top N maximum prediction probabilities at the final step. Using KeepNMax, the ensemble error was reduced and the amount of data knowledge captured by the ensemble model was increased. Models were trained on nine datasets. For each dataset with three channels, the images were split into three separate channels and each channel was trained separately using the same model architecture. In addition, the datasets were trained without channel splitting, again using the same model architecture. After training, selected epochs were ensembled with the best epoch of the training run. Two different model architectures were used to check the model dependency of the proposed method, and remarkable results were achieved in both cases. The method was proposed for deep-learning classification models. Despite its simplicity, the proposed method improved the results of the CNN and ConvMixer models on the datasets used. Classic training, bootstrap aggregation, and random splitting were used as baseline methods. For most datasets, significant improvements were obtained with KeepNMax. The success of the method is explained by the unique true prediction ($UTP$) scope of each model: by ensembling the models, the prediction scope is enlarged, allowing the ensemble to represent broader knowledge about the datasets than a single model.
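
  The description does not give the exact combination rule, so the Python sketch below only illustrates one plausible reading of the final step: each ensemble member (a per-channel model or a selected epoch) contributes a class-probability vector, only its N largest entries are kept, and the filtered vectors are summed before taking the argmax. The helper names keep_n_max and ensemble_predict and the summation rule are assumptions made for illustration, not the authors' implementation.

    import numpy as np

    def keep_n_max(probs, n):
        # Keep only the n largest class probabilities per sample; zero out the rest.
        # probs: array of shape (num_samples, num_classes).
        # NOTE: illustrative assumption about the "top N maximum probabilities" step.
        kept = np.zeros_like(probs)
        top_idx = np.argsort(probs, axis=1)[:, -n:]          # indices of the n largest entries per row
        rows = np.arange(probs.shape[0])[:, None]
        kept[rows, top_idx] = probs[rows, top_idx]
        return kept

    def ensemble_predict(member_probs, n):
        # Sum the top-n filtered probability vectors of all members, then pick the argmax class.
        combined = sum(keep_n_max(p, n) for p in member_probs)
        return combined.argmax(axis=1)

    # Toy usage: three members (e.g., per-channel models or selected epochs), 4 samples, 5 classes.
    rng = np.random.default_rng(0)
    members = [rng.dirichlet(np.ones(5), size=4) for _ in range(3)]
    print(ensemble_predict(members, n=2))

  Under these assumptions, keeping only the N largest probabilities suppresses each member's low-confidence noise before the votes are combined, which is one way the ensemble's prediction scope could be enlarged without changing the individual models.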