Author:
Lamia Alam, Nasser Kehtarnavaz
Language:
English
Year of publication:
2024
Source:
Sensors, Vol 24, Iss 3, p 738 (2024)
Document type:
article
ISSN:
1424-8220
DOI:
10.3390/s24030738
Description:
This paper addresses the problem of recognizing defective epoxy drop images for vision-based die attachment inspection in integrated circuit (IC) manufacturing using deep neural networks. Two supervised and two unsupervised recognition models are considered. The supervised models examined are an autoencoder (AE) network together with a multi-layer perceptron (MLP) network, and a VGG16 network; the unsupervised models examined are an AE network together with k-means clustering, and a VGG16 network together with k-means clustering. Since very few defective epoxy drop images are available on an actual IC production line in practice, the emphasis in this paper is placed on the impact of data augmentation on the recognition outcome. Data augmentation is achieved by generating synthesized defective epoxy drop images via our previously developed enhanced-loss-function CycleGAN generative network. The experimental results indicate that with data augmentation, both the supervised and unsupervised VGG16 models achieve perfect or near-perfect accuracy in recognizing defective epoxy drop images for the dataset examined. More specifically, due to the data augmentation, the recognition accuracy is improved by 47% and 1% for the supervised AE+MLP and VGG16 models, respectively, and by 37% and 15% for the unsupervised AE+k-means and VGG16+k-means models, respectively.
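The following is a minimal illustrative sketch, not the authors' implementation, of one of the unsupervised pipelines the abstract describes: a pretrained VGG16 network used as a fixed feature extractor followed by k-means clustering into normal versus defective epoxy drop images. The file names, image size, and cluster count are assumptions for illustration only; the paper's CycleGAN-based augmentation and training details are not reproduced here.

```python
# Sketch of a VGG16 + k-means recognition pipeline (assumed, not the paper's code).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.cluster import KMeans

# Pretrained VGG16 with the classification head removed; global average pooling
# turns each image into a fixed-length feature vector.
feature_extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths, target_size=(224, 224)):
    """Load epoxy drop images from disk and map them to VGG16 feature vectors."""
    batch = []
    for p in paths:
        img = image.load_img(p, target_size=target_size)
        batch.append(image.img_to_array(img))
    batch = preprocess_input(np.stack(batch))
    return feature_extractor.predict(batch, verbose=0)

# Hypothetical file list; in the paper's setting this would include both real
# and CycleGAN-synthesized defective epoxy drop images.
paths = ["epoxy_000.png", "epoxy_001.png"]
features = extract_features(paths)

# Two clusters: normal vs. defective epoxy drops.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)
```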
Database:
Directory of Open Access Journals