Abstract: |
The rapid integration of deep learning-powered artificial intelligence systems into diverse applications such as healthcare, credit assessment, employment, and criminal justice has raised concerns about their fairness, particularly in how they treat different demographic groups. This study examines existing biases in deep learning models and their ethical implications. It introduces UnBias, an approach for assessing bias across different deep neural network architectures, and detects instances where bias seeps into the learning process, shifting the model's focus away from the main features. This work contributes to the advancement of equitable and trustworthy AI applications in diverse social settings, especially in healthcare. A case study on COVID-19 detection is carried out using chest X-ray datasets from several publicly accessible repositories and five gender-based models, covering both well-represented and underrepresented groups, across four deep learning architectures: ResNet50V2, DenseNet121, InceptionV3, and Xception.