Author:
Russell, Ingrid; Islam, Sheikh Rabiul; Eberle, William; Talbert, Douglas; Hasan, Md Golam Moula Mehedi
Source:
International Journal on Artificial Intelligence Tools; May 2024, Vol. 33, Issue 3, p1-3, 3p
Abstract:
This document serves as a preface to a special issue of the International Journal on Artificial Intelligence Tools (IJAIT). The articles in this issue build on papers presented at the 36th International Florida AI Research Society Conference in May 2023. The lead article introduces the theme of the special issue: explainable, fair, and trustworthy AI. Other articles cover topics such as improving the bounds of a neuron's Boolean function, ethical explanations and transparency in AI systems, fairness measurement and mitigation in deep learning systems, and the dynamics of human-agent collaboration in ad hoc teams. One article emphasizes the importance of positive and negative contrastive explanations in enhancing motivation and performance. Another addresses the challenges of fairness and reliability in predictive policing systems, focusing on reducing bias related to race, age, and sex; its authors propose tailoring predictive models to specific causal queries and utilizing causal structure rather than relying solely on pre-trained models. A further article explores how machine learning models can balance accuracy and understandability while ensuring fairness, finding that more complex models tend to outperform current guidelines but that fair models have lower complexity. The preface concludes by thanking the reviewers for their contributions and support. [Extracted from the article]
Database:
Complementary Index