Showing 1 - 10 of 46 for search: '"Nakao, Yuri"'
Numerous fairness metrics have been proposed and employed by artificial intelligence (AI) experts to quantitatively measure bias and define fairness in AI models. Recognizing the need to accommodate stakeholders' diverse fairness understandings, effo…
External link:
http://arxiv.org/abs/2407.11442
Author:
Nakao, Yuri
Achieving people's well-being with AI systems requires that each user is guided to a healthier lifestyle in a way that is appropriate for her or him. Although well-being has diverse definitions [calvo2014positive], leading a healthy lifestyle is…
External link:
http://arxiv.org/abs/2407.02381
Fairness is a growing concern for high-risk decision-making using Artificial Intelligence (AI) but ensuring it through purely technical means is challenging: there is no universally accepted fairness measure, fairness is context-dependent, and there…
External link:
http://arxiv.org/abs/2312.08064
Author:
Nakao, Yuri, Yokota, Takuya
Due to the opacity of machine learning technology, there is a need for explainability and fairness in the decision support systems used in public or private organizations. Although the criteria for appropriate explanations and fair decisions change d…
External link:
http://arxiv.org/abs/2308.01163
While AI technology is becoming increasingly prevalent in our daily lives, the comprehension of machine learning (ML) among non-experts remains limited. Interactive machine learning (IML) has the potential to serve as a tool for end users, but many e…
External link:
http://arxiv.org/abs/2305.05846
Author:
Nakao, Yuri, Strappelli, Lorenzo, Stumpf, Simone, Naseer, Aisha, Regoli, Daniele, Del Gamba, Giulia
Published in:
International Journal of Human-Computer Interaction, 2022
With Artificial intelligence (AI) to aid or automate decision-making advancing rapidly, a particular concern is its fairness. In order to create reliable, safe and trustworthy systems through human-centred artificial intelligence (HCAI) design, recen…
External link:
http://arxiv.org/abs/2206.00474
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in ma…
External link:
http://arxiv.org/abs/2204.10464
Author:
Kobayashi, Kenji, Nakao, Yuri
With the widespread adoption of machine learning in the real world, the impact of the discriminatory bias has attracted attention. In recent years, various methods to mitigate the bias have been proposed. However, most of them have not considered int…
External link:
http://arxiv.org/abs/2010.13494
Academic article
This result cannot be displayed to unauthenticated users; log in to view it.
Academic article
This result cannot be displayed to unauthenticated users; log in to view it.