Showing 1 - 6 of 6
for search: '"Ghodsi, Siamak"'
Conventional fair graph clustering methods face two primary challenges: i) They prioritize balanced clusters at the expense of cluster cohesion by imposing rigid constraints, ii) Existing methods of both individual and group-level fairness in graph p…
External link:
http://arxiv.org/abs/2402.10756
Author:
Zhao, Xuan, Fabbrizzi, Simone, Lobo, Paula Reyero, Ghodsi, Siamak, Broelemann, Klaus, Staab, Steffen, Kasneci, Gjergji
The unequal representation of different groups in a sample population can lead to discrimination of minority groups when machine learning models make automated decisions. To address these issues, fairness-aware machine learning jointly optimizes two…
External link:
http://arxiv.org/abs/2311.12684
Author:
Ghodsi, Siamak, Ntoutsi, Eirini
Group imbalance, resulting from inadequate or unrepresentative data collection methods, is a primary cause of representation bias in datasets. Representation bias can exist with respect to different groups of one or more protected attributes and migh…
External link:
http://arxiv.org/abs/2306.01699
With the ever-growing involvement of data-driven AI-based decision making technologies in our daily social lives, the fairness of these systems is becoming a crucial phenomenon. However, an important and often challenging aspect in utilizing such sys…
External link:
http://arxiv.org/abs/2206.11436
Academic article
Published in:
Information Sciences, October 2019, 502:125-145