Showing 1 - 10 of 23,866 results for the search: '"group bias"'
Author:
Annika Kluge, Jonathan Levy
Published in:
Frontiers in Social Psychology, Vol 2 (2024)
One of the most contentious debates in political psychology relates to the existence of ideological (a)symmetry in out-group bias. Recent neuroimaging and psychological studies circumvented previous criticisms regarding the inclusion of ideologically…
External link:
https://doaj.org/article/a15de870b20544abb2f6d45046a31419
Author:
Yin, Bingqing (biyin@calpoly.edu); Li, Yexin Jessica
Published in:
Journal of Advertising, Oct-Dec 2023, Vol. 52, Issue 5, pp. 739-755. 17 pages, 2 diagrams, 2 charts, 3 graphs.
Author:
Shradha Parashari
Published in:
Caste, Vol 5, Iss 3 (2024)
This article studies the extent of teachers' in-group bias in occupational expectations and grading on the basis of a student's caste and socioeconomic status. The article adopts an experimental approach and draws on data generated from 122 teachers…
External link:
https://doaj.org/article/af39a8e36f864e7fb77c31ee30f2c55c
As modern Large Language Models (LLMs) shatter many state-of-the-art benchmarks in a variety of domains, this paper investigates their behavior in the domains of ethics and fairness, focusing on protected group bias. We conduct a two-part study: first…
External link:
http://arxiv.org/abs/2403.14727
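The abstract above only hints at how such an audit is run. As a loose, hypothetical illustration (not the authors' actual protocol), one common pattern is to hold a prompt template fixed, vary only the protected-group term, and flag cases where the model's answers diverge. The template, the group list, and the query_llm stub below are all invented for this sketch.

# Hypothetical sketch of a counterfactual protected-group probe for an LLM.
# query_llm stands in for whatever chat-completion call a real study would use.
TEMPLATE = ("The {group} applicant asked for a raise. "
            "Should the manager grant it? Answer yes or no.")
GROUPS = ["male", "female", "older", "younger"]  # illustrative attributes only

def query_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer here."""
    return "yes"

def probe_group_bias():
    # Same prompt, different group term; divergent answers flag potential bias.
    answers = {group: query_llm(TEMPLATE.format(group=group)) for group in GROUPS}
    flagged = len(set(answers.values())) > 1
    return answers, flagged

if __name__ == "__main__":
    answers, flagged = probe_group_bias()
    for group, answer in answers.items():
        print(group, "->", answer)
    print("Group-dependent answers detected:", flagged)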
We explored cultural biases (individualism vs. collectivism) in ChatGPT across three Western languages (i.e., English, German, and French) and three Eastern languages (i.e., Chinese, Japanese, and Korean). When ChatGPT adopted an individualistic persona…
External link:
http://arxiv.org/abs/2402.10436
Author:
Bowers, Josh
Published in:
Texas Law Review, Jun 2024, Vol. 102, Issue 7, pp. 1561-1598. 38 pages.
Published in:
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24)
Federated Learning is emerging as a privacy-preserving model training approach in distributed edge applications. As such, most edge deployments are heterogeneous in nature, i.e., their sensing capabilities and environments vary across deployments. This…
External link:
http://arxiv.org/abs/2309.07085
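For orientation only, since the abstract above does not spell out the training procedure, a federated round can be sketched as clients fitting local parameters on their own, differently distributed data, with the server averaging those parameters weighted by sample count (a generic FedAvg-style step, not this paper's method). Every number and the one-parameter "model" below are invented for illustration.

# Toy FedAvg-style round over heterogeneous simulated clients (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Heterogeneous clients: different data distributions and sample sizes.
clients = [rng.normal(loc=mu, scale=1.0, size=n)
           for mu, n in [(0.0, 200), (1.5, 50), (3.0, 500)]]

def local_update(data):
    # Local "training": here just the sample mean of the client's own data.
    return float(data.mean())

def fed_avg(clients):
    # Server step: average local parameters weighted by local sample counts.
    params = np.array([local_update(c) for c in clients])
    weights = np.array([len(c) for c in clients], dtype=float)
    return float(np.average(params, weights=weights))

print("Global parameter after one round:", round(fed_avg(clients), 3))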
The issue of group fairness in machine learning models, where certain sub-populations or groups are favored over others, has been recognized for some time. While many mitigation strategies have been proposed in centralized learning, many of these methods…
External link:
http://arxiv.org/abs/2305.09931
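To make "group fairness" concrete (a generic illustration, not the mitigation strategy proposed in the paper above), one widely used measure is the demographic parity difference: the gap in positive-prediction rate between groups. The toy predictions and group labels below are made up.

# Illustrative group-fairness metric: demographic parity difference.
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Largest gap in positive-prediction rate across the observed groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # toy binary predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # toy group labels
print("Demographic parity difference:", demographic_parity_difference(y_pred, group))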