Showing 1 - 10 of 23 results for the search: '"Cerrato, Mattia"'
Fair Representation Learning (FRL) is a broad set of techniques, mostly based on neural networks, that seeks to learn new representations of data in which sensitive or undesired information has been removed. Methodologically, FRL was pioneered by Ric… (an illustrative sketch follows this entry)
External link:
http://arxiv.org/abs/2407.03834
Published in:
AAAI, vol. 38, no. 10, pp. 11766-11774, Mar. 2024
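The abstract above is cut off by the listing. Purely as a rough illustration of the kind of technique FRL work typically describes, and not necessarily the method of this particular paper, the sketch below trains an encoder adversarially: a task head predicts the label from the learned representation, while an adversary tries to predict the sensitive attribute through a gradient reversal layer, which pushes the encoder to strip that information. All class and variable names are illustrative.

```python
# Hedged sketch of adversarial fair representation learning (illustrative only).
# An encoder produces a representation z; a task head predicts the label y from z;
# an adversary predicts the sensitive attribute s from z through a gradient
# reversal layer, so minimizing the joint loss removes information about s from z.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lamb * grad_output, None

class FairEncoderModel(nn.Module):
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.task_head = nn.Linear(n_hidden, 1)   # predicts the target y
        self.adversary = nn.Linear(n_hidden, 1)   # predicts the sensitive attribute s

    def forward(self, x, lamb=1.0):
        z = self.encoder(x)
        y_logit = self.task_head(z)
        s_logit = self.adversary(GradReverse.apply(z, lamb))
        return y_logit, s_logit

# One training step on random toy data.
model = FairEncoderModel(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()
s = torch.randint(0, 2, (64, 1)).float()
y_logit, s_logit = model(x)
loss = bce(y_logit, y) + bce(s_logit, s)
opt.zero_grad()
loss.backward()
opt.step()
```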
Peer learning is a novel high-level reinforcement learning framework for agents learning in groups. While standard reinforcement learning trains an individual agent in trial-and-error fashion, all on its own, peer learning addresses a related setting… (a toy sketch follows this entry)
External link:
http://arxiv.org/abs/2312.09950
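The entry above is also truncated. Purely as a toy illustration of agents learning in a group, and emphatically not the peer learning algorithm of the paper itself, the sketch below runs several epsilon-greedy bandit agents that occasionally act on a peer's greedy recommendation instead of their own estimates.

```python
# Toy, hypothetical illustration of group learning (NOT the paper's algorithm):
# several epsilon-greedy bandit agents learn in parallel and sometimes act on a
# randomly chosen peer's current best action instead of their own estimate.
import random

N_ARMS, N_AGENTS, STEPS = 5, 3, 2000
true_means = [random.random() for _ in range(N_ARMS)]

class Agent:
    def __init__(self):
        self.values = [0.0] * N_ARMS   # running reward estimate per arm
        self.counts = [0] * N_ARMS

    def greedy_action(self):
        return max(range(N_ARMS), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

agents = [Agent() for _ in range(N_AGENTS)]
for _ in range(STEPS):
    for i, agent in enumerate(agents):
        if random.random() < 0.1:                   # explore
            arm = random.randrange(N_ARMS)
        elif random.random() < 0.2:                 # follow a peer's advice
            peer = random.choice([a for j, a in enumerate(agents) if j != i])
            arm = peer.greedy_action()
        else:                                       # exploit own estimates
            arm = agent.greedy_action()
        agent.update(arm, random.gauss(true_means[arm], 0.1))

print([round(max(a.values), 2) for a in agents], "best arm mean:", round(max(true_means), 2))
```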
The paper surveys automated scientific discovery, from equation discovery and symbolic regression to autonomous discovery systems and agents. It discusses the individual approaches from a "big picture" perspective and in context, but also discusses o…
External link:
http://arxiv.org/abs/2305.02251
Representation learning algorithms offer the opportunity to learn invariant representations of the input data with regard to nuisance factors. Many authors have leveraged such strategies to learn fair representations, i.e., vectors where information…
External link:
http://arxiv.org/abs/2208.02656
Author:
Cerrato, Mattia, Coronel, Alesia Vallenas, Köppel, Marius, Segner, Alexander, Esposito, Roberto, Kramer, Stefan
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information. Various representation deb…
External link:
http://arxiv.org/abs/2202.03078
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information. Various "representation de…
External link:
http://arxiv.org/abs/2201.06343
The issue of fairness in machine learning stems from the fact that historical data often displays biases against specific groups of people which have been underprivileged in the recent past, or still are. In this context, one of the possible approach…
External link:
http://arxiv.org/abs/2201.06336
In this paper we propose a variant of the linear least squares model allowing practitioners to partition the input features into groups of variables that they require to contribute similarly to the final result. The output allows practitioners to ass… (a minimal sketch follows this entry)
External link:
http://arxiv.org/abs/2006.16202
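The abstract is truncated before the model is defined. One simple way to read "groups of variables that they require to contribute similarly" is to force the features in each group to share a single coefficient; the sketch below implements that crude reading via ordinary least squares on per-group feature sums. This is an assumption made for illustration, not the paper's actual formulation.

```python
# Hedged sketch, not the paper's model: make all features in a group share one
# coefficient, which reduces to ordinary least squares on per-group column sums.
import numpy as np

def grouped_least_squares(X, y, groups):
    """X: (n, d) design matrix; y: (n,) targets;
    groups: list of lists of column indices partitioning 0..d-1."""
    # Collapse each group of columns into a single column by summing it.
    Z = np.column_stack([X[:, idx].sum(axis=1) for idx in groups])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    # Expand the shared per-group coefficients back to per-feature coefficients.
    coef = np.zeros(X.shape[1])
    for b, idx in zip(beta, groups):
        coef[idx] = b
    return coef

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, 1.0, -2.0, -2.0]) + 0.05 * rng.normal(size=200)
print(grouped_least_squares(X, y, groups=[[0, 1], [2, 3]]))  # roughly [1, 1, -2, -2]
```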
Academic article (not visible to unauthenticated users; login is required to view this result).
Academic article (not visible to unauthenticated users; login is required to view this result).