Showing 1 - 10 of 137 for search: '"Gordon, Geoffrey J."'
Recent advances in reinforcement learning (RL) have predominantly leveraged neural network-based policies for decision-making, yet these models often lack interpretability, posing challenges for stakeholder comprehension and trust. Concept bottleneck…
External link:
http://arxiv.org/abs/2407.15786
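A minimal sketch of a concept-bottleneck policy of the kind this entry alludes to, not the paper's actual model: the only path from observation to action runs through a small vector of predicted concepts, so each action logit is a linear function of interpretable quantities. All layer sizes and names below are illustrative.

```python
import torch
import torch.nn as nn

class ConceptBottleneckPolicy(nn.Module):
    """Illustrative concept-bottleneck policy: observation -> concepts -> action.

    The only path from observation to action runs through `concepts`,
    so each action logit is a linear function of interpretable concepts.
    """
    def __init__(self, obs_dim: int, n_concepts: int, n_actions: int):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),  # concept activations in [0, 1]
        )
        self.action_head = nn.Linear(n_concepts, n_actions)

    def forward(self, obs: torch.Tensor):
        concepts = self.concept_net(obs)      # interpretable bottleneck
        logits = self.action_head(concepts)   # actions depend only on concepts
        return logits, concepts

policy = ConceptBottleneckPolicy(obs_dim=8, n_concepts=4, n_actions=3)
logits, concepts = policy(torch.randn(2, 8))
print(logits.shape, concepts.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```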
Authors:
Kaul, Shiva, Gordon, Geoffrey J.
[See paper for full abstract] Meta-analysis is a crucial tool for answering scientific questions. It is usually conducted on a relatively small amount of "trusted" data -- ideally from randomized, controlled trials -- which allow causal effects to…
External link:
http://arxiv.org/abs/2407.09387
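For orientation, a minimal sketch of textbook fixed-effect (inverse-variance) meta-analysis, the standard baseline for the setting this entry describes; it is not the paper's method, and the effect sizes and standard errors below are made-up numbers.

```python
import numpy as np

# Per-study effect estimates and their standard errors (illustrative values).
effects = np.array([0.30, 0.10, 0.25, 0.40])
ses     = np.array([0.10, 0.15, 0.12, 0.20])

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / se^2.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```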
Successor-style representations have many advantages for reinforcement learning: for example, they can help an agent generalize from past experience to new goals, and they have been proposed as explanations of behavioral and neural data from human and…
External link:
http://arxiv.org/abs/2103.02650
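As a reading aid, a minimal sketch of the tabular successor representation this entry refers to: under a fixed policy with state-transition matrix P and discount gamma, M = sum_t gamma^t P^t = (I - gamma P)^{-1}, so the value of any new reward vector r is just M @ r. The transition matrix below is made up.

```python
import numpy as np

gamma = 0.9
# State-to-state transition matrix under a fixed policy (rows sum to 1; made up).
P = np.array([
    [0.1, 0.9, 0.0],
    [0.0, 0.2, 0.8],
    [0.5, 0.0, 0.5],
])

# Successor representation: expected discounted future state occupancies.
# M = sum_{t>=0} gamma^t P^t = (I - gamma * P)^{-1}
M = np.linalg.inv(np.eye(3) - gamma * P)

# Key property: values for ANY reward vector come from one matrix product,
# which is what lets an agent generalize past experience to new goals.
r = np.array([0.0, 0.0, 1.0])   # new goal: reward only in state 2
v = M @ r                        # state values under the fixed policy
print(np.round(v, 3))
```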
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity in prediction accuracy between different demographic subgroups has called for fundamental understanding…
External link:
http://arxiv.org/abs/2102.12013
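For concreteness, a small sketch of the quantity at issue here, the gap in prediction accuracy between demographic subgroups; the arrays are illustrative, not the paper's data.

```python
import numpy as np

# Illustrative labels, predictions, and a binary group attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def accuracy_disparity(y_true, y_pred, group):
    """Absolute gap in prediction accuracy between the two subgroups."""
    accs = [np.mean(y_pred[group == g] == y_true[group == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1]), accs

gap, (acc0, acc1) = accuracy_disparity(y_true, y_pred, group)
print(f"group-0 acc={acc0:.2f}, group-1 acc={acc1:.2f}, disparity={gap:.2f}")
```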
Authors:
Zhao, Han, Dan, Chen, Aragam, Bryon, Jaakkola, Tommi S., Gordon, Geoffrey J., Ravikumar, Pradeep
A wide range of machine learning applications such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization among others, involve learning invariant representations of the data that aim to achieve two competing goals…
External link:
http://arxiv.org/abs/2012.10713
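One common way to instantiate these two competing goals (keep the representation predictive of the task while making a protected or domain attribute unrecoverable from it) is adversarial training with a gradient-reversal layer, as in domain-adversarial networks. The sketch below is that generic construction, not necessarily this paper's; all dimensions and data are made up.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder   = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
task_head = nn.Linear(16, 2)   # predicts the target label from z
adv_head  = nn.Linear(16, 2)   # adversary tries to predict the attribute from z
opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
                        *adv_head.parameters()], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))   # task label (made-up data)
a = torch.randint(0, 2, (32,))   # protected/domain attribute (made-up data)

z = encoder(x)
task_loss = loss_fn(task_head(z), y)
# Gradient reversal: the adversary minimizes its own loss, but the reversed
# gradient pushes the encoder to make `a` unpredictable from z.
adv_loss = loss_fn(adv_head(GradReverse.apply(z, 1.0)), a)
(task_loss + adv_loss).backward()
opt.step()
```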
Structured prediction is often approached by training a locally normalized model with maximum likelihood and decoding approximately with beam search. This approach leads to mismatches as, during training, the model is not exposed to its mistakes and…
External link:
http://arxiv.org/abs/2010.04980
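For reference, a minimal beam-search decoder of the kind this entry refers to, written against an arbitrary locally normalized next-token scorer; `log_probs` is a stand-in for the model, and the toy vocabulary is illustrative.

```python
import math

def beam_search(log_probs, max_len, beam_size, eos):
    """Generic beam search over a locally normalized model.

    `log_probs(prefix)` returns {token: log P(token | prefix)}.
    Keeps the `beam_size` highest-scoring prefixes at each step.
    """
    beams = [((), 0.0)]                        # (prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:   # finished hypotheses pass through
                candidates.append((prefix, score))
                continue
            for tok, lp in log_probs(prefix).items():
                candidates.append((prefix + (tok,), score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

# Toy uniform model over a 3-token vocabulary (illustrative only).
vocab = ["a", "b", "</s>"]
uniform = lambda prefix: {t: math.log(1.0 / len(vocab)) for t in vocab}
print(beam_search(uniform, max_len=3, beam_size=2, eos="</s>"))
```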
We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm…
External link:
http://arxiv.org/abs/1910.07162
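To make "two notions of disparity" concrete, here is a sketch of two standard fairness gaps, demographic parity and equalized odds, as they are commonly measured; the paper's exact pair of notions may differ, and the data below are made up.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: positive-rate gap between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive and false-positive rates between groups."""
    gaps = []
    for y in (0, 1):  # FPR gap (y = 0) and TPR gap (y = 1)
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group),
      equalized_odds_gap(y_true, y_pred, group))
```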
Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis. While most previous works aim to diversify the representations, we explore the complementary direction by performing an adaptive…
External link:
http://arxiv.org/abs/1907.06288
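A tiny sketch of the decomposition this entry starts from: a feed-forward network is a learned representation followed by a linear hypothesis, so the logits are a linear map applied to the penultimate features. Shapes below are illustrative.

```python
import torch
import torch.nn as nn

# Feed-forward net = representation phi(x) followed by a linear hypothesis.
representation = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                               nn.Linear(32, 16), nn.ReLU())
linear_hypothesis = nn.Linear(16, 3)

x = torch.randn(5, 10)
features = representation(x)          # intermediate representation phi(x)
logits = linear_hypothesis(features)  # linear map on top of phi(x)
print(features.shape, logits.shape)   # torch.Size([5, 16]) torch.Size([5, 3])
```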
Authors:
Zhao, Han, Gordon, Geoffrey J.
Real-world applications of machine learning tools in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact…
External link:
http://arxiv.org/abs/1906.08386
Crowdsourced data used in machine learning services might carry sensitive information about attributes that users do not want to share. Various methods have been proposed to minimize the potential information leakage of sensitive attributes while maximizing…
External link:
http://arxiv.org/abs/1906.07902