Showing 1 - 10 of 416 for search: '"Gordon, Geoffrey"'
Recent advances in reinforcement learning (RL) have predominantly leveraged neural network-based policies for decision-making, yet these models often lack interpretability, posing challenges for stakeholder comprehension and trust. Concept bottleneck…
External link: http://arxiv.org/abs/2407.15786
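As background for the entry above, here is a minimal sketch of what a concept-bottleneck policy looks like structurally: observations are first mapped to a small vector of concept scores, and the action distribution is computed from those scores alone. The layer sizes, names, and PyTorch framing here are illustrative assumptions, not the architecture from the linked paper.

import torch
import torch.nn as nn


class ConceptBottleneckPolicy(nn.Module):
    """Observation -> concept scores -> action distribution.

    The policy head sees only the concept scores, which is what makes the
    bottleneck inspectable: every action logit is a linear function of
    named, human-readable concepts.
    """

    def __init__(self, obs_dim: int, n_concepts: int, n_actions: int):
        super().__init__()
        self.obs_to_concepts = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        self.concepts_to_logits = nn.Linear(n_concepts, n_actions)

    def forward(self, obs: torch.Tensor):
        concepts = torch.sigmoid(self.obs_to_concepts(obs))  # concept scores in [0, 1]
        logits = self.concepts_to_logits(concepts)
        return concepts, torch.distributions.Categorical(logits=logits)


# Hypothetical sizes, only to show the shape of the computation.
policy = ConceptBottleneckPolicy(obs_dim=8, n_concepts=4, n_actions=3)
concepts, dist = policy(torch.randn(1, 8))
action = dist.sample()

Because the action logits depend only on the concept layer, each concept's contribution to a decision can be read off the weights of concepts_to_logits, which is the usual interpretability argument for this kind of bottleneck.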
Author: Kaul, Shiva, Gordon, Geoffrey J.
[See paper for full abstract] Meta-analysis is a crucial tool for answering scientific questions. It is usually conducted on a relatively small amount of "trusted" data -- ideally from randomized, controlled trials -- which allow causal effects to…
External link: http://arxiv.org/abs/2407.09387
Successor-style representations have many advantages for reinforcement learning: for example, they can help an agent generalize from past experience to new goals, and they have been proposed as explanations of behavioral and neural data from human an…
External link: http://arxiv.org/abs/2103.02650
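For readers unfamiliar with successor-style representations, the following is a minimal tabular sketch of the classic successor representation and its TD update, included only as background for the abstract above. The state space, step size, and random-walk data are placeholder assumptions; this is not the model proposed in the linked paper.

import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1
M = np.zeros((n_states, n_states))   # M[s, s'] ~ expected discounted future visits to s' from s
onehot = np.eye(n_states)
rng = np.random.default_rng(0)

def sr_td_update(s, s_next):
    """One TD(0) update of the successor representation after observing s -> s_next."""
    target = onehot[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])

# Learn M from a random walk on a ring of states.
s = 0
for _ in range(5000):
    s_next = (s + rng.choice([-1, 1])) % n_states
    sr_td_update(s, s_next)
    s = s_next

# Generalizing to a new goal: with M fixed, the value of any reward vector r is V = M @ r,
# so a new goal only requires specifying (or re-estimating) r, not relearning the dynamics.
r_new_goal = np.zeros(n_states)
r_new_goal[3] = 1.0
V = M @ r_new_goal

The last two lines show the property the abstract alludes to: once the successor representation is learned, generalizing to a new goal reduces to a single matrix-vector product.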
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity in prediction accuracy between different demographic subgroups has called for fundamental understanding…
External link: http://arxiv.org/abs/2102.12013
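The quantity at issue in the entry above is the gap in prediction accuracy across demographic subgroups. A tiny sketch of how that disparity is commonly measured, on synthetic placeholder data (the binary group coding and sample sizes are assumptions, not from the paper):

import numpy as np

rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, size=n)   # ground-truth labels (placeholder)
y_pred = rng.integers(0, 2, size=n)   # model predictions (placeholder)
group = rng.integers(0, 2, size=n)    # demographic subgroup indicator

per_group_accuracy = {
    g: float(np.mean(y_pred[group == g] == y_true[group == g])) for g in (0, 1)
}
accuracy_disparity = abs(per_group_accuracy[0] - per_group_accuracy[1])
print(per_group_accuracy, accuracy_disparity)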
Author: Zhao, Han, Dan, Chen, Aragam, Bryon, Jaakkola, Tommi S., Gordon, Geoffrey J., Ravikumar, Pradeep
A wide range of machine learning applications such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization among others, involve learning invariant representations of the data that aim to achieve two competing goals…
External link: http://arxiv.org/abs/2012.10713
Structured prediction is often approached by training a locally normalized model with maximum likelihood and decoding approximately with beam search. This approach leads to mismatches as, during training, the model is not exposed to its mistakes and…
External link: http://arxiv.org/abs/2010.04980
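The abstract above refers to the standard pipeline of training a locally normalized model (a per-step distribution p(y_t | y_<t)) with maximum likelihood and decoding approximately with beam search. As background, here is a compact beam-search sketch over a placeholder per-step model; it illustrates only the decoding side of that pipeline and is not the training method proposed in the paper.

import math
import numpy as np

VOCAB = ["a", "b", "</s>"]

def step_log_probs(prefix):
    """Placeholder locally normalized model: returns log p(y_t | prefix) over VOCAB."""
    logits = np.random.default_rng(len(prefix)).normal(size=len(VOCAB))
    return logits - math.log(np.exp(logits).sum())

def beam_search(beam_size=2, max_len=5):
    beams = [((), 0.0)]                                  # (prefix, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "</s>":
                candidates.append((prefix, score))       # finished hypotheses carry over
                continue
            for token, lp in zip(VOCAB, step_log_probs(prefix)):
                candidates.append((prefix + (token,), score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

print(beam_search())

Note that during maximum-likelihood training the model only ever conditions on gold prefixes, while at decode time it conditions on its own beam hypotheses; that train/decode mismatch is the problem the abstract points at.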
Author: Liao, Peiyuan, Zhao, Han, Xu, Keyulu, Jaakkola, Tommi, Gordon, Geoffrey, Jegelka, Stefanie, Salakhutdinov, Ruslan
While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level inf…
External link: http://arxiv.org/abs/2009.13504
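The vulnerability mentioned above stems from neighborhood aggregation: each node's embedding mixes in its neighbors' features, so node-level information can leak through the learned representations. Below is a bare-bones NumPy sketch of one such aggregation layer (mean aggregation with degree normalization); the graph, features, and weights are made up, and this is not the specific GNN or attack analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # adjacency matrix of a 4-node graph
X = rng.normal(size=(4, 3))                  # per-node input features
W = rng.normal(size=(3, 2))                  # layer weights

A_hat = A + np.eye(4)                        # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # row-normalize by degree
H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)   # one mean-aggregation layer with ReLU
print(H)  # each row now mixes information from that node's entire neighborhood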
We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm…
External link: http://arxiv.org/abs/1910.07162
Author: Negrinho, Renato, Patil, Darshan, Le, Nghia, Ferreira, Daniel, Gormley, Matthew, Gordon, Geoffrey
Neural architecture search methods are able to find high performance deep learning architectures with minimal effort from an expert. However, current systems focus on specific use-cases (e.g. convolutional image classifiers and recurrent language mod…
External link: http://arxiv.org/abs/1909.13404
Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis. While most previous works aim to diversify the representations, we explore the complementary direction by performing an adaptiv…
External link: http://arxiv.org/abs/1907.06288
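The last entry views a feed-forward network as an intermediate representation composed with a linear hypothesis. A tiny NumPy sketch making that split explicit (random weights, purely illustrative; the adaptive component the abstract hints at is not reproduced here):

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)   # representation parameters
w, b = rng.normal(size=32), 0.0                    # linear-hypothesis parameters

def phi(x):
    """Intermediate representation: a single ReLU hidden layer."""
    return np.maximum(x @ W1 + b1, 0.0)

def predict(x):
    """Linear hypothesis applied on top of the representation phi(x)."""
    return phi(x) @ w + b

x = rng.normal(size=(5, 10))
print(predict(x))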