Showing 1 - 10 of 11,823 for search: '"Anderson James A"'
Representation learning is a powerful tool that enables learning over large multitudes of agents or domains by enforcing that all agents operate on a shared set of learned features. However, many robotics or controls applications that would benefit …
External link:
http://arxiv.org/abs/2407.05781
Published in:
Proceedings of the 41st International Conference on Machine Learning, 2024
We explore a Federated Reinforcement Learning (FRL) problem where $N$ agents collaboratively learn a common policy without sharing their trajectory data. To date, existing FRL work has primarily focused on agents operating in the same or ``similar'' …
External link:
http://arxiv.org/abs/2405.19499
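The abstract above describes agents learning a common policy while sharing only model parameters, never trajectories. A minimal sketch of the federated-averaging round this setup typically relies on (the gradients and dimensions here are illustrative stand-ins, not taken from the paper):

```python
import numpy as np

def fedavg_round(theta, local_grads, lr=0.1):
    """One communication round: each agent updates the shared policy parameters
    with its own gradient estimate; the server averages the resulting models.
    Only parameters cross the network, so trajectory data stays local."""
    local_thetas = [theta - lr * g for g in local_grads]  # per-agent local step
    return np.mean(local_thetas, axis=0)                  # server-side averaging

rng = np.random.default_rng(0)
theta = np.zeros(4)                              # shared policy parameters
grads = [rng.normal(size=4) for _ in range(5)]   # N = 5 agents' local estimates
theta = fedavg_round(theta, grads)
```

With a common starting point, one round is equivalent to stepping along the average of the agents' gradient estimates.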
Author:
Zhan, Donglin, Anderson, James
Meta-learning methods typically learn tasks under the assumption that all tasks are equally important. However, this assumption is often not valid. In real-world applications, tasks can vary both in their importance during different training stages …
External link:
http://arxiv.org/abs/2405.07083
Electricity markets are experiencing a rapid increase in energy storage unit participation. Unlike conventional generation resources, quantifying the competitive operation and identifying if a storage unit is exercising market power is challenging, …
External link:
http://arxiv.org/abs/2405.01442
We address the problem of designing an LQR controller in a distributed setting, where M similar but not identical systems share their locally computed policy gradient (PG) estimates with a server that aggregates the estimates and computes a controller …
External link:
http://arxiv.org/abs/2404.09061
A graph $G$ is $k$-locally sparse if for each vertex $v \in V(G)$, the subgraph induced by its neighborhood contains at most $k$ edges. Alon, Krivelevich, and Sudakov showed that for $f > 0$ if a graph $G$ of maximum degree $\Delta$ is $\Delta^2/f$-locally sparse …
External link:
http://arxiv.org/abs/2402.19271
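The definition in the abstract above is directly checkable: a graph is $k$-locally sparse if every vertex's neighborhood induces at most $k$ edges. A small sketch of that check (the adjacency-dictionary representation is a hypothetical choice, not from the paper):

```python
from itertools import combinations

def is_k_locally_sparse(adj, k):
    """adj maps each vertex to the set of its neighbors.
    For every vertex, count edges induced by its neighborhood."""
    for v, nbrs in adj.items():
        induced = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
        if induced > k:
            return False
    return True

# A triangle: each vertex's two neighbors are adjacent, so every
# neighborhood induces exactly one edge.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_k_locally_sparse(triangle, 1))  # True
print(is_k_locally_sparse(triangle, 0))  # False
```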
Federated reinforcement learning (FRL) has emerged as a promising paradigm for reducing the sample complexity of reinforcement learning tasks by exploiting information from different agents. However, when each agent interacts with a potentially different …
External link:
http://arxiv.org/abs/2401.15273
Author:
Anderson, James, Chau, Herman, Cho, Eun-Kyung, Crawford, Nicholas, Hartke, Stephen G., Heath, Emily, Henderschedt, Owen, Kwon, Hyemin, Zhang, Zhiyuan
We introduce a new tool useful for greedy coloring, which we call the forb-flex method, and apply it to odd coloring and proper conflict-free coloring of planar graphs. The odd chromatic number, denoted $\chi_{\mathsf{o}}(G)$, is the smallest number …
External link:
http://arxiv.org/abs/2401.14590
We investigate the problem of learning linear quadratic regulators (LQR) in a multi-task, heterogeneous, and model-free setting. We characterize the stability and personalization guarantees of a policy gradient-based (PG) model-agnostic meta-learning …
External link:
http://arxiv.org/abs/2401.14534
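Model-free PG methods for LQR, like those in the two LQR abstracts above, optimize the infinite-horizon quadratic cost of a stabilizing gain $K$. A hedged sketch of that cost for $x_{t+1} = Ax_t + Bu_t$, $u_t = -Kx_t$, where the value matrix $P$ solves the closed-loop Lyapunov equation $P = Q + K^\top R K + (A - BK)^\top P (A - BK)$; the matrices below are illustrative, not from either paper:

```python
import numpy as np

def lqr_cost(A, B, K, Q, R, x0, iters=500):
    """Infinite-horizon LQR cost x0^T P x0 for a stabilizing gain K,
    via fixed-point iteration on the closed-loop Lyapunov equation."""
    A_cl = A - B @ K
    Qk = Q + K.T @ R @ K
    P = np.zeros_like(Q)
    for _ in range(iters):            # converges when rho(A_cl) < 1
        P = Qk + A_cl.T @ P @ A_cl
    return float(x0 @ P @ x0)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
K = np.array([[5.0, 5.0]])               # an illustrative stabilizing gain
Q, R = np.eye(2), np.eye(1)
x0 = np.array([1.0, 0.0])
cost = lqr_cost(A, B, K, Q, R, x0)
```

Zeroth-order PG methods estimate the gradient of exactly this cost in $K$ from sampled rollouts, without ever forming $A$ and $B$.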