Showing 1 - 10 of 201 results for search: '"Thomas, Garrett"'
Author:
Thomas, Garrett, Cheng, Ching-An, Loynd, Ricky, Frujeri, Felipe Vieira, Vineet, Vibhav, Jalobeanu, Mihai, Kolobov, Andrey
A rich representation is key to general robotic manipulation, but existing approaches to representation learning require large amounts of multimodal demonstrations. In this work we propose PLEX, a transformer-based architecture that learns from a sma…
External link:
http://arxiv.org/abs/2303.08789
Safe reinforcement learning is a promising path toward applying reinforcement learning algorithms to real-world problems, where suboptimal behaviors may lead to actual negative consequences. In this work, we focus on the setting where unsafe states c…
External link:
http://arxiv.org/abs/2202.07789
Author:
Mehdi Sadighi, PhD, Danielle Kara, PhD, Dingheng Mai, Shi Chen, BSc, Thomas Garrett, BSc, Christopher Nguyen, PhD, Deborah Kwon, MD, FSCMR
Published in:
Journal of Cardiovascular Magnetic Resonance, Vol 26, Iss , Pp 100177- (2024)
External link:
https://doaj.org/article/2e4e03036c324d60959b9e16f25ca4d3
Author:
Shi Chen, BSc, Danielle Kara, PhD, Thomas Garrett, BSc, Deborah Kwon, MD, FSCMR, Christopher Nguyen, PhD, FSCMR
Published in:
Journal of Cardiovascular Magnetic Resonance, Vol 26, Iss , Pp 100255- (2024)
External link:
https://doaj.org/article/96b6b498b0b64222a8b165b0898cb5b7
Author:
Danielle Kara, PhD, Yuchi Liu, PhD, Shi Chen, BSc, Thomas Garrett, BSc, Deborah Kwon, MD, FSCMR, Christopher Nguyen, PhD, FSCMR
Published in:
Journal of Cardiovascular Magnetic Resonance, Vol 26, Iss , Pp 100257- (2024)
External link:
https://doaj.org/article/53d506f21e6d4982916a5921a6ddac0c
Author:
Animesh Tandon, MD, MSc, Thomas Garrett, BSc, Danielle Kara, PhD, Shi Chen, BSc, Christopher Nguyen, PhD, FSCMR
Published in:
Journal of Cardiovascular Magnetic Resonance, Vol 26, Iss , Pp 100546- (2024)
External link:
https://doaj.org/article/3fe07646f1b54f09ac6e4389e1bf3887
Meta-reinforcement learning (meta-RL) aims to learn from multiple training tasks the ability to adapt efficiently to unseen test tasks. Despite the success, existing meta-RL algorithms are known to be sensitive to the task distribution shift. When th…
External link:
http://arxiv.org/abs/2006.08875
Author:
Yu, Tianhe, Thomas, Garrett, Yu, Lantao, Ermon, Stefano, Zou, James, Levine, Sergey, Finn, Chelsea, Ma, Tengyu
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dang…
External link:
http://arxiv.org/abs/2005.13239
The aim of multi-task reinforcement learning is two-fold: (1) efficiently learn by training against multiple tasks and (2) quickly adapt, using limited samples, to a variety of new tasks. In this work, the tasks correspond to reward functions for env…
External link:
http://arxiv.org/abs/1907.04964
Author:
Thomas, Garrett
Formal methods are valuable design validation techniques which ensure the correctness of hardware and software design. Recently, the formal methods technique of model checking using temporal logics has shown great promise in the field of control and tas…
External link:
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213390