Showing 1 - 10 of 89 for search: '"Zhang, Thomas"'
A driving force behind the diverse applicability of modern machine learning is the ability to extract meaningful features across many sources. However, many practical domains involve data that are non-identically distributed across sources, and …
External link:
http://arxiv.org/abs/2410.11227
Representation learning is a powerful tool that enables learning over large multitudes of agents or domains by enforcing that all agents operate on a shared set of learned features. However, many robotics or controls applications that would benefit …
External link:
http://arxiv.org/abs/2407.05781
A powerful concept behind much of the recent progress in machine learning is the extraction of common features across data from heterogeneous sources or tasks. Intuitively, using all of one's data to learn a common representation function benefits …
External link:
http://arxiv.org/abs/2308.04428
Author:
Desai, Ronak, Zhang, Thomas, Oropeza, Ricky, Felice, John J., Smith, Joseph R., Kryshchenko, Alona, Orban, Chris, Dexter, Michael L., Patnaik, Anil K.
Researchers in the field of ultra-intense laser science are beginning to embrace machine learning methods. In this study we consider three different machine learning methods -- a two-hidden layer neural network, Support Vector Regression and Gaussian … (see the sketch after this entry's link)
External link:
http://arxiv.org/abs/2307.16036
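The snippet names the three regressors but not the data or settings. The following is a minimal, hedged sketch of how such a three-way comparison could look on synthetic data with scikit-learn; the dataset, layer sizes, kernels, and metric are illustrative assumptions, not the paper's actual laser-science setup.

```python
# Hedged sketch: compare a two-hidden-layer neural network, Support Vector
# Regression, and Gaussian process regression on a synthetic regression task.
# Inputs are stand-in parameters, not the paper's experimental data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))                  # placeholder inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "two-hidden-layer NN": MLPRegressor(hidden_layer_sizes=(64, 64),
                                        max_iter=2000, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
    "Gaussian process": GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                                 alpha=1e-3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test MAE = {mean_absolute_error(y_te, model.predict(X_te)):.3f}")
```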
While $\mathcal{H}_\infty$ methods can introduce robustness against worst-case perturbations, their nominal performance under conventional stochastic disturbances is often drastically reduced. Though this fundamental tradeoff between nominal …
External link:
http://arxiv.org/abs/2305.16415
Author:
Zhang, Thomas T., Kang, Katie, Lee, Bruce D., Tomlin, Claire, Levine, Sergey, Tu, Stephen, Matni, Nikolai
We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from … (see the sketch after this entry's link)
External link:
http://arxiv.org/abs/2212.00186
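The snippet describes a two-phase setup: pre-train a shared $k$-dimensional representation from source tasks, then adapt to a target task with few samples. Below is a minimal sketch under the assumption that expert policies are linear and share a structure $u = F_i \Phi x$ with a common $\Phi$; the SVD-based recovery of $\Phi$, the dimensions, and the noise level are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of two-phase representation learning for linear imitation:
# (a) estimate a shared k-dimensional representation from many source tasks,
# (b) fit only a small task-specific head on a target task.
import numpy as np

rng = np.random.default_rng(1)
n, k, n_source, N_src, N_tgt = 10, 3, 20, 200, 15
Phi_true = rng.normal(size=(k, n))                      # shared representation

def expert_task(N):
    """One task: expert policy u = F @ Phi_true @ x, with noisy demonstrations."""
    F = rng.normal(size=(1, k))
    X = rng.normal(size=(N, n))
    U = X @ (F @ Phi_true).T + 0.01 * rng.normal(size=(N, 1))
    return F, X, U

# Phase (a): stack per-source least-squares policy estimates and keep their
# top-k right singular subspace as the learned representation.
K_rows = []
for _ in range(n_source):
    _, X, U = expert_task(N_src)
    K_rows.append(np.linalg.lstsq(X, U, rcond=None)[0].T)   # estimate of F @ Phi
Phi_hat = np.linalg.svd(np.vstack(K_rows), full_matrices=False)[2][:k]

# Phase (b): few-shot imitation on a target task, fitting only k head weights
# on top of the frozen representation.
F_tgt, X_t, U_t = expert_task(N_tgt)
F_hat = np.linalg.lstsq(X_t @ Phi_hat.T, U_t, rcond=None)[0]
err = np.linalg.norm(F_hat.T @ Phi_hat - F_tgt @ Phi_true)
print(f"policy-matrix error with {N_tgt} target samples: {err:.3f}")
```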
We propose Taylor Series Imitation Learning (TaSIL), a simple augmentation to standard behavior cloning losses in the context of continuous control. TaSIL penalizes deviations in the higher-order Taylor series terms between the learned and expert … (see the sketch after this entry's link)
External link:
http://arxiv.org/abs/2205.14812
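Per the snippet, TaSIL augments the usual zeroth-order behavior-cloning error with penalties on Taylor-series term deviations between learned and expert policies. The sketch below implements only a first-order variant (Jacobian matching) against a stand-in linear expert in PyTorch; the network, weights, and expert are assumptions for illustration, and the actual TaSIL loss covers higher-order terms as well.

```python
# Hedged sketch: behavior cloning plus a first-order (Jacobian) matching term,
# in the spirit of the TaSIL description above. The expert is a known linear
# policy purely for illustration.
import torch

torch.manual_seed(0)
n_state, n_action = 4, 2
K_expert = torch.randn(n_action, n_state)               # stand-in linear expert

policy = torch.nn.Sequential(
    torch.nn.Linear(n_state, 64), torch.nn.Tanh(), torch.nn.Linear(64, n_action)
)

def policy_jacobian(x):
    """Jacobian of the learned policy w.r.t. the state, shape (B, n_action, n_state)."""
    x = x.detach().requires_grad_(True)
    u = policy(x)
    rows = [torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0]
            for i in range(n_action)]
    return torch.stack(rows, dim=1)

def first_order_bc_loss(x, weight=1.0):
    bc = ((policy(x) - x @ K_expert.T) ** 2).sum(-1).mean()        # 0th-order BC term
    jac_gap = ((policy_jacobian(x) - K_expert) ** 2).sum((-2, -1)).mean()  # 1st-order term
    return bc + weight * jac_gap

x_demo = torch.randn(128, n_state)                       # demonstration states
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = first_order_bc_loss(x_demo)
    loss.backward()
    opt.step()
print(f"final augmented BC loss: {loss.item():.4f}")
```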
While $\mathcal{H}_\infty$ methods can introduce robustness against worst-case perturbations, their nominal performance under conventional stochastic disturbances is often drastically reduced. Though this fundamental tradeoff between nominal …
External link:
http://arxiv.org/abs/2203.10763
Author:
Zhang, Thomas T. C. K., Tu, Stephen, Boffi, Nicholas M., Slotine, Jean-Jacques E., Matni, Nikolai
Motivated by bridging the simulation to reality gap in the context of safety-critical systems, we consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems. In line with approaches from robust control, we …
External link:
http://arxiv.org/abs/2112.10690
Adversarially robust training has been shown to reduce the susceptibility of learned models to targeted input data perturbations. However, it has also been observed that such adversarially robust models suffer a degradation in accuracy when applied …
External link:
http://arxiv.org/abs/2111.08864