Showing 1 - 10 of 111 for search: '"Loewen, Philip"'
Author:
Wang, Shuyuan, Duan, Jingliang, Lawrence, Nathan P., Loewen, Philip D., Forbes, Michael G., Gopaluni, R. Bhushan, Zhang, Lixian
Model-free reinforcement learning (RL) is inherently a reactive method, operating under the assumption that it starts with no prior knowledge of the system and entirely depends on trial-and-error for learning. This approach faces several challenges…
External link:
http://arxiv.org/abs/2410.16821
Author:
Lawrence, Nathan P., Loewen, Philip D., Wang, Shuyuan, Forbes, Michael G., Gopaluni, R. Bhushan
Willems' fundamental lemma enables a trajectory-based characterization of linear systems through data-based Hankel matrices. However, in the presence of measurement noise, we ask: Is this noisy Hankel-based model expressive enough to re-identify itself? …
External link:
http://arxiv.org/abs/2404.15512
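As background for the Hankel-matrix construction mentioned in the abstract above, here is a minimal sketch (not code from the paper) of how a depth-L block-Hankel matrix is assembled from recorded input-output data, so that its columns enumerate all length-L windows of the trajectory. The function name block_hankel and the toy data are illustrative assumptions.

import numpy as np

def block_hankel(w, L):
    # Depth-L block-Hankel matrix of a trajectory w (T x q array).
    # Column j stacks the window w[j], ..., w[j+L-1], so the columns
    # enumerate all length-L sub-trajectories of the recorded data.
    T, q = w.shape
    cols = T - L + 1
    H = np.zeros((L * q, cols))
    for j in range(cols):
        H[:, j] = w[j:j + L].reshape(-1)
    return H

# Toy data (placeholders, not from the paper): random input, simple response.
rng = np.random.default_rng(0)
u = rng.standard_normal((50, 1))
y = np.cumsum(0.5 * u, axis=0)
H = block_hankel(np.hstack([u, y]), L=10)
print(H.shape)  # (20, 41)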
Author:
Lawrence, Nathan P., Loewen, Philip D., Wang, Shuyuan, Forbes, Michael G., Gopaluni, R. Bhushan
Published in:
Automatica 2024
We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by using the Youla-Kucera parameterization to define the…
External link:
http://arxiv.org/abs/2310.14098
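The Youla-Kucera parameterization referenced above can be illustrated with a toy sketch, assuming a stable scalar plant and a static Q parameter (both assumptions for illustration, not the setting of the paper). Implemented in internal-model form, any stable Q yields a stabilizing controller for a stable plant, which is the property that lets a learning method search over Q without losing closed-loop stability.

import numpy as np

# Toy stable first-order plant x+ = a*x + b*u, y = x (illustrative only).
a, b = 0.9, 0.5

def simulate(q_gain, steps=60, r=1.0):
    # Internal-model form of the Youla parameterization for a stable plant:
    # d_hat = y - P*u (estimated disturbance), u = Q*(r - d_hat).
    # For a stable plant, any stable Q (here a static gain) gives a stable loop.
    x = x_model = 0.0
    ys = []
    for _ in range(steps):
        y = x
        d_hat = y - x_model
        u = q_gain * (r - d_hat)
        x = a * x + b * u              # true plant
        x_model = a * x_model + b * u  # internal plant model
        ys.append(y)
    return np.array(ys)

# Picking Q as the inverse of the plant's dc gain gives setpoint tracking.
print(simulate(q_gain=(1 - a) / b)[-1])  # ~1.0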
Author:
Wang, Shuyuan, Loewen, Philip D., Lawrence, Nathan P., Forbes, Michael G., Gopaluni, R. Bhushan
Published in:
IFAC-PapersOnLine 2023
We adapt reinforcement learning (RL) methods for continuous control to bridge the gap between complete ignorance and perfect knowledge of the environment. Our method, Partial Knowledge Least Squares Policy Iteration (PLSPI), takes inspiration from both…
External link:
http://arxiv.org/abs/2304.13223
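The abstract above builds on least squares policy iteration (LSPI). A minimal sketch of plain LSPI (not the paper's partial-knowledge variant) on a made-up two-state MDP follows; one-hot features over state-action pairs make the LSTD-Q evaluation step exact, so the greedy improvement recovers the optimal policy within a couple of iterations.

import numpy as np

# Toy MDP (an assumption for illustration): action 0 stays, action 1 switches
# state; reward 1 for landing in state 1.
gamma = 0.9
transitions = [(s, a, float((s + a) % 2 == 1), (s + a) % 2)
               for s in (0, 1) for a in (0, 1)]  # (s, a, r, s')

def phi(s, a):
    # One-hot feature vector over the four (state, action) pairs.
    f = np.zeros(4)
    f[2 * s + a] = 1.0
    return f

def lspi(policy, iters=10):
    for _ in range(iters):
        # LSTD-Q policy evaluation: solve A w = b assembled from the samples.
        A = np.zeros((4, 4))
        b = np.zeros(4)
        for s, a, r, s_next in transitions:
            f = phi(s, a)
            A += np.outer(f, f - gamma * phi(s_next, policy[s_next]))
            b += r * f
        w = np.linalg.lstsq(A, b, rcond=None)[0]
        # Greedy policy improvement from the fitted Q-values.
        policy = [int(np.argmax([w[2 * s + a] for a in (0, 1)])) for s in (0, 1)]
    return policy, w

print(lspi(policy=[0, 0]))  # -> ([1, 0], ...): switch out of state 0, stay in 1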
Author:
Lawrence, Nathan P., Loewen, Philip D., Wang, Shuyuan, Forbes, Michael G., Gopaluni, R. Bhushan
Published in:
IFAC-PapersOnLine 2023
We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by using the Youla-Kucera parameterization to define the…
External link:
http://arxiv.org/abs/2304.03422
Author:
McClement, Daniel G., Lawrence, Nathan P., Forbes, Michael G., Loewen, Philip D., Backström, Johan U., Gopaluni, R. Bhushan
Meta-learning is a branch of machine learning which aims to synthesize data from a distribution of related tasks to efficiently solve new ones. In process control, many systems have similar and well-understood dynamics, which suggests it is feasible…
External link:
http://arxiv.org/abs/2209.09301
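To make the idea of learning across a distribution of related tasks concrete, here is a minimal first-order meta-learning sketch in the style of Reptile (my choice of a simple algorithm for illustration; not necessarily the method used in the paper) on a made-up family of scalar regression tasks. The meta-parameter converges to an initialization from which a few gradient steps adapt quickly to any task in the family.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # A task is a scalar linear map y = slope * x, a stand-in for a family
    # of related, well-understood dynamics (illustrative assumption).
    slope = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1, 1, size=20)
    return x, slope * x

theta = 0.0                       # meta-parameter (shared initialization)
inner_lr, meta_lr, inner_steps = 0.1, 0.1, 10

for _ in range(500):
    x, y = sample_task()
    w = theta
    for _ in range(inner_steps):              # inner-loop adaptation (SGD)
        grad = 2 * np.mean((w * x - y) * x)
        w -= inner_lr * grad
    theta += meta_lr * (w - theta)            # Reptile meta-update

print(theta)  # close to 1.0, the mean slope of the task distribution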
Author:
McClement, Daniel G., Lawrence, Nathan P., Backstrom, Johan U., Loewen, Philip D., Forbes, Michael G., Gopaluni, R. Bhushan
Published in:
Journal of Process Control 2022
Meta-learning is a branch of machine learning which trains neural network models to synthesize a wide variety of data in order to rapidly solve new problems. In process control, many systems have similar and well-understood dynamics, which suggests it is feasible…
External link:
http://arxiv.org/abs/2203.09661
Author:
Lawrence, Nathan P., Forbes, Michael G., Loewen, Philip D., McClement, Daniel G., Backstrom, Johan U., Gopaluni, R. Bhushan
Published in:
Control Engineering Practice 2022
Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies for general dynamical systems without explicit reliance on process models. Good results have been reported in simulation. Here we demonstrate the challenges…
External link:
http://arxiv.org/abs/2111.07171
Author:
Lawrence, Nathan P., Loewen, Philip D., Forbes, Michael G., Backström, Johan U., Gopaluni, R. Bhushan
Published in:
Advances in Neural Information Processing Systems, volume 33, pages 18942-18953, 2020
We introduce a method for learning provably stable deep neural network based dynamic models from observed data. Specifically, we consider discrete-time stochastic dynamic models, as they are of particular interest in practical applications such as estimation…
External link:
http://arxiv.org/abs/2103.14722
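One generic way to see how a learned discrete-time model can be made provably stable is to rescale its output so that a norm-based Lyapunov function decreases at every step. The sketch below uses that trick with random, untrained weights; it is only a stand-in for the paper's actual stochastic, Lyapunov-based construction.

import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((16, 2)), rng.standard_normal((2, 16))
beta, eps = 0.95, 1e-8  # contraction factor and numerical guard (assumptions)

def stable_step(x):
    # One step of a model x_{k+1} = f(x_k) whose output is rescaled so that
    # ||x_{k+1}|| <= beta * ||x_k||. The norm then acts as a Lyapunov
    # function, so every trajectory of the model decays to zero regardless
    # of the (here random and untrained) weights.
    g = W2 @ np.tanh(W1 @ x)  # unconstrained neural prediction
    scale = min(1.0, beta * np.linalg.norm(x) / (np.linalg.norm(g) + eps))
    return scale * g

x = np.array([3.0, -2.0])
for k in range(5):
    x = stable_step(x)
    print(k, np.linalg.norm(x))  # norms shrink geometrically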
Author:
McClement, Daniel G., Lawrence, Nathan P., Loewen, Philip D., Forbes, Michael G., Backström, Johan U., Gopaluni, R. Bhushan
Meta-learning is a branch of machine learning which aims to quickly adapt models, such as neural networks, to perform new tasks by learning an underlying structure across related tasks. In essence, models are being trained to learn new tasks effectively…
External link:
http://arxiv.org/abs/2103.14060