Sparsity in long-time control of neural ODEs
Author: | Borjan Geshkovski, Carlos Esteve-Yagüe |
Language: | English |
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning; General Computer Science; Optimization and Control (math.OC); Control and Systems Engineering; Statistics - Machine Learning; Mechanical Engineering; FOS: Mathematics; Machine Learning (stat.ML); Electrical and Electronic Engineering; Mathematics - Optimization and Control; Machine Learning (cs.LG) |
Description: | We consider the neural ODE and optimal control perspective of supervised learning, with $\ell^1$-control penalties, where rather than only minimizing a final cost (the \emph{empirical risk}) for the state, we integrate this cost over the entire time horizon. We prove that any optimal control (for this cost) vanishes beyond some positive stopping time. When seen in the discrete-time context, this result entails an \emph{ordered} sparsity pattern for the parameters of the associated residual neural network: ordered in the sense that these parameters are all $0$ beyond a certain layer. Furthermore, we provide a polynomial stability estimate for the empirical risk with respect to the time horizon. This can be seen as a \emph{turnpike property}, for nonsmooth dynamics and functionals with $\ell^1$-penalties, and without any smallness assumptions on the data, both of which are new in the literature. |
Database: | OpenAIRE |
External link: |
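The objective described in the abstract can be illustrated with a minimal sketch: a residual network read as a discrete-time neural ODE, with the empirical risk summed over every layer (a discrete time integral) plus an $\ell^1$ penalty on all parameters. All names, shapes, and the step size $1/L$ below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sigma(x):
    # Nonlinearity of the residual network (illustrative choice).
    return np.tanh(x)

def forward_states(x0, weights):
    """Run the residual network x_{k+1} = x_k + (1/L) * W_k @ sigma(x_k)
    and return the state at every layer (the discrete-time trajectory)."""
    L = len(weights)
    states = [x0]
    x = x0
    for W in weights:
        x = x + (1.0 / L) * W @ sigma(x)
        states.append(x)
    return states

def integrated_objective(x0, target, weights, lam):
    """Empirical risk averaged over the whole depth (rather than only at
    the final layer), plus an l1 penalty on every layer's parameters."""
    states = forward_states(x0, weights)
    L = len(weights)
    risk = sum(0.5 * np.sum((x - target) ** 2) for x in states[1:]) / L
    penalty = lam * sum(np.sum(np.abs(W)) for W in weights)
    return risk + penalty
```

With all weights zero, the trajectory is constant, so the integrated risk reduces to the single-point risk $\tfrac12\|x_0 - \text{target}\|^2$ and the $\ell^1$ penalty vanishes; this is the degenerate case of the ordered sparsity pattern in which every layer's parameters are already $0$.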