Popis: |
We consider the (sub-Riemannian type) control problem of finding a path going from an initial point $x$ to a target point $y$, by only moving in certain admissible directions. We assume that the corresponding vector fields satisfy the bracket-generating (H\"ormander) condition, so that the classical Chow--Rashevskii theorem guarantees the existence of such a path. One natural way to try to solve this problem is via a gradient flow on control space. However, since the corresponding dynamics may have saddle points, any convergence result must rely on a suitable (e.g. random) initialisation. We consider the case when this initialisation is irregular, which is conveniently formulated via Lyons' rough path theory. We show that one advantage of this initialisation is that the saddle points are moved to infinity, while minima remain at a finite distance from the starting point. In the step-$2$ nilpotent case, we further prove that the gradient flow converges to a solution if the initial condition is the path of a Brownian motion (or rougher). The proof combines ideas from Malliavin calculus with {\L}ojasiewicz inequalities. A possible motivation for our study comes from the training of deep Residual Neural Nets, in the regime when the number of trainable parameters per layer is smaller than the dimension of the data vector.
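The underlying control problem can be sketched as follows (a schematic formulation; the notation $V_i$, $u$, $F$ is ours and not fixed by the text above): one steers the controlled dynamics
\[
\dot x^u_t \;=\; \sum_{i=1}^m u_i(t)\, V_i(x^u_t), \qquad x^u_0 = x,
\]
where the vector fields $V_1,\dots,V_m$ span the admissible directions and satisfy the bracket-generating (H\"ormander) condition, and one runs a gradient flow on the space of controls $u \in L^2([0,1];\mathbb{R}^m)$ for a loss such as $F(u) = \tfrac12\,\lvert x^u_1 - y \rvert^2$, whose minimisers with $F(u)=0$ are exactly the paths reaching the target $y$.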