Quantitative convergence of trained quantum neural networks to a Gaussian process

Author: Hernandez, Anderson Melchor; Girardi, Filippo; Pastorello, Davide; De Palma, Giacomo
Year of publication: 2024
Document type: Working Paper
Description: We study quantum neural networks where the generated function is the expectation value of the sum of single-qubit observables across all qubits. In [Girardi \emph{et al.}, arXiv:2402.08726], it is proven that such generated functions converge in distribution to a Gaussian process in the limit of infinite width, both for untrained networks with randomly initialized parameters and for trained networks. In this paper, we provide a quantitative proof of this convergence in terms of the Wasserstein distance of order $1$. First, we establish an upper bound on the distance between the probability distribution of the function generated by any untrained network of finite width and the Gaussian process with the same covariance; the proof uses Stein's method to estimate the Wasserstein distance of order $1$. Next, we analyze the training dynamics of the network via gradient flow, proving an upper bound on the distance between the probability distribution of the function generated by the trained network and the corresponding Gaussian process. This proof rests on a quantitative upper bound on the maximum variation of any parameter during training, which implies that for sufficiently large widths, training occurs in the lazy regime, \emph{i.e.}, each parameter changes only by a small amount. While the convergence result of [Girardi \emph{et al.}, arXiv:2402.08726] holds at a fixed training time, our upper bounds are uniform in time and hold even as $t \to \infty$ (a sketch of the central definitions follows this record).
Database: arXiv
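
To make the quantities in the abstract concrete, below is a minimal LaTeX sketch of the two central definitions: the generated function of a quantum neural network and the Wasserstein distance of order $1$ in its dual (Kantorovich--Rubinstein) form, which is the form Stein's method estimates. The symbols $f_\theta$, $\psi(x,\theta)$, $O_i$, and the width $m$ are our own assumed notation, not necessarily the paper's.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Generated function of a width-$m$ quantum neural network: the
% expectation value of a sum of single-qubit observables $O_i$ on the
% output state of a parametrized circuit (notation assumed here, not
% taken verbatim from the paper).
\[
  f_\theta(x) = \sum_{i=1}^{m}
    \langle \psi(x,\theta) | \, O_i \, | \psi(x,\theta) \rangle .
\]

% Wasserstein distance of order $1$ between probability measures
% $\mu$ and $\nu$, in Kantorovich--Rubinstein dual form; Stein's
% method bounds exactly this supremum over $1$-Lipschitz test
% functions $h$.
\[
  W_1(\mu, \nu) = \sup_{\operatorname{Lip}(h) \le 1}
    \left| \int h \, \mathrm{d}\mu - \int h \, \mathrm{d}\nu \right| .
\]

\end{document}
```

In this reading, the paper's untrained-network bound controls $W_1$ between the law of $f_\theta$ at random initialization and the Gaussian process with matching covariance, while the trained-network bound controls the same distance uniformly over the gradient-flow time $t$, including $t \to \infty$.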