Hierarchical Learning for Quantum ML: Novel Training Technique for Large-Scale Variational Quantum Circuits
Author: | Gharibyan, Hrant; Su, Vincent; Tepanyan, Hayk |
Publication Year: | 2023 |
Document Type: | Working Paper |
Description: | We present hierarchical learning, a novel variational architecture for efficient training of large-scale variational quantum circuits. We test and benchmark our technique for distribution loading with quantum circuit Born machines (QCBMs). With QCBMs, probability distributions are loaded into the squared amplitudes of computational basis vectors represented by bitstrings. Our key insight is to take advantage of the fact that the most significant (qu)bits have a greater effect on the final distribution and can be learned first. One can think of it as a generalization of layerwise learning, where some parameters of the variational circuit are learned first to prevent the phenomenon of barren plateaus. We briefly review adjoint methods for computing the gradient, in particular for loss functions that are not expectation values of observables. We first compare the role of connectivity in the variational ansatz for the task of loading a Gaussian distribution on nine qubits, finding that 2D connectivity greatly outperforms qubits arranged on a line. Based on these observations, we then implement this strategy in large-scale numerical experiments with GPUs, training a QCBM to reproduce a 3-dimensional multivariate Gaussian distribution on 27 qubits to within $\sim4\%$ total variation distance. Though barren plateau arguments do not strictly apply here, since the objective function is not an expectation value of an observable, this is, to our knowledge, the first practical demonstration of variational learning on large numbers of qubits. We also demonstrate hierarchical learning as a resource-efficient way to load distributions on existing quantum hardware (IBM's 7- and 27-qubit devices) in tandem with Fire Opal optimizations. Comment: 22 pages, 15 figures |
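The coarse-to-fine idea in the description lends itself to a short illustration. Below is a minimal NumPy sketch (not from the paper; the function names, big-endian bit ordering, and Gaussian parameters are assumptions for illustration) of two ingredients named above: the total variation distance used as the reported metric, and the marginal over the most significant bits that a first training stage would target before the remaining qubits are learned.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def msb_marginal(p, n_total, n_msb):
    """Marginalize a distribution over n_total-bit strings down to its
    n_msb most significant bits (big-endian bit ordering assumed)."""
    return p.reshape(2**n_msb, 2**(n_total - n_msb)).sum(axis=1)

# Target: a discretized Gaussian over 9-bit strings, mirroring the
# nine-qubit experiment (mean and width here are placeholders).
n = 9
x = np.arange(2**n)
target = np.exp(-0.5 * ((x - 2**(n - 1)) / 2**(n - 3)) ** 2)
target /= target.sum()

# Stage one of hierarchical learning would train a small circuit so that
# its squared amplitudes match this coarse 4-bit marginal; later stages
# append the remaining qubits and refine against the full distribution.
coarse_target = msb_marginal(target, n, 4)

# A perfectly trained stage-one model, padded uniformly over the
# as-yet-unlearned least significant bits, already has a bounded TV
# distance to the full target, which later stages then reduce.
padded = np.repeat(coarse_target / 2**(n - 4), 2**(n - 4))
print(f"TV(padded coarse model, target) = {tv_distance(padded, target):.4f}")
```

The staged training loop and the circuit ansatz themselves are omitted; the sketch only shows why matching the most significant bits first is a sensible curriculum, since the coarse marginal fixes the bulk shape of the distribution before the fine structure is learned.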
Database: | arXiv |