Memory-Sample Lower Bounds for Learning Parity with Noise
Author: | Garg, Sumegha; Kothari, Pravesh K.; Liu, Pengda; Raz, Ran |
---|---|
Language: | English |
Year of publication: | 2021 |
Subject: | FOS: Computer and information sciences; branching program; memory-sample tradeoffs; Computer Science - Machine Learning; Computer Science - Computational Complexity; learning parity under noise; Theory of computation → Machine learning theory; F.2.3; Computational Complexity (cs.CC); space lower bound; Machine Learning (cs.LG) |
Description: | In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn $x=(x_1,\ldots,x_n) \in \{0,1\}^n$ from a stream of random linear equations over $\mathbb{F}_2$ that are correct with probability $\frac{1}{2}+\varepsilon$ and flipped with probability $\frac{1}{2}-\varepsilon$, that any learning algorithm requires either a memory of size $\Omega(n^2/\varepsilon)$ or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [GRT'18], when the samples are noisy. A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem with error parameter $\varepsilon$: an unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and $b_i = M(a_i,x)$ with probability $1/2+\varepsilon$ and $b_i = -M(a_i,x)$ with probability $1/2-\varepsilon$ ($0 < \varepsilon < \frac{1}{2}$ is the error parameter). Assume that $k, \ell, r$ are such that any submatrix of $M$ of at least $2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns has a bias of at most $2^{-r}$. We show that any learning algorithm for the learning problem corresponding to $M$, with error parameter $\varepsilon$, requires either a memory of size at least $\Omega(k \cdot \ell / \varepsilon)$ or at least $2^{\Omega(r)}$ samples. Comment: 19 pages. To appear in RANDOM 2021. arXiv admin note: substantial text overlap with arXiv:1708.02639. An illustrative simulation sketch of the sample model defined above follows the record. |
Database: | OpenAIRE |
External link: |
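
The following is a minimal Python sketch of the noisy sample stream defined in the Description: a secret $x$ is drawn uniformly from $\{0,1\}^n$, each query $a_i$ is uniform over $\{0,1\}^n$, and the label equals the parity $\langle a_i, x\rangle \bmod 2$ with probability $1/2+\varepsilon$ and is flipped otherwise. It uses $\{0,1\}$ labels rather than the $\{-1,1\}$ convention of the matrix $M$; the function name `noisy_parity_stream` and the parameters `n`, `eps`, `num_samples` are illustrative choices, not from the paper.

```python
# A minimal sketch (not from the paper) simulating the noisy-parity sample
# stream described in the abstract above.
import random


def noisy_parity_stream(x, eps, num_samples, rng=random):
    """Yield num_samples pairs (a, b) of noisy random linear equations over F_2."""
    n = len(x)
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        parity = sum(ai * xi for ai, xi in zip(a, x)) % 2
        # Report the correct parity with probability 1/2 + eps, flip it otherwise.
        b = parity if rng.random() < 0.5 + eps else 1 - parity
        yield a, b


if __name__ == "__main__":
    n, eps = 16, 0.1  # illustrative parameters, not from the paper
    x = [random.randint(0, 1) for _ in range(n)]
    for a, b in noisy_parity_stream(x, eps, num_samples=5):
        print(a, b)
```

The stream is written as a generator so that only one equation is materialized at a time, mirroring the streaming model in which the memory-sample tradeoff is stated.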