Robust 1-Bit Compressed Sensing via Hinge Loss Minimization

Authors: Alexander Stollenwerk, Martin Genzel
Language: English
Year of publication: 2018
Subjects:
arXiv classifications: Information Theory (cs.IT); Statistics Theory (math.ST)
MSC classes: 94A12; 60D05; 90C25
FOS: Computer and information sciences; Mathematics
Field of research codes: 01 natural sciences; 0101 mathematics; 010103 numerical & computational mathematics; 010104 statistics & probability; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020206 networking & telecommunications
Subject areas: Statistics and Probability; Numerical Analysis; Applied Mathematics; Computational Theory and Mathematics; Analysis
Keywords: Compressed sensing; Hinge loss; Convex set; Convex function; Piecewise linear function; Upper and lower bounds; Bounded function; Taylor series; Estimator; Algorithm; Applied mathematics; Energy (signal processing)
Description: This work theoretically studies the problem of estimating a structured high-dimensional signal $\boldsymbol{x}_0 \in \mathbb{R}^n$ from noisy $1$-bit Gaussian measurements. Our recovery approach is based on a simple convex program which uses the hinge loss function as the data fidelity term. While such a risk minimization strategy is very natural for learning binary output models, such as in classification, its capacity to estimate a specific signal vector is largely unexplored. A major difficulty is that the hinge loss is only piecewise linear, so that its ‘curvature energy’ is concentrated in a single point. This is substantially different from other popular loss functions considered in signal estimation, e.g., the square or logistic loss, which are at least locally strongly convex. It is therefore somewhat unexpected that we can still prove very similar types of recovery guarantees for the hinge loss estimator, even in the presence of strong noise. More specifically, our non-asymptotic error bounds show that stable and robust reconstruction of $\boldsymbol{x}_0$ can be achieved with the optimal oversampling rate $O(m^{-1/2})$ in terms of the number of measurements $m$. Moreover, we permit a wide class of structural assumptions on the ground truth signal, in the sense that $\boldsymbol{x}_0$ can belong to an arbitrary bounded convex set $K \subset \mathbb{R}^n$. The proofs of our main results rely on some recent advances in statistical learning theory due to Mendelson. In particular, we invoke an adapted version of Mendelson’s small ball method that allows us to establish a quadratic lower bound on the error of the first-order Taylor approximation of the empirical hinge loss function.
Database: OpenAIRE
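
Illustration: the abstract describes a convex program that minimizes the empirical hinge loss over a bounded convex constraint set $K$. A formulation consistent with that description, with Gaussian measurement vectors $\boldsymbol{a}_i$ and binary observations $y_i \in \{-1,+1\}$, is $\hat{\boldsymbol{x}} \in \operatorname{argmin}_{\boldsymbol{x} \in K} \frac{1}{m} \sum_{i=1}^{m} \max\{0,\, 1 - y_i \langle \boldsymbol{a}_i, \boldsymbol{x} \rangle\}$, although the paper's exact normalization and margin may differ. The minimal Python sketch below solves this hypothetical program with NumPy and CVXPY (both assumed available), choosing $K$ as a scaled $\ell_1$-ball to model sparse signals; it is an illustrative sketch, not the authors' implementation.

import numpy as np
import cvxpy as cp

# Problem sizes chosen only for illustration: ambient dimension n,
# number of 1-bit measurements m, and sparsity level s of the signal.
rng = np.random.default_rng(0)
n, m, s = 200, 1000, 5

# Ground-truth s-sparse signal x0 on the unit sphere (1-bit data only
# determine the direction of x0, not its scale).
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x0 /= np.linalg.norm(x0)

# Gaussian measurement matrix and noisy 1-bit observations: a fraction
# of the signs is flipped at random to model strong noise.
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)
y[rng.random(m) < 0.05] *= -1

# Convex program: minimize the empirical hinge loss over the bounded
# convex set K = { x : ||x||_1 <= sqrt(s) }, which contains all s-sparse
# unit-norm vectors.
x = cp.Variable(n)
empirical_hinge = cp.sum(cp.pos(1 - cp.multiply(y, A @ x))) / m
cp.Problem(cp.Minimize(empirical_hinge), [cp.norm1(x) <= np.sqrt(s)]).solve()

# Compare directions, since the scale of x0 is not identifiable from signs.
x_hat = x.value / np.linalg.norm(x.value)
print("Euclidean error of the normalized estimate:", np.linalg.norm(x_hat - x0))

Under these assumptions, increasing m should make the direction error shrink roughly at the $O(m^{-1/2})$ rate stated in the abstract; the constraint set, noise level, and solver defaults are all choices made for this sketch rather than specifications from the paper.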