Online Learning with Adversaries: A Differential Inclusion Analysis
Author: | Swetha Ganesh, Alexandre Reiffers-Masson, Gugan Thoppe |
Language: | English |
Year of publication: | 2023 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Mathematics - Optimization and Control (math.OC); FOS: Electrical engineering, electronic engineering, information engineering; FOS: Mathematics; Electrical Engineering and Systems Science - Systems and Control (eess.SY) |
DOI: | 10.13140/rg.2.2.13425.76648 |
Description: | We consider the measurement model $Y = AX,$ where $X$ and, hence, $Y$ are random variables and $A$ is an a priori known tall matrix. At each time instance, a sample of one of $Y$'s coordinates is available, and the goal is to estimate $\mu := \mathbb{E}[X]$ via these samples. However, the challenge is that a small but unknown subset of $Y$'s coordinates is controlled by adversaries with infinite power: they can return any real number each time they are queried for a sample. For such an adversarial setting, we propose the first asynchronous online algorithm that converges to $\mu$ almost surely. We prove this result using a novel differential-inclusion-based two-timescale analysis. Two key highlights of our proof are: (a) the use of a novel Lyapunov function to show that $\mu$ is the unique global attractor of our algorithm's limiting dynamics, and (b) the use of martingale and stopping-time theory to show that our algorithm's iterates are almost surely bounded. |
Database: | OpenAIRE |
External link: |
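The description above specifies the measurement model $Y = AX$ with asynchronous, possibly adversarial coordinate samples, but not the algorithm itself. The following minimal Python sketch only simulates that setting under assumed, hypothetical choices (dimensions, a Gaussian law for $X$, and one particular adversarial behaviour); it is not the paper's two-timescale algorithm. It illustrates why honest coordinates of $Y$ carry information about $\mu$ (their running sample means converge to $(A\mu)_i$) while adversarial coordinates can report arbitrary values.

```python
# Illustrative sketch only: dimensions, distributions, and the adversary's
# behaviour below are assumptions, not taken from the paper. This is NOT the
# paper's asynchronous two-timescale algorithm.
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 10                      # X in R^n, Y in R^m, A is a tall (m > n) matrix
A = rng.standard_normal((m, n))   # a priori known measurement matrix
mu = np.array([1.0, -2.0, 0.5])   # true mean of X (unknown to the estimator)
adversarial = {2, 7}              # small, unknown set of corrupted coordinates

T = 50_000
counts = np.zeros(m)
means = np.zeros(m)               # running per-coordinate sample means of Y

for t in range(T):
    i = rng.integers(m)           # asynchronous: one coordinate queried per step
    if i in adversarial:
        y_i = 1e3 * rng.standard_normal()   # adversary returns an arbitrary value
    else:
        x = mu + rng.standard_normal(n)     # sample of X with mean mu
        y_i = A[i] @ x                      # honest sample of Y_i = (AX)_i
    counts[i] += 1
    means[i] += (y_i - means[i]) / counts[i]

honest = [i for i in range(m) if i not in adversarial]
# Honest coordinates: means[i] -> (A mu)_i; adversarial coordinates need not converge.
print("honest residuals     :", np.abs(means - A @ mu)[honest])
print("adversarial residuals:", np.abs(means - A @ mu)[list(adversarial)])
```

Recovering $\mu$ from such per-coordinate information despite the corrupted entries is the robust estimation problem that the paper's asynchronous online algorithm addresses; the sketch deliberately stops short of that step.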