From inexact optimization to learning via gradient concentration
Authors: Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
Year of publication: 2021
Subjects: FOS: Computer and information sciences; FOS: Mathematics; Computational Mathematics; Applied Mathematics; Control and Optimization; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Mathematics - Optimization and Control (math.OC)
DOI: 10.48550/arxiv.2106.05397
Description: Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (the test error), for which the training data provides only partial information. In this view, the practically feasible optimization problems are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
Database: OpenAIRE
External link:
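
The implicit regularization highlighted in the description can be made concrete with a small numerical experiment. The following is a minimal sketch, not code from the paper: the data model, dimensions, step size, and iteration budget are all illustrative assumptions. It runs plain gradient descent on an unconstrained least-squares empirical risk and tracks the test risk along the iterates, so the iteration count plays the role of a regularization parameter (early stopping).

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's setup):
# gradient descent on an unconstrained least-squares objective, tracking the
# empirical (training) risk and the test risk along the optimization path.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model y = <w_star, x> + noise; all sizes are arbitrary.
n_train, n_test, d = 100, 1000, 50
w_star = rng.normal(size=d) / np.sqrt(d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_star + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_star + 0.5 * rng.normal(size=n_test)

def risk(X, y, w):
    """Mean squared error 0.5 * ||Xw - y||^2 / n."""
    return 0.5 * np.mean((X @ w - y) ** 2)

# Plain gradient descent on the unconstrained empirical objective.
# Step size 1/L, where L = ||X||_2^2 / n is the gradient's Lipschitz constant.
w = np.zeros(d)
step = n_train / np.linalg.norm(X_train, 2) ** 2
best_test, best_iter = np.inf, 0
for t in range(1, 2001):
    grad = X_train.T @ (X_train @ w - y_train) / n_train  # empirical gradient
    w -= step * grad
    test_err = risk(X_test, y_test, w)
    if test_err < best_test:
        best_test, best_iter = test_err, t

print(f"final train risk: {risk(X_train, y_train, w):.4f}")
print(f"final test risk:  {risk(X_test, y_test, w):.4f}")
print(f"best test risk:   {best_test:.4f} at iteration {best_iter}")
```

On typical draws, the training risk decreases monotonically while the test risk may bottom out before gradient descent converges; the gap between the two is the kind of trade-off that test error guarantees of the sort described above must control.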