Showing 1 - 10 of 154 for search: '"Ehrhardt, Matthias J."'
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data. Handling only a subset of available data in each optimisation step dramatically reduces the per-iteration computational costs, while still e…
External link:
http://arxiv.org/abs/2406.06342
We explore the application of preconditioning in optimisation algorithms, specifically those appearing in inverse problems in imaging. Such problems often contain an ill-posed forward operator and are large-scale. Therefore, computationally efficient…
External link:
http://arxiv.org/abs/2406.00260
We analyze a recently proposed algorithm for the problem of sampling from probability distributions $\mu^\ast$ in $\mathbb{R}^d$ with a Lebesgue density of the form $\mu^\ast(x) \propto \exp(-f(Kx)-g(x))$, where $K$ is a linear operator and $f,g$ con…
External link:
http://arxiv.org/abs/2405.18098
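The density form quoted in the entry above, $\mu^\ast(x) \propto \exp(-f(Kx)-g(x))$, can be made concrete with a small sketch. The specific choices below (a finite-difference operator for $K$, the $\ell_1$-norm for $f$, a quadratic for $g$) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative choices (not from the paper): an unnormalized log-density
# log mu*(x) = -f(Kx) - g(x), with K a finite-difference operator,
# f an l1 penalty and g a quadratic -- both convex, as the entry assumes.

def finite_difference_matrix(d):
    """K in R^{(d-1) x d} with (Kx)_i = x_{i+1} - x_i."""
    K = np.zeros((d - 1, d))
    for i in range(d - 1):
        K[i, i], K[i, i + 1] = -1.0, 1.0
    return K

def log_density_unnormalized(x, K, lam=1.0):
    f_Kx = lam * np.abs(K @ x).sum()   # f(Kx): l1 penalty on differences
    g_x = 0.5 * np.dot(x, x)           # g(x): quadratic term
    return -f_Kx - g_x

d = 4
K = finite_difference_matrix(d)
x = np.ones(d)                         # constant vector, so K x = 0
print(log_density_unnormalized(x, K))  # -> -2.0 (only the quadratic contributes)
```

Note that $f$ is non-smooth here, which is exactly the setting where splitting the operator $K$ out of the prox (as such sampling algorithms do) matters.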
The Condat-Vũ algorithm is a widely used primal-dual method for optimizing composite objectives of three functions. Several algorithms for optimizing composite objectives of two functions are special cases of Condat-Vũ, including proximal gradien…
External link:
http://arxiv.org/abs/2403.17100
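The entry above notes that proximal gradient descent is among the two-function special cases of Condat-Vũ. A minimal sketch of that special case (ISTA for the lasso, with illustrative data of our own choosing, not the paper's algorithm or experiments):

```python
import numpy as np

# Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# the two-function composite the entry mentions as a Condat-Vu special case.
# The data below are illustrative, not from the paper.

def soft_threshold(v, t):
    """Prox of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam=0.1, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = ||A||_2^2
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true
x_hat = prox_gradient(A, b)
print(np.round(x_hat, 2))   # close to x_true; small coordinates shrunk to 0
```

Setting the dual/linear-operator part of Condat-Vũ to zero recovers exactly this gradient-step-plus-prox iteration.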
Various tasks in data science are modeled using the variational regularization approach, where manually selecting regularization parameters presents a challenge. The difficulty is exacerbated when employing regularizers involving a large number…
External link:
http://arxiv.org/abs/2308.10098
In order to solve tasks like uncertainty quantification or hypothesis tests in Bayesian imaging inverse problems, we often have to draw samples from the arising posterior distribution. For the usually log-concave but high-dimensional posteriors, Mark…
External link:
http://arxiv.org/abs/2306.17737
Author:
Sherry, Ferdia, Celledoni, Elena, Ehrhardt, Matthias J., Murari, Davide, Owren, Brynjulf, Schönlieb, Carola-Bibiane
Motivated by classical work on the numerical integration of ordinary differential equations we present a ResNet-styled neural network architecture that encodes non-expansive (1-Lipschitz) operators, as long as the spectral norms of the weights are ap…
External link:
http://arxiv.org/abs/2306.17332
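The entry above constrains the spectral norms of the weight matrices. A standard way to estimate a spectral norm is power iteration on $W^\top W$; the sketch below shows that generic ingredient only, not the paper's architecture:

```python
import numpy as np

# Power iteration for the spectral norm (largest singular value) of W.
# This is a generic building block for spectral-norm constraints,
# not the paper's ResNet architecture.

def spectral_norm(W, iters=100):
    v = np.ones(W.shape[1]) / np.sqrt(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)          # one step of power iteration on W^T W
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

W = np.diag([3.0, 1.0, 0.5])
print(spectral_norm(W))            # -> 3.0, the largest singular value
```

Given such an estimate, a weight can be rescaled (e.g. `W * min(1, c / spectral_norm(W))`) to keep its spectral norm below a chosen bound `c`.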
Variational regularization is commonly used to solve linear inverse problems, and involves augmenting a data fidelity by a regularizer. The regularizer is used to promote a priori information and is weighted by a regularization parameter. Selection o…
External link:
http://arxiv.org/abs/2305.18394
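The structure described above (data fidelity plus weighted regularizer) has a closed-form instance in Tikhonov regularization, which is a convenient illustration; the example below is our own, not the paper's setting:

```python
import numpy as np

# Illustrative variational regularization (Tikhonov / ridge):
#   min_x ||Ax - b||^2 + alpha * ||x||^2,
# solved in closed form via the normal equations. Not the paper's regularizer.

def tikhonov(A, b, alpha):
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(d), A.T @ b)

A = np.array([[1.0, 0.0], [0.0, 1e-3]])   # ill-conditioned forward operator
b = np.array([1.0, 1e-3])
print(tikhonov(A, b, alpha=0.0))          # unregularized: exact solution [1, 1]
print(tikhonov(A, b, alpha=1e-2))         # alpha damps the unstable mode
```

The behaviour as `alpha` varies is precisely why selecting the regularization parameter, the topic of the entry, matters: too small and the ill-posedness dominates, too large and the solution is over-smoothed.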
Author:
Ehrhardt, Matthias J., Roberts, Lindon
Estimating hyperparameters has been a long-standing problem in machine learning. We consider the case where the task at hand is modeled as the solution to an optimization problem. Here the exact gradient with respect to the hyperparameters cannot be…
External link:
http://arxiv.org/abs/2301.04764
Author:
Chambolle, Antonin, Delplancke, Claire, Ehrhardt, Matthias J., Schönlieb, Carola-Bibiane, Tang, Junqi
In this work we propose a new primal-dual algorithm with adaptive step-sizes. The stochastic primal-dual hybrid gradient (SPDHG) algorithm with constant step-sizes has become widely applied in large-scale convex optimization across many scientific fi…
External link:
http://arxiv.org/abs/2301.02511