Showing 1 - 10 of 1,899 for search: '"Orabona, P."'
Author:
Sokolov, Georgy, Thiessen, Maximilian, Akhmejanova, Margarita, Vitale, Fabio, Orabona, Francesco
We study the problem of learning the clusters of a given graph in the self-directed learning setup. This learning setting is a variant of online learning where, rather than an adversary determining the sequence in which nodes are presented, the learner…
External link:
http://arxiv.org/abs/2409.01428
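The self-directed setup described above is easiest to see as a protocol: the learner, not an adversary, chooses the order in which nodes are labeled. Below is a minimal, hypothetical sketch of one run; `pick_next` and `predict` are placeholder strategies, not the paper's algorithm.

```python
def self_directed_mistakes(nodes, true_label, pick_next, predict):
    """Run one self-directed pass over all nodes and count mistakes."""
    remaining, labeled, mistakes = set(nodes), {}, 0
    while remaining:
        v = pick_next(remaining, labeled)     # learner controls the order
        y_hat = predict(v, labeled)           # commit to a cluster prediction
        mistakes += (y_hat != true_label[v])  # label is revealed only now
        labeled[v] = true_label[v]
        remaining.remove(v)
    return mistakes
```

The point of the setting is that a good choice of `pick_next` (e.g., exploring one well-connected region at a time) can make the mistake count far smaller than under an adversarial ordering.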
Author:
Jacobsen, Andrew, Orabona, Francesco
We study the problem of dynamic regret minimization in online convex optimization, in which the objective is to minimize the difference between the cumulative loss of an algorithm and that of an arbitrary sequence of comparators. While the literature…
External link:
http://arxiv.org/abs/2406.01577
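For context, the dynamic regret in the snippet above is a standard quantity; written out in generic notation (not taken from the paper), for iterates $x_t$, convex losses $\ell_t$, and an arbitrary comparator sequence $u_1, \ldots, u_T$ it reads $\text{Regret}_T(u_1, \ldots, u_T) = \sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(u_t)$. Static regret is the special case $u_1 = \cdots = u_T$.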
Let $f(\theta, X_1), \dots, f(\theta, X_n)$ be a sequence of random elements, where $f$ is a fixed scalar function, $X_1, \dots, X_n$ are independent random variables (data), and $\theta$ is a random parameter distributed according to some data…
External link:
http://arxiv.org/abs/2402.09201
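The snippet is cut off, but the object typically studied in this setup (an assumption on my part, consistent with the companion entry arXiv:2302.05829 below) is the deviation of the empirical mean from its expectation, $\frac{1}{n} \sum_{i=1}^{n} f(\theta, X_i) - \mathbb{E}_{X}[f(\theta, X)]$, which standard concentration inequalities do not control when $\theta$ depends on the same data $X_1, \dots, X_n$.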
Author:
Chen, Keyi, Orabona, Francesco
Due to its speed and simplicity, subgradient descent is one of the most widely used optimization algorithms in convex machine learning. However, tuning its learning rate is probably its most severe bottleneck for achieving consistently good performance…
External link:
http://arxiv.org/abs/2307.11955
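To make the tuning bottleneck concrete, here is a generic subgradient descent loop (an illustrative sketch, not the paper's method); `lr` is the step-size constant whose choice drives the final accuracy.

```python
import numpy as np

def subgradient_descent(subgrad, x0, lr, steps):
    """subgrad(x) returns any subgradient of the objective at x."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = x - lr / np.sqrt(t) * subgrad(x)  # classic 1/sqrt(t) decay
        avg += (x - avg) / t                  # running average of iterates
    return avg

# Example: minimize the non-smooth f(x) = |x - 3|, whose subgradient is sign(x - 3).
x_hat = subgradient_descent(lambda x: np.sign(x - 3.0), x0=[0.0], lr=1.0, steps=2000)
```

Too small an `lr` stalls progress and too large a one overshoots, which is exactly the sensitivity the abstract calls the method's bottleneck.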
Author:
Meterez, Alexandru, Joudaki, Amir, Orabona, Francesco, Immer, Alexander, Rätsch, Gunnar, Daneshmand, Hadi
Normalization layers are one of the key building blocks of deep neural networks. Several theoretical studies have shown that batch normalization improves signal propagation by preventing the representations from becoming collinear across the layers…
External link:
http://arxiv.org/abs/2310.02012
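As a reference point, the batch-normalization forward pass standardizes each feature across the batch; a minimal NumPy version (training-mode statistics only, with scalar `gamma` and `beta` in place of learned per-feature parameters) looks like this:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """x: array of shape (batch, features); standardize per feature."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

This per-feature centering and rescaling is the mechanism the cited analyses credit with keeping representations from drifting toward collinearity across layers.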
Author:
Orabona, Francesco
In this short note, I show how to adapt to Hölder smoothness using normalized gradients in a black-box way. Moreover, the bound will depend on a novel notion of local Hölder smoothness. The main idea comes directly from Levy [2017].
External link:
http://arxiv.org/abs/2308.05621
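For orientation: a function is $\nu$-Hölder smooth when $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|^{\nu}$ for some $\nu \in (0, 1]$, and a normalized-gradient step uses only the direction of the gradient, so the step length does not depend on the unknown constants $L$ and $\nu$. The sketch below is generic normalized gradient descent, not the note's specific scheme.

```python
import numpy as np

def normalized_gd(grad, x0, eta, steps):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:
            break                   # exactly stationary; stop
        x = x - eta * g / norm      # step length eta, independent of |g|
    return x
```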
Author:
Chen, Keyi, Orabona, Francesco
We propose a new class of online learning algorithms, generalized implicit Follow-The-Regularized-Leader (FTRL), that expands the scope of the FTRL framework. Generalized implicit FTRL can recover known algorithms, such as FTRL with linearized losses and implicit…
External link:
http://arxiv.org/abs/2306.00201
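One special case named in the snippet, FTRL with linearized losses, has a closed form under the L2 regularizer $\psi(x) = \|x\|^2 / (2\eta)$: the next iterate is $x_{t+1} = \operatorname{argmin}_x \langle \sum_{s \le t} g_s, x \rangle + \psi(x) = -\eta \sum_{s \le t} g_s$. A minimal sketch of this special case (not the paper's generalized implicit update):

```python
import numpy as np

def ftrl_linearized(grads, eta):
    """Yield x_{t+1} = -eta * (g_1 + ... + g_t) after each gradient g_t."""
    g_sum = None
    for g in grads:
        g = np.asarray(g, dtype=float)
        g_sum = g.copy() if g_sum is None else g_sum + g
        yield -eta * g_sum  # minimizer of <g_sum, x> + ||x||^2 / (2 * eta)
```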
Author:
Marine Goudelin, Bruno Evrard, Roxana Donisanu, Céline Gonzalez, Christophe Truffy, Marie Orabona, Antoine Galy, François-Xavier Lapébie, Yvan Jamilloux, Elodie Vandeix, Dominique Belcour, Charles Hodler, Lucie Ramirez, Rémi Gagnoud, Catherine Chapellas, Philippe Vignon
Published in:
Annals of Intensive Care, Vol 14, Iss 1, Pp 1-8 (2024)
Background: The objective was to assess the agreement between therapeutic proposals derived from basic critical care echocardiography performed by novice operators in ultrasonography after a limited training (residents) and by experts considered…
External link:
https://doaj.org/article/f675b5aa404b4530954372369af47ff9
We consider the problem of estimating the mean of a sequence of random elements $f(X_1, \theta), \ldots, f(X_n, \theta)$, where $f$ is a fixed scalar function, $S = (X_1, \ldots, X_n)$ are independent random variables, and $\theta$ is a possibly…
External link:
http://arxiv.org/abs/2302.05829
We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(\delta,\epsilon)$-stationary point from $O(\epsilon^{-4}\delta^{-1})$…
External link:
http://arxiv.org/abs/2302.03775
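For reference, the usual meaning of a $(\delta, \epsilon)$-stationary point in this literature is the Goldstein notion (stated here from general background, not from the truncated abstract): $x$ is $(\delta, \epsilon)$-stationary for $f$ if $\mathrm{dist}\big(0, \mathrm{conv}\{\partial f(y) : \|y - x\| \le \delta\}\big) \le \epsilon$, i.e., some convex combination of (sub)gradients taken within a $\delta$-ball around $x$ has norm at most $\epsilon$.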