Showing 1 - 10 of 20 for search: '"Mulayoff, Rotem"'
When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities …
External link:
http://arxiv.org/abs/2405.15719
Langevin dynamics (LD) is widely used for sampling from distributions and for optimization. In this work, we derive a closed-form expression for the expected loss of preconditioned LD near stationary points of the objective function. We use the fact …
External link:
http://arxiv.org/abs/2402.13810
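
To make the snippet concrete, below is a minimal sketch of a preconditioned Langevin dynamics step, assuming the standard Euler-Maruyama discretization; the quadratic example and all names are illustrative, not taken from the paper.

import numpy as np

def preconditioned_ld(grad, x0, P, step=1e-3, n_steps=10_000, rng=None):
    # One common form of preconditioned LD:
    #   x <- x - step * P @ grad(x) + sqrt(2 * step) * sqrt(P) @ noise,
    # where P is symmetric positive definite; plain LD is P = I.
    rng = np.random.default_rng() if rng is None else rng
    P_sqrt = np.linalg.cholesky(P)  # a matrix square root of P
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * (P @ grad(x)) + np.sqrt(2.0 * step) * (P_sqrt @ noise)
    return x

# Near a stationary point, the quadratic model f(x) = 0.5 * x @ H @ x is the
# relevant regime, so the behavior of LD there can be probed by sampling.
H = np.diag([10.0, 1.0])
x_final = preconditioned_ld(lambda x: H @ x, x0=np.zeros(2), P=np.eye(2))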
We study the type of solutions to which stochastic gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss. Our results are based on a dynamical stability analysis. In the univariate case, …
External link:
http://arxiv.org/abs/2306.17499
Author:
Mulayoff, Rotem, Michaeli, Tomer
The dynamical stability of optimization methods in the vicinity of minima of the loss has recently attracted significant attention. For gradient descent (GD), stable convergence is possible only to minima that are sufficiently flat w.r.t. the step size …
External link:
http://arxiv.org/abs/2306.07850
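
The flatness condition the snippet alludes to is the classical linear-stability criterion for GD: convergence to a minimum is stable only if the sharpness (the largest Hessian eigenvalue) is at most 2 divided by the step size. A small illustrative check, not code from the paper:

import numpy as np

def is_gd_stable(hessian, step_size):
    # GD with step size eta is linearly stable at a minimum only if
    # lambda_max(Hessian) <= 2 / eta, i.e. the minimum is flat enough.
    sharpness = np.linalg.eigvalsh(hessian).max()
    return sharpness <= 2.0 / step_size

# A minimum with sharpness 8 is stable for eta = 0.2 (2 / 0.2 = 10)
H = np.diag([8.0, 0.5])
print(is_gd_stable(H, step_size=0.2))  # True
print(is_gd_stable(H, step_size=0.3))  # False: 2 / 0.3 ~ 6.7 < 8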
Author:
Haas, René, Huberman-Spiegelglas, Inbar, Mulayoff, Rotem, Graßhof, Stella, Brandt, Sami S., Michaeli, Tomer
Denoising Diffusion Models (DDMs) have emerged as a strong competitor to Generative Adversarial Networks (GANs). However, despite their widespread use in image synthesis and editing applications, their latent space is still not as well understood. …
External link:
http://arxiv.org/abs/2303.11073
In this paper, we propose a spectral method for deriving functions that are jointly smooth on multiple observed manifolds. This allows us to register measurements of the same phenomenon by heterogeneous sensors, and to reject sensor-specific noise. …
External link:
http://arxiv.org/abs/2004.04386
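
One common recipe for this kind of spectral construction, sketched here under assumptions (Gaussian-kernel eigenvectors per sensor, then an SVD of the stacked bases); this is a plausible illustration, not the paper's exact algorithm:

import numpy as np

def kernel_eigvecs(X, n_vecs=20, eps=1.0):
    # Leading eigenvectors of a Gaussian kernel on one sensor's samples;
    # they span the smoothest functions on that sensor's manifold.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    vals, vecs = np.linalg.eigh(np.exp(-d2 / eps))
    return vecs[:, ::-1][:, :n_vecs]  # sort by decreasing eigenvalue

def jointly_smooth(X1, X2, n_vecs=20, n_out=5):
    # X1 and X2 hold simultaneous measurements of the same n events by two
    # sensors. Eigenvectors of a symmetric kernel are orthonormal, so the
    # top left singular vectors of [U1, U2] are the functions best
    # expressible in both smooth subspaces (singular value near sqrt(2)).
    U1, U2 = kernel_eigvecs(X1, n_vecs), kernel_eigvecs(X2, n_vecs)
    F, s, _ = np.linalg.svd(np.hstack([U1, U2]), full_matrices=False)
    return F[:, :n_out], s[:n_out]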
Author:
Mulayoff, Rotem, Michaeli, Tomer
It is well known that (stochastic) gradient descent has an implicit bias towards flat minima. In deep neural network training, this mechanism serves to screen out minima. However, the precise effect that this has on the trained network is not yet fully understood …
External link:
http://arxiv.org/abs/2002.04710
Author:
Mulayoff, Rotem, Michaeli, Tomer
Sparse representation over redundant dictionaries constitutes a good model for many classes of signals (e.g., patches of natural images, segments of speech signals, etc.). However, despite its popularity, very little is known about the representation …
External link:
http://arxiv.org/abs/1804.04897
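
For readers new to the topic, a minimal sketch of computing a sparse representation over a redundant dictionary via plain orthogonal matching pursuit; this is background, not code or a method from the paper:

import numpy as np

def omp(D, y, n_nonzero):
    # Orthogonal matching pursuit: greedily pick the atom most correlated
    # with the residual, then re-fit all picked coefficients by least squares.
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Redundant dictionary: 20-dimensional signals, 50 unit-norm atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 3] - 1.5 * D[:, 17]   # a 2-sparse signal
print(np.nonzero(omp(D, y, 2))[0])   # expect atoms {3, 17}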
Academic article
This result is not available to guests; sign in to view it.

Academic article
This result is not available to guests; sign in to view it.