Showing 1 - 10 of 151 for search: '"Mallat, Stephane"'
Score diffusion methods can learn probability densities from samples. The score of the noise-corrupted density is estimated using a deep neural network, which is then used to iteratively transport a Gaussian white noise density to a target density. …
External link:
http://arxiv.org/abs/2410.11646
Author:
Lempereur, Etienne; Mallat, Stéphane
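A minimal sketch of the transport described in this abstract, assuming a toy 1-D Gaussian target whose noise-corrupted score is known in closed form, so it runs without a trained network; the noise schedule, step sizes, and annealed-Langevin update are illustrative choices, not the paper's.

```python
import numpy as np

# Toy target: N(mu, tau^2). Its noise-corrupted score has a closed form,
# standing in for the deep-network score estimate described in the abstract.
mu, tau = 2.0, 0.5

def score(x, sigma):
    # grad_x log p_sigma(x) for p_sigma = N(mu, tau^2 + sigma^2)
    return -(x - mu) / (tau**2 + sigma**2)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)        # start from Gaussian white noise
sigmas = np.geomspace(1.0, 0.01, 50)   # decreasing noise schedule (assumed)

for sigma in sigmas:                   # annealed Langevin transport
    step = 0.1 * sigma**2
    for _ in range(20):
        x += step * score(x, sigma) + np.sqrt(2 * step) * rng.standard_normal(x.size)

print(f"mean {x.mean():.3f} (target {mu}), std {x.std():.3f} (target {tau})")
```

With the analytic score replaced by a trained denoising network, the same loop becomes the score-based sampler the abstract refers to.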
Finding low-dimensional interpretable models of complex physical fields such as turbulence remains an open question, 80 years after the pioneering work of Kolmogorov. Estimating high-dimensional probability distributions from data samples suffers from a …
External link:
http://arxiv.org/abs/2405.03468
Published in:
International Conference on Learning Representations (ICLR), vol. 12, May 2024. Recipient of an Outstanding Paper Award.
Deep neural networks (DNNs) trained for image denoising are able to generate high-quality samples with score-based reverse diffusion algorithms. These impressive capabilities seem to imply an escape from the curse of dimensionality, but recent report …
External link:
http://arxiv.org/abs/2310.02557
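The connection between denoising and the score used by such samplers is the classical Miyasawa/Tweedie identity: for y = x + σz with z ~ N(0, I), the minimum mean-squared-error denoiser satisfies E[x | y] = y + σ² ∇_y log p_σ(y), so a trained denoiser yields a score estimate. A quick numerical check on a 1-D Gaussian case, where both sides are analytic:

```python
import numpy as np

mu, tau, sigma = 0.0, 1.0, 0.5

def mmse_denoiser(y):
    # posterior mean E[x | y] for prior N(mu, tau^2) and noise N(0, sigma^2)
    return mu + tau**2 / (tau**2 + sigma**2) * (y - mu)

def score(y):
    # grad_y log p_sigma(y), where p_sigma = N(mu, tau^2 + sigma^2)
    return -(y - mu) / (tau**2 + sigma**2)

y = np.linspace(-3.0, 3.0, 7)
# (denoiser(y) - y) / sigma^2 equals the score of the noisy density
print(np.allclose((mmse_denoiser(y) - y) / sigma**2, score(y)))  # True
```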
We introduce a Path Shadowing Monte-Carlo method, which provides predictions of future paths given any generative model. At any given date, it averages future quantities over generated price paths whose past history matches, or 'shadows', the actual …
External link:
http://arxiv.org/abs/2308.01486
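A rough sketch of the averaging step described above, under simplifying assumptions: a plain random walk stands in for the generative model, past windows are compared through their increments with a Gaussian kernel (the bandwidth eta is a hypothetical choice), and the predicted quantity is future realized variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, past, future = 5_000, 50, 20
# toy "generative model": a bank of random-walk price paths
paths = rng.standard_normal((n_paths, past + future)).cumsum(axis=1)

observed_past = rng.standard_normal(past).cumsum()   # stand-in for actual history

# compare past windows through their increments (returns), not raw levels
obs_ret = np.diff(observed_past)
gen_ret = np.diff(paths[:, :past], axis=1)
d2 = ((gen_ret - obs_ret) ** 2).mean(axis=1)

eta = 0.5                                  # kernel bandwidth (hypothetical)
w = np.exp(-d2 / (2 * eta**2))             # paths that "shadow" history get
w /= w.sum()                               # the largest weights

future_rv = (np.diff(paths[:, past:], axis=1) ** 2).sum(axis=1)
prediction = w @ future_rv                 # shadowing estimate of the future quantity
print(f"predicted future realized variance: {prediction:.3f}")
```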
Published in:
PNAS Nexus, Volume 3, Issue 4, April 2024, pgae103
Physicists routinely need probabilistic models for a number of tasks such as parameter inference or the generation of new realizations of a field. Establishing such models for highly non-Gaussian fields is a challenge, especially when the number of …
External link:
http://arxiv.org/abs/2306.17210
There is a growing gap between the impressive results of deep image generative models and classical algorithms that offer theoretical guarantees. The former suffer from mode collapse or memorization issues, limiting their application to scientific …
External link:
http://arxiv.org/abs/2306.00181
A central question in deep learning is to understand the functions learned by deep networks. What is their approximation class? Do the learned weights and representations depend on initialization? Previous empirical work has evidenced that kernels …
External link:
http://arxiv.org/abs/2305.18512
Published in:
ICLR 2023
Deep neural networks can learn powerful prior probability models for images, as evidenced by the high-quality generations obtained with recent score-based diffusion methods. But the means by which these networks capture complex global statistical …
External link:
http://arxiv.org/abs/2303.02984
Score-based generative models (SGMs) synthesize new data samples from Gaussian white noise by running a time-reversed Stochastic Differential Equation (SDE) whose drift coefficient depends on some probabilistic score. The discretization of such SDEs …
External link:
http://arxiv.org/abs/2208.05003
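For concreteness, a minimal Euler–Maruyama discretization of such a time-reversed SDE, assuming a variance-exploding diffusion and a 1-D Gaussian target so the score is analytic; the schedule and step count are assumptions of this sketch rather than choices analyzed in the paper.

```python
import numpy as np

mu, tau = 1.0, 0.3            # 1-D Gaussian data distribution N(mu, tau^2)
smin, smax = 0.01, 5.0        # variance-exploding noise scale range (assumed)

def sigma(t):
    # sigma(t) = smin * (smax/smin)^t, the standard VE schedule
    return smin * (smax / smin) ** t

def score(x, t):
    # grad_x log p_t(x) for the Gaussian target convolved with N(0, sigma(t)^2)
    return -(x - mu) / (tau**2 + sigma(t)**2)

rng = np.random.default_rng(2)
n_steps = 500
ts = np.linspace(1.0, 0.0, n_steps + 1)
x = sigma(1.0) * rng.standard_normal(100_000)    # approximate sample from p_1

for t0, t1 in zip(ts[:-1], ts[1:]):              # integrate backwards in time
    dt = t0 - t1                                 # positive step size
    g2 = sigma(t0)**2 * 2 * np.log(smax / smin)  # g(t)^2 = d sigma^2(t) / dt
    # reverse SDE: dx = -g^2 * score dt + g dW, run in reverse time
    x += g2 * score(x, t0) * dt
    x += np.sqrt(g2 * dt) * rng.standard_normal(x.shape)

print(f"mean {x.mean():.3f} (target {mu}), std {x.std():.3f} (target {tau})")
```

Shrinking the step size reduces the discretization error, which is the quantity this paper's analysis concerns.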
We develop a multiscale approach to estimate high-dimensional probability distributions from a dataset of physical fields or configurations observed in experiments or simulations. In this way we can estimate energy functions (or Hamiltonians) and …
External link:
http://arxiv.org/abs/2207.04941
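A heavily simplified illustration of the multiscale idea, not the paper's estimator: 1-D fields are coarse-grained by a Haar-like average/detail split, and each scale's detail coefficients are modeled by a linear-Gaussian fit conditioned on the coarse field, so the estimated density factorizes across scales.

```python
import numpy as np

rng = np.random.default_rng(3)
fields = rng.standard_normal((2_000, 64))      # stand-in for physical fields

def split(x):
    # one coarse-graining step: local (Haar-like) averages and details
    coarse = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    detail = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    return coarse, detail

x = fields
models = []
while x.shape[1] > 1:
    coarse, detail = split(x)
    # per-scale conditional model: linear regression of details on the
    # coarse field (a crude Gaussian stand-in for the paper's models)
    A, *_ = np.linalg.lstsq(coarse, detail, rcond=None)
    resid = detail - coarse @ A
    models.append((A, resid.std()))
    x = coarse                                  # recurse on the coarser field

print(f"estimated {len(models)} scale-conditional models")
```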