Showing 1 - 10 of 7,303
for search: '"MALLAT, A."'
Score diffusion methods can learn probability densities from samples. The score of the noise-corrupted density is estimated with a deep neural network, which is then used to iteratively transport a Gaussian white noise density to a target density.
External link:
http://arxiv.org/abs/2410.11646
Author:
Lempereur, Etienne, Mallat, Stéphane
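The iterative transport described in this abstract can be illustrated with a toy sketch. Below, the target is a hypothetical 1-D Gaussian so its noise-corrupted score is analytic (in the paper's setting this score would be a learned deep network); annealed Langevin dynamics then transports white noise toward the target. All names and the noise schedule are illustrative choices, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target: N(mu, sigma^2). Here the score of the
# noise-corrupted density is analytic; in practice it is a trained network.
mu, sigma = 2.0, 0.5

def score(x, s):
    # Score of the target convolved with N(0, s^2): d/dx log p_s(x).
    return -(x - mu) / (sigma**2 + s**2)

# Annealed Langevin dynamics: transport Gaussian white noise toward the
# target by following the score at decreasing noise levels.
x = rng.standard_normal(10_000)           # white-noise initialization
for s in np.geomspace(1.0, 0.01, 30):     # decreasing noise schedule
    eps = 0.1 * s**2                      # step size scaled to noise level
    for _ in range(20):
        x = x + eps * score(x, s) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(x.mean(), x.std())                  # should approach mu and sigma
```

The sample mean and standard deviation end up close to the target's, up to a small discretization bias from the finite Langevin step size.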
Finding low-dimensional interpretable models of complex physical fields such as turbulence remains an open question, 80 years after the pioneer work of Kolmogorov. Estimating high-dimensional probability distributions from data samples suffers from a
External link:
http://arxiv.org/abs/2405.03468
Published in:
International Conference on Learning Representations (ICLR), vol. 12, May 2024. Recipient of an Outstanding Paper award.
Deep neural networks (DNNs) trained for image denoising are able to generate high-quality samples with score-based reverse diffusion algorithms. These impressive capabilities seem to imply an escape from the curse of dimensionality, but recent report
External link:
http://arxiv.org/abs/2310.02557
We introduce a Path Shadowing Monte-Carlo method, which provides prediction of future paths, given any generative model. At any given date, it averages future quantities over generated price paths whose past history matches, or 'shadows', the actual
External link:
http://arxiv.org/abs/2308.01486
Published in:
PNAS Nexus, Volume 3, Issue 4, April 2024, pgae103
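The shadowing idea in this entry can be sketched in a few lines: given paths from any generative model, weight each one by how closely its past matches the observed history, then average a future quantity over those weighted paths. The generative model below is a deliberately trivial stand-in (i.i.d. Gaussian log-returns) and the soft-matching weight is an illustrative assumption, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in generative model: i.i.d. Gaussian log-returns. Any generative
# model producing (past + future) paths could be substituted here.
n_paths, past_len, future_len = 50_000, 20, 5
paths = rng.standard_normal((n_paths, past_len + future_len)) * 0.01

# Observed recent history whose future we want to predict.
history = rng.standard_normal(past_len) * 0.01

# "Shadowing": weight each generated path by how closely its past
# matches the observed history, then average a future quantity.
d2 = ((paths[:, :past_len] - history) ** 2).mean(axis=1)  # past mismatch
w = np.exp(-d2 / d2.mean())                               # soft match weight
w /= w.sum()

# Predicted future realized volatility, averaged over shadowing paths.
future_vol = np.sqrt((paths[:, past_len:] ** 2).mean(axis=1))
prediction = float(w @ future_vol)
print(prediction)
```

With this i.i.d. toy model the weighting cannot add information, so the prediction simply recovers the unconditional volatility; with a model that captures temporal dependence, the conditioning on matched histories is what makes the average informative.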
Physicists routinely need probabilistic models for a number of tasks such as parameter inference or the generation of new realizations of a field. Establishing such models for highly non-Gaussian fields is a challenge, especially when the number of s
External link:
http://arxiv.org/abs/2306.17210
There is a growing gap between the impressive results of deep image generative models and classical algorithms that offer theoretical guarantees. The former suffer from mode collapse or memorization issues, limiting their application to scientific da
External link:
http://arxiv.org/abs/2306.00181
A central question in deep learning is to understand the functions learned by deep networks. What is their approximation class? Do the learned weights and representations depend on initialization? Previous empirical work has evidenced that kernels de
External link:
http://arxiv.org/abs/2305.18512
Author:
Robert J. Cueto, BS, Kevin A. Hao, BS, Daniel S. O’Keefe, BS, Marlee A. Mallat, BS, Keegan M. Hones, MD, Lacie M. Turnbull, MD, Jonathan O. Wright, MD, Jose Soberon, MD, Bradley S. Schoch, MD, Joseph J. King, MD
Published in:
JSES International, Vol 8, Iss 4, Pp 866-872 (2024)
Background: Biomechanical research demonstrates increased subscapularis abduction range of motion (ROM) when the tendon’s upper two-thirds is repaired over-the-top of the center of rotation during reverse shoulder arthroplasty (RSA). This study com
External link:
https://doaj.org/article/410a996044ee4a59b804899e5b7921ce
Published in:
ICLR 2023
Deep neural networks can learn powerful prior probability models for images, as evidenced by the high-quality generations obtained with recent score-based diffusion methods. But the means by which these networks capture complex global statistical str
External link:
http://arxiv.org/abs/2303.02984
Recent works have shown that selecting an optimal model architecture suited to the differential privacy setting is necessary to achieve the best possible utility for a given privacy budget using differentially private stochastic gradient descent (DP-
External link:
http://arxiv.org/abs/2302.02910
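For context on the DP-SGD setting mentioned in this last entry, a single private gradient step can be sketched as follows: clip each per-example gradient to a fixed norm, sum, and add Gaussian noise calibrated to that norm (the Gaussian mechanism). The array shapes and hyperparameter values are illustrative assumptions; per-example gradients are taken as given rather than computed by a framework.

```python
import numpy as np

rng = np.random.default_rng(2)

# One DP-SGD step, assuming per-example gradients are already available
# as rows of `grads` (32 examples, 10 parameters; illustrative sizes).
clip_norm, noise_mult, lr = 1.0, 1.1, 0.1
grads = rng.standard_normal((32, 10))

# Per-example clipping: rescale any gradient whose norm exceeds clip_norm.
norms = np.linalg.norm(grads, axis=1, keepdims=True)
clipped = grads / np.maximum(1.0, norms / clip_norm)

# Gaussian mechanism: noise scaled to the clipping norm bounds sensitivity.
noisy_sum = clipped.sum(axis=0) + rng.normal(
    0.0, noise_mult * clip_norm, size=grads.shape[1]
)
update = lr * noisy_sum / len(grads)   # noisy average gradient step

params = np.zeros(10)
params -= update
print(params)
```

The clipping bound is what makes the noise scale meaningful: it caps each example's contribution, so the added Gaussian noise yields a quantifiable privacy guarantee per step.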