Showing 1 - 10 of 41 for the search: '"Sangnier, Maxime"'
Author:
Bonnet, Anna, Sangnier, Maxime
This paper addresses nonparametric estimation of nonlinear multivariate Hawkes processes, where the interaction functions are assumed to lie in a reproducing kernel Hilbert space (RKHS). Motivated by applications in neuroscience, the model allows… (a modelling sketch follows the link below)
External link:
http://arxiv.org/abs/2411.00621
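As a rough illustration of the modelling idea in the abstract above (not the authors' estimator), one interaction function can be represented as a finite kernel expansion in an RKHS; the Gaussian kernel, anchor grid, and weights below are illustrative assumptions.

    import numpy as np

    # Minimal sketch: one Hawkes interaction function h represented as a
    # kernel expansion h(t) = sum_k alpha_k * k(t, t_k) in an RKHS.
    # Kernel choice, anchors and weights are assumptions for illustration.

    def gaussian_kernel(s, t, bandwidth=0.5):
        return np.exp(-(s - t) ** 2 / (2 * bandwidth ** 2))

    anchors = np.linspace(0.0, 5.0, 11)  # anchor points t_k on a time grid
    weights = np.random.default_rng(0).normal(size=anchors.size)  # alpha_k

    def interaction(t):
        # h may take negative values, which is how inhibition enters the model
        return np.sum(weights * gaussian_kernel(t, anchors))

    print(interaction(1.0))

Since h is not constrained to be nonnegative, a nonlinearity (e.g. a positive part) is applied to the resulting intensity, which is what makes the process nonlinear.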
Classic estimation methods for Hawkes processes rely on the assumption that the observed event times are indeed a realisation of a Hawkes process, without considering any potential perturbation of the model. However, in practice, observations are often… (a perturbed-simulation sketch follows the link below)
External link:
http://arxiv.org/abs/2405.12581
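The paper's exact perturbation model is not visible in the truncated abstract; as a stand-in, the sketch below simulates a univariate exponential Hawkes process by Ogata's thinning and then jitters the event times with Gaussian noise, one simple way observations can deviate from an exact Hawkes realisation.

    import numpy as np

    rng = np.random.default_rng(1)

    def intensity(t, events, mu, alpha, beta):
        # lambda(t) = mu + sum_{t_i <= t} alpha * exp(-beta * (t - t_i))
        return mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)

    def simulate_hawkes(mu, alpha, beta, horizon):
        # Ogata's thinning (excitation only, alpha >= 0): between events the
        # intensity decays, so its current value is a valid upper bound.
        events, t = [], 0.0
        while True:
            lam_bar = intensity(t, events, mu, alpha, beta)
            t += rng.exponential(1.0 / lam_bar)
            if t >= horizon:
                return np.array(events)
            if rng.uniform() * lam_bar <= intensity(t, events, mu, alpha, beta):
                events.append(t)

    clean = simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, horizon=20.0)
    noisy = np.sort(clean + rng.normal(scale=0.05, size=clean.size))  # jittered times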
The multivariate Hawkes process is a past-dependent point process used to model the relationship of event occurrences between different phenomena. Although the Hawkes process was originally introduced to describe excitation effects, which means that one event increases the probability of future occurrences… (the standard intensity form is recalled after the link below)
External link:
http://arxiv.org/abs/2205.04107
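For reference, a standard way to write the conditional intensity of a d-dimensional nonlinear Hawkes process, where the positive part keeps the intensity nonnegative even when some interaction functions h_ij are negative (inhibition); the notation is the common one, not necessarily the paper's:

    \lambda_i(t) = \Big( \mu_i + \sum_{j=1}^{d} \int_0^t h_{ij}(t - s)\, \mathrm{d}N_j(s) \Big)_+ , \qquad i = 1, \dots, d,

with baseline rates \mu_i > 0 and counting processes N_j.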
Published in:
Statistics and Probability Letters, Elsevier, 2021
In this paper, we present a maximum likelihood method for estimating the parameters of a univariate Hawkes process with self-excitation or inhibition. Our work generalizes techniques and results that were restricted to the self-exciting scenario… (a likelihood sketch follows the link below)
External link:
http://arxiv.org/abs/2103.05299
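A minimal sketch of the classical (excitation-only) exponential Hawkes likelihood that such methods build on; handling inhibition, which is the paper's contribution, requires a clipped intensity and is not attempted here. The toy data and starting point are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, times, horizon):
        # -log L = integral of lambda over [0, T] - sum_i log lambda(t_i),
        # for lambda(t) = mu + sum_{t_j < t} alpha * exp(-beta * (t - t_j)).
        mu, alpha, beta = params
        # recursion: A_i = exp(-beta * (t_i - t_{i-1})) * (1 + A_{i-1}), A_0 = 0
        A = np.zeros(times.size)
        for i in range(1, times.size):
            A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
        log_intensities = np.log(mu + alpha * A)
        compensator = mu * horizon + (alpha / beta) * np.sum(
            1.0 - np.exp(-beta * (horizon - times)))
        return compensator - log_intensities.sum()

    times = np.array([0.3, 0.7, 1.1, 1.6, 2.9, 3.0, 3.2, 4.5])  # toy events
    res = minimize(neg_log_lik, x0=[0.5, 0.3, 1.0], args=(times, 5.0),
                   bounds=[(1e-6, None), (1e-6, None), (1e-6, None)])
    print(res.x)  # estimated (mu, alpha, beta)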
Recent advances in adversarial attacks and Wasserstein GANs have advocated for the use of neural networks with restricted Lipschitz constants. Motivated by these observations, we study the recently introduced GroupSort neural networks, with constraints on the weights… (the activation is sketched after the link below)
External link:
http://arxiv.org/abs/2006.05254
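The GroupSort activation itself is simple to state: split the units of a layer into groups of fixed size and sort within each group (group size 2 is the MaxMin activation). A minimal NumPy version, with illustrative shapes:

    import numpy as np

    def group_sort(x, group_size=2):
        # Split the last axis into groups and sort each group.
        # GroupSort is gradient-norm preserving, which is why it pairs well
        # with weight constraints enforcing a prescribed Lipschitz constant.
        n = x.shape[-1]
        assert n % group_size == 0, "width must be divisible by group size"
        groups = x.reshape(*x.shape[:-1], n // group_size, group_size)
        return np.sort(groups, axis=-1).reshape(x.shape)

    print(group_sort(np.array([3.0, -1.0, 0.5, 2.0])))  # [-1.  3.  0.5  2.]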
Published in:
Journal of Machine Learning Research, Microtome Publishing, 2021
Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of… (the Wasserstein objective is recalled after the link below)
External link:
http://arxiv.org/abs/2006.02682
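For context, the standard Wasserstein GAN objective that this line of work analyses (the general formulation, not wording from the paper):

    \min_{G} \max_{D \in \operatorname{Lip}_1} \; \mathbb{E}_{X \sim \mu}[D(X)] - \mathbb{E}_{Z \sim \nu}[D(G(Z))]

By Kantorovich-Rubinstein duality, the inner maximum over 1-Lipschitz critics D equals the Wasserstein-1 distance between the data distribution \mu and the distribution of G(Z), which is what motivates Lipschitz-constrained architectures such as the GroupSort networks above.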
Published in:
Neurocomputing, 1 February 2023, 520:301-319
Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable… (a minimal sketch follows the link below)
External link:
http://arxiv.org/abs/1808.09670
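A compact way to see the functional-gradient-descent view: for the squared loss, the negative functional gradient at the current model is just the residual vector, so each boosting round fits a weak learner to the residuals and takes a small step in that direction. A minimal sketch on synthetic data (learning rate, depth, and data are illustrative):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    F = np.full(200, y.mean())      # initial constant model
    learners, lr = [], 0.1
    for _ in range(100):
        residuals = y - F           # negative gradient of 0.5 * (y - F)^2 in F
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        F += lr * tree.predict(X)   # gradient step along the fitted direction
        learners.append(tree)

    print(np.mean((y - F) ** 2))    # training error after 100 rounds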
Machine learning has witnessed tremendous success in solving tasks depending on a single hyperparameter. When considering simultaneously a finite number of tasks, multi-task learning enables one to account for the similarities of the tasks via appropriate… (a finite-grid illustration follows the link below)
External link:
http://arxiv.org/abs/1805.08809
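A concrete instance of a task family indexed by a single hyperparameter is quantile regression indexed by the quantile level. The crude stand-in below fits one model per level on a finite grid; learning the whole continuum jointly is the point of the paper and is not attempted here. Data and levels are illustrative.

    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(300, 1))
    y = X[:, 0] + (0.2 + 0.3 * X[:, 0]) * rng.normal(size=300)  # heteroscedastic

    for tau in [0.1, 0.5, 0.9]:     # finite grid standing in for the continuum
        model = QuantileRegressor(quantile=tau, alpha=0.0).fit(X, y)
        print(tau, model.intercept_, model.coef_)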
Published in:
The Annals of Statistics, 1 June 2020, 48(3), 1539-1566.
External link:
https://www.jstor.org/stable/26931522