Showing 1 - 10 of 22 for search: '"Chatalic, Antoine"'
Author:
Caldarelli, Edoardo, Chatalic, Antoine, Colomé, Adrià, Molinari, Cesare, Ocampo-Martinez, Carlos, Torras, Carme, Rosasco, Lorenzo
In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems. While kernel methods typically have large computational requirements, we show how random subspaces (Nys…
External link:
http://arxiv.org/abs/2403.02811
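The snippet breaks off at "Nys…", i.e. the random-subspace idea. As a rough, self-contained illustration (not the paper's control pipeline), here is a minimal Nyström feature map in Python; the Gaussian kernel, landmark count, and regularization constants are my own choices:

```python
import numpy as np

def nystrom_features(X, landmarks, gamma=1.0):
    """Approximate Gaussian-kernel features from m landmark points.

    The map phi satisfies phi(x) . phi(y) ~ k(x, y) with
    k(x, y) = exp(-gamma * ||x - y||^2), so downstream linear algebra
    scales with the number of landmarks m instead of the dataset size n.
    """
    def gauss(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_mm = gauss(landmarks, landmarks)            # m x m landmark kernel
    K_nm = gauss(X, landmarks)                    # n x m cross kernel
    # Inverse square root of K_mm via (regularized) eigendecomposition.
    w, V = np.linalg.eigh(K_mm + 1e-10 * np.eye(len(landmarks)))
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    return K_nm @ inv_sqrt                        # n x m feature matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Z = X[rng.choice(len(X), size=20, replace=False)]  # 20 random landmarks
print(nystrom_features(X, Z).shape)                # (500, 20)
```

Downstream regression or control synthesis can then operate on the 500 x 20 feature matrix rather than a full 500 x 500 kernel matrix.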
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand. We focus on the setting in which the target distribution is o…
External link:
http://arxiv.org/abs/2311.13548
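The snippet only states the problem; a plain Monte Carlo estimator makes the setup concrete (the paper itself targets more sample-efficient rules, which this baseline does not reproduce):

```python
import numpy as np

# Problem setup in its simplest form: estimate the integral of f against a
# target distribution P using only pointwise evaluations f(x_i), x_i ~ P.
rng = np.random.default_rng(1)
f = lambda x: np.cos(x).sum(axis=1)        # integrand, queried pointwise only
samples = rng.normal(size=(100_000, 2))    # P = standard 2-D Gaussian
print(f(samples).mean())                   # exact value: 2*exp(-1/2) ~ 1.2131
```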
Author:
Meanti, Giacomo, Chatalic, Antoine, Kostic, Vladimir R., Novelli, Pietro, Pontil, Massimiliano, Rosasco, Lorenzo
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces ca…
External link:
http://arxiv.org/abs/2306.04520
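As a toy version of the idea, assuming a finite feature map rather than the kernel-space estimators the paper actually analyzes, a PCR-style Koopman estimate fits in a few lines; all names and sizes below are illustrative:

```python
import numpy as np

def koopman_pcr(X, Y, features, r):
    """Toy principal-component-regression estimate of a Koopman operator.

    X, Y are snapshot pairs (y_i is the state one step after x_i); the
    operator is fitted on feature maps truncated to the top r components.
    """
    PhiX, PhiY = features(X), features(Y)
    # Top r principal directions of the input features.
    U = np.linalg.svd(PhiX, full_matrices=False)[2][:r].T
    # Least-squares fit of the projected one-step dynamics.
    A = np.linalg.lstsq(PhiX @ U, PhiY @ U, rcond=None)[0]
    return np.linalg.eigvals(A)  # approximate Koopman eigenvalues

# Sanity check: linear system x_{t+1} = 0.9 x_t with identity features.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 1))
print(koopman_pcr(X, 0.9 * X, lambda Z: Z, r=1))  # ~ [0.9]
```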
M$^2$M: A general method to perform various data analysis tasks from a differentially private sketch
Author:
Houssiau, Florimond, Schellekens, Vincent, Chatalic, Antoine, Annamraju, Shreyas Kumar, de Montjoye, Yves-Alexandre
Differential privacy is the standard privacy definition for performing analyses over sensitive data. Yet, its privacy budget bounds the number of tasks an analyst can perform with reasonable accuracy, which makes it challenging to deploy in practice.
External link:
http://arxiv.org/abs/2211.14062
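The M$^2$M method itself is not described in the snippet; the sketch below only shows the standard ingredient it builds on, a mean embedding released through the Gaussian mechanism. The feature choice, sensitivity bound, and noise calibration are textbook assumptions, not the paper's construction:

```python
import numpy as np

def private_sketch(X, Omega, eps, delta):
    """Release a random-Fourier-feature mean sketch with Gaussian noise.

    Each sample maps to a feature vector of L2 norm exactly 1, so the L2
    sensitivity of the averaged sketch is at most 2/n; the Gaussian
    mechanism then gives (eps, delta)-differential privacy.
    """
    n, m = len(X), Omega.shape[1]
    proj = X @ Omega
    feats = np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(m)
    z = feats.mean(axis=0)
    sigma = (2.0 / n) * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return z + np.random.default_rng(3).normal(scale=sigma, size=2 * m)

X = np.random.default_rng(4).normal(size=(5000, 2))
Omega = np.random.default_rng(5).normal(size=(2, 50))  # random frequencies
z = private_sketch(X, Omega, eps=1.0, delta=1e-5)
print(z.shape)  # (100,) -- one private vector, reusable across analyses
```

The point the abstract makes is visible here: the privacy cost is paid once for the sketch, after which any number of analyses can be run on `z`.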
Published in:
ICML 2022
Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We…
External link:
http://arxiv.org/abs/2201.13055
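A minimal illustration of one way to compress a kernel mean embedding, here via landmark (Nyström-style) weights; whether this matches the paper's exact construction is an assumption, and the landmark count and regularization are arbitrary:

```python
import numpy as np

def gauss(A, B, gamma=0.5):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

# Exact embedding of a sample {x_i}: mu = (1/n) sum_i k(., x_i).
# A compressed surrogate keeps only m landmarks z_j and represents
# mu ~ sum_j alpha_j k(., z_j), with alpha solving a small m x m system.
rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 2))
Z = X[rng.choice(len(X), size=30, replace=False)]   # m = 30 landmarks
alpha = np.linalg.solve(gauss(Z, Z) + 1e-8 * np.eye(len(Z)),
                        gauss(Z, X).mean(axis=1))   # m x m solve, not n x n
# Evaluating mu at a new point now needs O(m) kernel calls instead of O(n);
# the two printed values should be close.
x_new = np.zeros((1, 2))
print(gauss(x_new, Z) @ alpha, gauss(x_new, X).mean())
```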
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments. The learning task is then approximately solved as an inverse pro…
External link:
http://arxiv.org/abs/2110.10996
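Because the sketch is an average of generalized moments, it can be computed in one streaming pass and merged across data chunks. A minimal sketching pass (random Fourier moments; sizes are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(2, 16))                      # random frequencies, m = 16

def sketch(chunk):
    """Average of generalized moments (cos/sin of random projections)."""
    proj = chunk @ W
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1).mean(axis=0)

# Equal-sized chunks, so the merged sketch is a plain mean of chunk sketches.
chunks = [rng.normal(size=(10_000, 2)) for _ in range(5)]
z = np.mean([sketch(c) for c in chunks], axis=0)  # 50k points -> 32 numbers
print(z.shape)                                    # learning then only sees z
```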
Author:
Gribonval, Rémi, Chatalic, Antoine, Keriven, Nicolas, Schellekens, Vincent, Jacques, Laurent, Schniter, Philip
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first cons…
External link:
http://arxiv.org/abs/2008.01839
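In symbols, the sketch these works build on is a vector of generalized moments; one common instantiation (assumed here: random Fourier features with frequencies drawn from a distribution $\Lambda$) reads:

```latex
z = \frac{1}{n}\sum_{i=1}^{n}\Phi(x_i),
\qquad
\Phi(x) = \bigl(e^{\mathrm{i}\,\omega_j^{\top}x}\bigr)_{j=1}^{m},
\qquad
\omega_j \sim \Lambda .
```

Learning then amounts to finding model parameters whose idealized sketch best matches $z$.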
In sketched clustering, a dataset of $T$ samples is first sketched down to a vector of modest size, from which the centroids are subsequently extracted. Advantages include i) reduced storage complexity and ii) centroid extraction complexity independe…
External link:
http://arxiv.org/abs/1712.02849
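A toy version of the extraction step, assuming balanced clusters in one dimension and a generic optimizer rather than a dedicated greedy solver such as CL-OMPR:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
X = np.concatenate([rng.normal(-3, 0.5, 5000), rng.normal(3, 0.5, 5000)])
W = 0.3 * rng.normal(size=10)      # frequency scale is a toy choice

def sk(points):
    """Random Fourier sketch of a 1-D point set (or a single point)."""
    P = np.outer(np.atleast_1d(points), W)
    return np.concatenate([np.cos(P).mean(0), np.sin(P).mean(0)])

z = sk(X)                          # the dataset is never needed again
# Fit two equally weighted centroids whose sketch matches z.
obj = lambda c: np.sum((0.5 * (sk(c[0]) + sk(c[1])) - z) ** 2)
print(minimize(obj, x0=[-1.0, 1.0], method="Nelder-Mead").x)
```

With well-separated clusters this typically lands near the true centers (-3 and 3); real solvers additionally handle unknown weights, higher dimensions, and more centroids.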
Academic article
This result cannot be displayed to users who are not logged in. To view it, please log in.
Author:
Chatalic, Antoine
Published in:
Machine Learning [cs.LG]. Université Rennes 1, 2020. English. ⟨NNT : 2020REN1S030⟩
The topic of this Ph.D. thesis lies at the interface of signal processing, statistics and computer science. It mainly focuses on compressive learning, a paradigm for large-scale machine learning in which the whole dataset is compressed down to…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::015d53c79cf8589423e7590ba5e33ed8
https://theses.hal.science/tel-03023287v2/document