Showing 1 - 10 of 68 results for the search: '"Chatalic, P."'
Author:
Caldarelli, Edoardo, Chatalic, Antoine, Colomé, Adrià, Molinari, Cesare, Ocampo-Martinez, Carlos, Torras, Carme, Rosasco, Lorenzo
In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems. While kernel methods typically have large computational requirements, we show how random subspaces (Nyström…
External link:
http://arxiv.org/abs/2403.02811
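A rough, self-contained illustration of the combination described in this abstract: kernel features restricted to a random Nyström subspace, with ridge regression predicting next-step features. This is a generic sketch, not the paper's algorithm; the dynamics F, the Gaussian kernel, and all sizes below are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def F(x):
        # Hypothetical nonlinear dynamics, invented for this example.
        return np.stack([0.9 * x[:, 0] + 0.1 * x[:, 1],
                         0.9 * x[:, 1] - 0.1 * x[:, 0] ** 3], axis=1)

    def kernel(A, B, s=1.0):
        # Gaussian kernel matrix between the rows of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s ** 2))

    # Snapshot pairs (x_t, x_{t+1}) sampled from the dynamics.
    X = rng.uniform(-1, 1, size=(500, 2))
    Y = F(X)

    # Nyström subspace: m landmark points sampled uniformly from the data.
    m = 50
    Z = X[rng.choice(len(X), size=m, replace=False)]

    # Ridge regression in the m-dimensional feature space phi(x) = k(Z, x):
    # find W such that phi(x_{t+1}) ~ W phi(x_t), a finite-dimensional
    # approximation of the Koopman operator.
    Kxz, Kyz = kernel(X, Z), kernel(Y, Z)
    W = np.linalg.solve(Kxz.T @ Kxz + 1e-6 * np.eye(m), Kxz.T @ Kyz).T

    # One-step prediction of the feature vector of a new state.
    x0 = np.array([[0.5, -0.2]])
    phi_next = W @ kernel(x0, Z).ravel()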
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand. We focus on the setting in which the target distribution is only…
External link:
http://arxiv.org/abs/2311.13548
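For orientation, the problem described here can be written, in generic notation (not necessarily the paper's), as choosing the nodes and weights of a quadrature rule; for integrands in the unit ball of an RKHS $\mathcal H$ with kernel $k$, the worst-case error then has a closed form:
\[
\int f \,\mathrm{d}\pi \;\approx\; \sum_{i=1}^{n} w_i f(x_i),
\qquad
\sup_{\|f\|_{\mathcal H}\le 1}\Big|\int f\,\mathrm{d}\pi-\sum_{i=1}^{n} w_i f(x_i)\Big|
=\Big\|\mu_\pi-\sum_{i=1}^{n} w_i k(x_i,\cdot)\Big\|_{\mathcal H},
\]
where $\mu_\pi=\int k(x,\cdot)\,\mathrm{d}\pi(x)$ is the mean embedding of $\pi$, so designing the rule amounts to approximating $\mu_\pi$.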
Author:
Meanti, Giacomo, Chatalic, Antoine, Kostic, Vladimir R., Novelli, Pietro, Pontil, Massimiliano, Rosasco, Lorenzo
The theory of Koopman operators allows one to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces can…
External link:
http://arxiv.org/abs/2306.04520
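For reference, standard (not paper-specific) formulations of the two estimators named above, with $\Phi_X$ and $\Phi_Y$ denoting feature matrices built from consecutive snapshots of the system, $r$ the rank, $\gamma$ a regularization parameter, $P_r$ the top-$r$ principal components of $\Phi_X$, and $\dagger$ the pseudoinverse:
\[
\widehat K_{\mathrm{RRR}} \in \operatorname*{arg\,min}_{\operatorname{rank}(K)\le r}
\|\Phi_Y-\Phi_X K\|_F^2+\gamma\|K\|_F^2,
\qquad
\widehat K_{\mathrm{PCR}} = P_r\,(\Phi_X P_r)^{\dagger}\,\Phi_Y .
\]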
M$^2$M: A general method to perform various data analysis tasks from a differentially private sketch
Author:
Houssiau, Florimond, Schellekens, Vincent, Chatalic, Antoine, Annamraju, Shreyas Kumar, de Montjoye, Yves-Alexandre
Differential privacy is the standard privacy definition for performing analyses over sensitive data. Yet, its privacy budget bounds the number of tasks an analyst can perform with reasonable accuracy, which makes it challenging to deploy in practice.
External link:
http://arxiv.org/abs/2211.14062
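To make the object under discussion concrete, here is a minimal, generic differentially private sketch built with the Gaussian mechanism. This is illustrative only, not the paper's M$^2$M method (which concerns what analyses can be extracted from such a sketch); the feature map and all sizes below are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    n, d, m = 1000, 5, 128
    X = rng.normal(size=(n, d))                  # sensitive dataset
    Omega = rng.normal(size=(d, m))              # random frequencies
    b = rng.uniform(0, 2 * np.pi, size=m)

    # Random Fourier features: ||phi(x)||_2 <= sqrt(2), so replacing one
    # record moves the mean sketch by at most 2*sqrt(2)/n in L2 norm.
    phi = np.sqrt(2.0 / m) * np.cos(X @ Omega + b)
    z = phi.mean(axis=0)                         # clean sketch

    # Gaussian mechanism calibrated to that L2 sensitivity.
    eps, delta = 1.0, 1e-5
    sens = 2 * np.sqrt(2) / n
    sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / eps
    z_private = z + rng.normal(scale=sigma, size=m)  # (eps, delta)-DP sketch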
Published in:
ICML 2022
Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We…
External link:
http://arxiv.org/abs/2201.13055
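For reference (standard definitions, not this paper's specific estimator), the kernel mean embedding of a distribution $P$ and its empirical counterpart are
\[
\mu_P=\int k(x,\cdot)\,\mathrm{d}P(x),
\qquad
\hat\mu_n=\frac{1}{n}\sum_{i=1}^{n}k(x_i,\cdot),
\]
and a Nyström-type approximation restricts the embedding to the span of $m\ll n$ landmark points $\tilde x_1,\dots,\tilde x_m$,
\[
\tilde\mu=\sum_{j=1}^{m}\alpha_j\,k(\tilde x_j,\cdot),
\]
so that storing or evaluating the embedding costs $O(m)$ rather than $O(n)$ kernel evaluations.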
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments. The learning task is then approximately solved as an inverse problem…
External link:
http://arxiv.org/abs/2110.10996
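In generic notation (a paraphrase of the setting, not the paper's own formulation), the sketch and the learning step take the form
\[
z=\frac{1}{n}\sum_{i=1}^{n}\Phi(x_i)\in\mathbb{C}^{m},
\qquad
\hat\theta\in\operatorname*{arg\,min}_{\theta}\big\|z-\mathcal{A}(\theta)\big\|_2,
\]
where $\Phi$ is a randomized feature map and $\mathcal{A}(\theta)$ is the sketch that a model with parameters $\theta$ would produce.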
Author:
Gribonval, Rémi, Chatalic, Antoine, Keriven, Nicolas, Schellekens, Vincent, Jacques, Laurent, Schniter, Philip
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first constructed…
External link:
http://arxiv.org/abs/2008.01839
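A toy version of the pipeline this article surveys (the feature map and dimensions are illustrative; practical sketches choose the frequency distribution carefully):

    import numpy as np

    rng = np.random.default_rng(2)

    d, m = 2, 64
    Omega = rng.normal(size=(d, m))  # random frequencies, shared by all parties

    def sketch(X):
        # Empirical complex exponential moments: one pass, O(m) memory.
        return np.exp(1j * X @ Omega).mean(axis=0)

    # Sketches merge by weighted averaging, so they can be computed in a
    # stream or across distributed data holders.
    X1, X2 = rng.normal(size=(300, d)), rng.normal(size=(700, d))
    z = (300 * sketch(X1) + 700 * sketch(X2)) / 1000
    assert np.allclose(z, sketch(np.vstack([X1, X2])))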
In sketched clustering, a dataset of $T$ samples is first sketched down to a vector of modest size, from which the centroids are subsequently extracted. Advantages include i) reduced storage complexity and ii) centroid extraction complexity independent of $T$…
External link:
http://arxiv.org/abs/1712.02849
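In generic form (not the paper's message-passing method), centroid extraction from a sketch $z$ is the moment-matching problem
\[
\min_{c_1,\dots,c_k,\;\alpha\ge 0}\Big\|z-\sum_{j=1}^{k}\alpha_j\,\Phi(c_j)\Big\|_2,
\]
whose cost scales with the sketch dimension and $k$, but not with the number of samples $T$.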
Experts do not always feel very comfortable when they have to give precise numerical estimations of certainty degrees. In this paper we present a qualitative approach which allows for attaching partially ordered symbolic grades to logical formulas.
External link:
http://arxiv.org/abs/1303.5395
Published in:
Journal of Artificial Intelligence Research, Volume 25, pages 269-314, 2006
In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this paper, we consider peer-to-peer inference systems in which the local theory of each…
External link:
http://arxiv.org/abs/1109.5716
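A toy rendering of this setting (an invented example, not the paper's actual algorithm): peers hold binary propositional clauses over their own vocabularies, acquaintances are peers with which variables are shared, and consequence finding hops across those links.

    # Invented toy example of a peer-to-peer inference system.
    peers = {
        "P1": [("-a", "b")],  # clause a -> b over P1's vocabulary {a, b}
        "P2": [("-b", "c")],  # clause b -> c; variable b is shared with P1
    }
    acquaintances = {"P1": {"P2": {"b"}}, "P2": {"P1": {"b"}}}

    def consequences(lit, peer, seen=None):
        # Naive recursive consequence finding (illustration only): resolve
        # locally first, then solicit acquaintances on shared variables.
        seen = set() if seen is None else seen
        out = set()
        for clause in peers[peer]:
            if "-" + lit in clause:
                out |= {l for l in clause if l != "-" + lit}
        for neighbour, shared in acquaintances[peer].items():
            for l in list(out):
                if l in shared and (neighbour, l) not in seen:
                    seen.add((neighbour, l))
                    out |= consequences(l, neighbour, seen)
        return out

    print(consequences("a", "P1"))  # {'b', 'c'}: c follows from a globally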