Showing 1 - 10 of 33
for search: '"Schellekens, Vincent"'
Random data sketching (or projection) is now a classical technique enabling, for instance, approximate numerical linear algebra and machine learning algorithms with reduced computational complexity and memory. In this context, the possibility of perf…
External link:
http://arxiv.org/abs/2307.14672
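The random data sketching mentioned in this abstract can be illustrated with a minimal NumPy example of a Gaussian random projection (a generic sketch of the technique, not code from the paper; all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 1000, 64            # n points in d dimensions, sketched down to k
X = rng.standard_normal((n, d))

# Gaussian random projection, scaled so squared norms are preserved in expectation
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P                          # (n, k) sketch: pairwise distances approximately preserved
```

By the Johnson-Lindenstrauss lemma, distances between the rows of `Y` approximate those between the rows of `X` with high probability, which is what enables approximate linear algebra on the much smaller sketch.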
Random data sketching (or projection) is now a classical technique enabling, for instance, approximate numerical linear algebra and machine learning algorithms with reduced computational complexity and memory. In this context, the possibility of perf…
External link:
http://arxiv.org/abs/2212.00660
M$^2$M: A general method to perform various data analysis tasks from a differentially private sketch
Author:
Houssiau, Florimond, Schellekens, Vincent, Chatalic, Antoine, Annamraju, Shreyas Kumar, de Montjoye, Yves-Alexandre
Differential privacy is the standard privacy definition for performing analyses over sensitive data. Yet, its privacy budget bounds the number of tasks an analyst can perform with reasonable accuracy, which makes it challenging to deploy in practice.
External link:
http://arxiv.org/abs/2211.14062
Rank-one projections (ROP) of matrices and quadratic random sketching of signals support several data processing and machine learning methods, as well as recent imaging applications, such as phase retrieval or optical processing units. In this paper, …
External link:
http://arxiv.org/abs/2205.08225
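The rank-one projection model named in this abstract can be sketched in a few lines of NumPy (a minimal illustration of the generic measurement model, not the paper's code): each measurement of a matrix X is the quadratic form a_i^T X a_i for a random vector a_i.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 32                        # matrix size and number of measurements
X = rng.standard_normal((d, d))
X = X @ X.T                         # symmetric test matrix

# Rank-one projections: y_i = a_i^T X a_i for random vectors a_i
A = rng.standard_normal((m, d))
y = np.einsum("id,de,ie->i", A, X, A)
```

Equivalently, y_i = <a_i a_i^T, X>, i.e. each measurement is the inner product of X with a rank-one matrix, which is where the name comes from.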
Author:
Schellekens, Vincent, Jacques, Laurent
The compressive learning framework reduces the computational cost of training on large-scale datasets. In a sketching phase, the data is first compressed to a lightweight sketch vector, obtained by mapping the data samples through a well-chosen featu…
External link:
http://arxiv.org/abs/2104.10061
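The sketching phase described here (mapping each sample through a feature map and averaging) can be illustrated with random Fourier features, a common feature-map choice in compressive learning; this is a generic sketch under that assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 2, 64               # n samples, dimension d, sketch size m
X = rng.standard_normal((n, d))

# Random Fourier feature sketch: z = (1/n) * sum_j exp(i * W x_j)
W = rng.standard_normal((m, d))     # random frequency vectors
z = np.exp(1j * X @ W.T).mean(axis=0)  # m complex numbers summarize all n samples
```

The sketch `z` has a fixed size independent of n, so learning (e.g. fitting centroids or a Gaussian mixture) can then be done from `z` alone without revisiting the dataset.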
Author:
Schellekens, Vincent, Jacques, Laurent
In compressive learning, a mixture model (a set of centroids or a Gaussian mixture) is learned from a sketch vector, that serves as a highly compressed representation of the dataset. This requires solving a non-convex optimization problem, hence in p…
External link:
http://arxiv.org/abs/2009.08273
Author:
Gribonval, Rémi, Chatalic, Antoine, Keriven, Nicolas, Schellekens, Vincent, Jacques, Laurent, Schniter, Philip
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first cons…
External link:
http://arxiv.org/abs/2008.01839
Author:
Schellekens, Vincent, Jacques, Laurent
Published in:
Information and Inference: A Journal of the IMA (2021)
Many signal processing and machine learning applications are built from evaluating a kernel on pairs of signals, e.g. to assess the similarity of an incoming query to a database of known signals. This nonlinear evaluation can be simplified to a linea…
External link:
http://arxiv.org/abs/2004.06560
Author:
Schellekens, Vincent, Jacques, Laurent
Generative networks implicitly approximate complex densities from their sampling with impressive accuracy. However, because of the enormous scale of modern datasets, this training process is often computationally expensive. We cast generative network…
External link:
http://arxiv.org/abs/2002.05095
Author:
Schellekens, Vincent, Jacques, Laurent
Compressive learning is a framework where (so far unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method, and a novel sketch function for images.
External link:
http://arxiv.org/abs/1812.01410