Showing 1 - 10 of 92 for search: '"Arlot, Sylvain"'
We study conformal prediction in the one-shot federated learning setting. The main goal is to compute marginally and training-conditionally valid prediction sets, at the server level, in only one round of communication between the agents and the server. […]
External link:
http://arxiv.org/abs/2405.12567
In this paper, we introduce a conformal prediction method to construct prediction sets in a one-shot federated learning setting. More specifically, we define a quantile-of-quantiles estimator and prove that for any distribution, it is possible to outp[…]
External link:
http://arxiv.org/abs/2302.06322
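The quantile-of-quantiles idea described in the snippet above can be sketched in a few lines. This is a toy illustration only: the quantile levels below are placeholders, not the calibrated orders the paper derives, and the score construction is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantile_of_quantiles(scores_per_agent, local_level, server_level):
    """One round of communication: each agent sends a single local empirical
    quantile of its conformal scores; the server then takes a quantile of
    those agent quantiles to calibrate a prediction-set threshold.
    (Levels here are illustrative, not the paper's calibrated orders.)"""
    local_q = [np.quantile(s, local_level) for s in scores_per_agent]
    return np.quantile(local_q, server_level)

# toy conformity scores (absolute residuals) held by 10 agents
agents = [np.abs(rng.normal(size=100)) for _ in range(10)]
threshold = quantile_of_quantiles(agents, local_level=0.9, server_level=0.9)
# a prediction set for a new input x would collect all y with score(x, y) <= threshold
```

Note that only one scalar per agent crosses the network, which is what makes the procedure one-shot.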
Identifying the relevant variables for a classification model with correct confidence levels is a central but difficult task in high dimensions. Despite the core role of sparse logistic regression in statistics and machine learning, it still lacks a g[…]
External link:
http://arxiv.org/abs/2205.14613
Greedy algorithms for feature selection are widely used for recovering sparse high-dimensional vectors in linear models. In classical procedures, the main emphasis was put on the sample complexity, with little or no consideration of the computation r[…]
External link:
http://arxiv.org/abs/2011.11117
We develop an extension of the Knockoff Inference procedure, introduced by Barber and Candès (2015). This new method, called Aggregation of Multiple Knockoffs (AKO), addresses the instability inherent to the random nature of Knockoff-based inference. […]
External link:
http://arxiv.org/abs/2002.09269
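The aggregation step that stabilizes multiple knockoff draws can be sketched as follows. This assumes AKO's combination rule is the quantile aggregation of Meinshausen, Meier & Bühlmann (2009) applied to per-variable intermediate p-values; the knockoff draws themselves are simulated here, not computed.

```python
import numpy as np

def quantile_aggregate(pvals, gamma=0.3):
    """Quantile aggregation of per-variable p-values across B independent
    knockoff draws (one row per draw): take the empirical gamma-quantile
    over draws, rescale by 1/gamma, and cap at 1. Averaging over draws in
    this way damps the randomness of any single knockoff realization."""
    q = np.quantile(np.asarray(pvals), gamma, axis=0)
    return np.minimum(1.0, q / gamma)

# 25 simulated knockoff draws, 4 variables:
# variable 0 looks consistently significant, the rest are noise
rng = np.random.default_rng(0)
pvals = rng.uniform(size=(25, 4))
pvals[:, 0] = 0.001
agg = quantile_aggregate(pvals)
```

The cap at 1 keeps the output a valid p-value; the 1/gamma rescaling is the price paid for taking a quantile instead of the maximum.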
Author:
Arlot, Sylvain
Published in:
Journal de la Société Française de Statistique, Société Française de Statistique et Société Mathématique de France, Vol. 160, No. 3, pp. 158-168, 2019
This text is the rejoinder following the discussion of a survey paper about minimal penalties and the slope heuristics (Arlot, 2019. Minimal penalties and the slope heuristics: a survey. Journal de la SFDS). While commenting on the remarks made by the […]
External link:
http://arxiv.org/abs/1909.13499
Aggregated hold-out (Agghoo) is a method which averages learning rules selected by hold-out (that is, cross-validation with a single split). We provide the first theoretical guarantees on Agghoo, ensuring that it can be used safely: Agghoo performs a[…]
External link:
http://arxiv.org/abs/1909.04890
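The Agghoo procedure described above (select by hold-out on each split, then average the selected rules) can be sketched on a toy family of ridge regressors. The regression family, split counts, and closed-form ridge fit are illustrative choices, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def agghoo(X, y, lambdas, n_splits=5, train_frac=0.8):
    """Aggregated hold-out: for each random split, pick the rule with the
    best hold-out risk (trained on the split's training half), then average
    the selected predictors across splits. For linear predictors, averaging
    coefficients is the same as averaging predictions."""
    n = len(y)
    coefs = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        m = int(train_frac * n)
        tr, va = perm[:m], perm[m:]
        fits = [ridge_fit(X[tr], y[tr], lam) for lam in lambdas]
        risks = [np.mean((X[va] @ w - y[va]) ** 2) for w in fits]
        coefs.append(fits[int(np.argmin(risks))])   # hold-out selection
    return np.mean(coefs, axis=0)                   # aggregation step

# toy data: sparse-free linear model with light noise
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
w_hat = agghoo(X, y, lambdas=[0.01, 0.1, 1.0, 10.0])
```

With `n_splits=1` this reduces to plain hold-out selection; the averaging over several splits is what relates Agghoo to bagging.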
Author:
Arlot, Sylvain
Published in:
Journal de la Société Française de Statistique, Société Française de Statistique et Société Mathématique de France, 2019, Minimal penalties and the slope heuristics: a survey, 160 (3), pp. 1-106
Birgé and Massart proposed in 2001 the slope heuristics as a way to choose optimally from data an unknown multiplicative constant in front of a penalty. It is built upon the notion of minimal penalty, and it has been generalized since to some "mi[…]
External link:
http://arxiv.org/abs/1901.07277
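The "minimal penalty, then multiply by two" rule behind the slope heuristics can be sketched with the dimension-jump algorithm: sweep the penalty constant, watch the complexity of the selected model collapse, and double the constant at the jump. The model collection and risk curve below are a synthetic illustration, not the survey's setting.

```python
import numpy as np

def dimension_jump(emp_risk, complexity, kappas):
    """Dimension-jump flavour of the slope heuristics: for each candidate
    constant kappa, record the complexity of the model minimizing
    emp_risk + kappa * complexity; locate the largest jump in selected
    complexity; return twice the kappa at which it occurs."""
    selected = np.array(
        [complexity[np.argmin(emp_risk + k * complexity)] for k in kappas]
    )
    jump = np.argmax(np.abs(np.diff(selected)))
    return 2.0 * kappas[jump + 1]

# toy collection: empirical risk drops fast up to the "true" complexity 5,
# then keeps creeping down slowly (overfitting), which is what makes the
# minimal penalty visible as a sharp jump in the selected complexity
complexity = np.arange(1, 51, dtype=float)
emp_risk = np.where(complexity <= 5,
                    1.0 - 0.19 * complexity,
                    0.05 - 0.001 * (complexity - 5))
kappa_hat = dimension_jump(emp_risk, complexity,
                           kappas=np.linspace(1e-4, 0.05, 500))
```

Below the minimal constant the largest model wins; just above it the selection snaps back to complexity 5, and doubling that crossover constant gives the data-driven penalty.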
Published in:
Nonlinearity, Volume 32 (2019), Number 7, 2564-2592
We propose a new model for the time evolution of livestock commodities which exhibits endogenous deterministic stochastic behaviour. The model is based on the Yoccoz-Birkeland integral equation, a model first developed for studying the time-evolution[…]
External link:
http://arxiv.org/abs/1803.05404
Cross-validation is widely used for selecting among a family of learning rules. This paper studies a related method, called aggregated hold-out (Agghoo), which mixes cross-validation with aggregation; Agghoo can also be related to bagging. According[…]
External link:
http://arxiv.org/abs/1709.03702