Showing 1 - 10 of 5,094 results for search: '"Pierre, C"'
Author:
Tan, Kai, Bellec, Pierre C.
This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the
External link:
http://arxiv.org/abs/2410.02629
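A minimal sketch of the kind of proximal iterate sequence this refers to, assuming a Huber loss with an L1 penalty and a fixed step size (none of which are fixed by the snippet):

import numpy as np

def soft_threshold(b, t):
    # proximal operator of t * ||.||_1
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def proximal_gd_huber(X, y, lam=0.1, delta=1.0, n_iter=200):
    # proximal gradient descent on (1/n) * sum_i huber(y_i - x_i'b) + lam * ||b||_1
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    b = np.zeros(p)
    iterates = []
    for _ in range(n_iter):
        psi = np.clip(y - X @ b, -delta, delta)  # derivative of the Huber loss at the residuals
        b = soft_threshold(b + step * X.T @ psi / n, step * lam)
        iterates.append(b.copy())
    return iterates  # the sequence of iterates b^1, ..., b^T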
We characterize the squared prediction risk of ensemble estimators obtained through subagging (subsample bootstrap aggregating) regularized M-estimators and construct a consistent estimator for the risk. Specifically, we consider a heterogeneous coll
External link:
http://arxiv.org/abs/2409.15252
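A minimal sketch of subagging, with ridge regression standing in for the regularized M-estimators of the abstract; the subsample size k, number of bags M, and penalty level below are illustrative choices:

import numpy as np
from sklearn.linear_model import Ridge

def subagged_ridge(X, y, k, M=20, alpha=1.0, seed=0):
    # fit ridge on M subsamples of size k drawn without replacement, then average
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coefs = []
    for _ in range(M):
        idx = rng.choice(n, size=k, replace=False)
        coefs.append(Ridge(alpha=alpha, fit_intercept=False).fit(X[idx], y[idx]).coef_)
    return np.mean(coefs, axis=0)  # the subagged (ensemble) estimator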
Author:
Bellec, Pierre C., Tan, Kai
This paper investigates the iterates $\hat{b}^1,\dots,\hat{b}^T$ obtained from iterative algorithms in high-dimensional linear regression problems, in the regime where the feature dimension $p$ is comparable with the sample size $n$, i.e., $p \asymp n$. Th
External link:
http://arxiv.org/abs/2404.17856
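A generic instance of such iterates, assuming plain gradient descent on the least-squares objective with step size $\eta$ (the snippet does not fix the algorithm or the loss), is
$$\hat{b}^{t+1} = \hat{b}^{t} + \frac{\eta}{n} X^\top\bigl(y - X\hat{b}^{t}\bigr), \qquad t = 0, 1, \dots, T-1, \qquad \hat{b}^{0} = 0.$$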
Author:
Bellec, Pierre C., Koriyama, Takuya
This paper studies the asymptotics of resampling without replacement in the proportional regime where dimension $p$ and sample size $n$ are of the same order. For a given dataset $(X,y)\in \mathbb{R}^{n\times p}\times \mathbb{R}^n$ and fixed subsampl
External link:
http://arxiv.org/abs/2404.02070
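A minimal sketch of resampling without replacement at a fixed subsample fraction, with plain least squares standing in for the refitted estimator; the fraction and the number of resamples are illustrative:

import numpy as np

def subsample_fits(X, y, frac=0.7, M=50, seed=0):
    # refit the estimator on M subsamples, each keeping a fixed fraction of the n rows
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    k = int(frac * n)
    fits = []
    for _ in range(M):
        idx = rng.choice(n, size=k, replace=False)
        bhat, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        fits.append(bhat)
    fits = np.stack(fits)
    return fits, fits.std(axis=0)  # all subsample estimates and their per-coordinate spread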
Author:
Bellec, Pierre C., Koriyama, Takuya
We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size n and the number of features p are both increasing such that $p/n \to \gamma\in
External link:
http://arxiv.org/abs/2312.13257
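A minimal sketch of an unregularized Huber M-estimator under Gaussian design, with Student-t noise standing in for the heavy-tailed noise of the abstract; the threshold, dimensions, and noise law are illustrative:

import numpy as np
from scipy.optimize import minimize

def huber_mestimator(X, y, delta=1.345):
    # minimize sum_i huber(y_i - x_i'b) with no penalty term
    def obj(b):
        r = y - X @ b
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta)).sum()
    def grad(b):
        return -X.T @ np.clip(y - X @ b, -delta, delta)
    return minimize(obj, np.zeros(X.shape[1]), jac=grad, method="L-BFGS-B").x

# proportional regime p/n = 0.5 with heavy-tailed (Student-t) noise
rng = np.random.default_rng(0)
n, p = 400, 200
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + rng.standard_t(df=2, size=n)
bhat = huber_mestimator(X, y)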
Author:
Bellec, Pierre C., Koriyama, Takuya
Major progress has been made in the previous decade to characterize the asymptotic behavior of regularized M-estimators in high-dimensional regression problems in the proportional asymptotic regime where the sample size $n$ and the number of features
External link:
http://arxiv.org/abs/2312.13254
Generalized cross-validation (GCV) is a widely-used method for estimating the squared out-of-sample prediction risk that employs a scalar degrees of freedom adjustment (in a multiplicative sense) to the squared training error. In this paper, we exami
External link:
http://arxiv.org/abs/2310.01374
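A minimal sketch of generalized cross-validation for ridge regression: the squared training error is inflated by the multiplicative factor $(1 - \mathrm{df}/n)^{-2}$, where $\mathrm{df}$ is the trace of the hat matrix; the penalty level below is an illustrative choice:

import numpy as np

def gcv_ridge(X, y, alpha=1.0):
    # GCV estimate of the out-of-sample risk for ridge regression with penalty alpha
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T)  # hat matrix
    df = np.trace(H)                                           # scalar degrees of freedom
    train_err = np.mean((y - H @ y) ** 2)                      # squared training error
    return train_err / (1.0 - df / n) ** 2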