Showing 1 - 10 of 3,306 results for search: '"minimax risk"'
Sparse linear regression is one of the classical and extensively studied problems in high-dimensional statistics and compressed sensing. Despite the substantial body of literature dedicated to this problem, the precise determination of its minimax risk …
External link:
http://arxiv.org/abs/2405.05344
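For orientation, the object such papers study is usually the minimax risk over $k$-sparse coefficient vectors; the formulation below is the standard textbook one and is not taken from the paper above:

\[
R^{*}(k, d, n) \;=\; \inf_{\hat\beta}\; \sup_{\|\beta\|_{0}\le k} \mathbb{E}\,\|\hat\beta - \beta\|_{2}^{2},
\qquad y = X\beta + \varepsilon,\quad \varepsilon \sim \mathcal{N}(0, \sigma^{2} I_{n}),
\]

which, under suitable conditions on the design $X$, is known to scale like $\sigma^{2} k \log(d/k)/n$; the "precise determination" mentioned in the abstract presumably concerns characterizations sharper than this rate.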
High-dimensional data is common in many areas, such as health care and genomics, where the number of features can be in the tens of thousands. In such scenarios, the large number of features often leads to inefficient learning. Constraint generation methods …
External link:
http://arxiv.org/abs/2306.06649
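As a generic illustration only (this is not the method of the listed paper), constraint generation handles a problem with very many constraints by enforcing only a small, growing subset of them. A minimal sketch for a linear program, assuming SciPy is available:

import numpy as np
from scipy.optimize import linprog

def constraint_generation_lp(c, A, b, tol=1e-8, max_iter=100):
    # Solve min c^T x subject to A x <= b and 0 <= x <= 1 by constraint
    # generation: start with no rows of A enforced, solve the relaxed LP,
    # then add the single most violated row and repeat until feasible.
    m, n = A.shape
    active = []  # indices of the rows of A currently enforced
    for _ in range(max_iter):
        res = linprog(c,
                      A_ub=A[active] if active else None,
                      b_ub=b[active] if active else None,
                      bounds=[(0, 1)] * n, method="highs")
        x = res.x
        violation = A @ x - b            # positive entries are violated rows
        worst = int(np.argmax(violation))
        if violation[worst] <= tol:
            return x, active             # feasible for the full problem
        active.append(worst)
    raise RuntimeError("constraint generation did not converge")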
Author:
Kipnis, Alon
We study the problem of testing the goodness of fit of occurrences of items from many categories to an identical Poisson distribution over the categories. As a class of alternative hypotheses, we consider the removal of an $\ell_p$ ball, $p \leq 2$, …
External link:
http://arxiv.org/abs/2305.18111
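Schematically (in notation of my own, not the paper's), this type of testing problem can be written as: observe independent counts $N_i \sim \mathrm{Poisson}(\lambda_i)$ over $d$ categories and test

\[
H_0:\ \lambda_1 = \cdots = \lambda_d = \lambda_0
\qquad\text{against}\qquad
H_1:\ \Big(\sum_{i=1}^{d} |\lambda_i - \lambda_0|^{p}\Big)^{1/p} \ge \epsilon,
\]

so that the alternative is the null model with an $\ell_p$ ball of radius $\epsilon$ removed around it.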
Academic article
This result cannot be displayed to users who are not logged in; log in to view it.
Author:
Basu, A. K.
Published in:
Lecture Notes-Monograph Series, 2006 Jan 01. 49, 312-321.
Externí odkaz:
https://www.jstor.org/stable/4356405
Author:
Klemelä, Jussi
Published in:
Scandinavian Journal of Statistics, 1999 Sep 01. 26(3), 465-473.
Externí odkaz:
https://www.jstor.org/stable/4616568
Theoretical advances in the properties of scoring rules over the past decades have broadened their use in probabilistic forecasting. In meteorological forecasting, statistical postprocessing techniques are essential to improve the …
External link:
http://arxiv.org/abs/2205.04360
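A standard example of a proper scoring rule used in meteorological postprocessing is the continuous ranked probability score (CRPS); the sketch below is a generic NumPy implementation for an ensemble forecast and is not code from the paper above.

import numpy as np

def crps_ensemble(ensemble, obs):
    # CRPS of an ensemble forecast against a scalar observation (lower is
    # better), via the identity CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|,
    # where X, X' are independent draws from the forecast distribution F.
    ens = np.asarray(ensemble, dtype=float)
    spread = np.abs(ens[:, None] - ens[None, :])
    return np.mean(np.abs(ens - obs)) - 0.5 * spread.mean()

# Example: a five-member temperature ensemble versus an observed 21.0 degrees.
print(crps_ensemble([20.1, 21.4, 19.8, 22.0, 20.7], 21.0))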
Author:
Chen, Jeesen
Published in:
The Canadian Journal of Statistics / La Revue Canadienne de Statistique, 1997 Dec 01. 25(4), 545-558.
External link:
https://www.jstor.org/stable/3315347
Supervised classification techniques use training samples to learn a classification rule with small expected 0-1 loss (error probability). Conventional methods enable tractable learning and provide out-of-sample generalization by using surrogate loss …
External link:
http://arxiv.org/abs/2201.06487
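To make the surrogate-loss idea concrete (a generic illustration, not the method proposed in the paper above): the 0-1 loss is a non-convex function of the margin $m = y\,f(x)$, and conventional learners instead minimize a convex upper bound on it, such as the hinge loss.

import numpy as np

def zero_one_loss(margin):
    # 1 if the example is misclassified (margin m = y * f(x) <= 0), else 0.
    return (np.asarray(margin) <= 0).astype(float)

def hinge_loss(margin):
    # Convex surrogate max(0, 1 - m); it upper-bounds the 0-1 loss,
    # which makes empirical risk minimization tractable.
    return np.maximum(0.0, 1.0 - np.asarray(margin))

margins = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(zero_one_loss(margins))  # [1. 1. 1. 0. 0.]
print(hinge_loss(margins))     # [3.  1.5 1.  0.5 0. ]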
We derive a risk lower bound for estimating the threshold parameter without knowing whether the threshold regression model is continuous or not. The bound goes to zero, as the sample size $n$ grows, only at the cube-root rate. Motivated by this finding, …
External link:
http://arxiv.org/abs/2203.00349
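For orientation (notation of my own, not the paper's), a cube-root minimax lower bound for the threshold parameter $\gamma$ takes the form

\[
\inf_{\hat\gamma}\ \sup_{P \in \mathcal{P}}\ \mathbb{E}_{P}\big|\hat\gamma - \gamma(P)\big| \;\ge\; c\, n^{-1/3}
\]

for some constant $c > 0$, i.e. no estimator can converge uniformly over the model class $\mathcal{P}$ faster than $n^{-1/3}$ when it is unknown whether the regression function jumps at the threshold.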