Showing 1 - 10 of 119 for search: '"Kelner, Jonathan A."'
We give a new algorithm for learning mixtures of $k$ Gaussians (with identity covariance in $\mathbb{R}^n$) to TV error $\varepsilon$, with quasi-polynomial ($O(n^{\mathrm{poly}\log\left(\frac{n+k}{\varepsilon}\right)})$) time and sample complexity, …
External link:
http://arxiv.org/abs/2404.18869
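A minimal sketch of the statistical model in the entry above: samples drawn from a mixture of $k$ identity-covariance Gaussians in $\mathbb{R}^n$, fitted with plain EM (scikit-learn's GaussianMixture) as an illustrative baseline. The quasi-polynomial-time algorithm from the paper is not reproduced here, and the dimensions and sample sizes below are assumed for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n, k, samples_per_component = 10, 3, 500      # assumed dimension, number of components, sample size
means = rng.normal(scale=5.0, size=(k, n))    # unknown component means

# Each sample is its component mean plus standard Gaussian noise (identity covariance).
labels = rng.integers(k, size=k * samples_per_component)
X = means[labels] + rng.standard_normal((k * samples_per_component, n))

# Illustrative baseline estimator: EM with spherical covariances (not the paper's algorithm).
gmm = GaussianMixture(n_components=k, covariance_type="spherical", random_state=0).fit(X)
print("estimated component means:\n", gmm.means_)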
It is well-known that the statistical performance of Lasso can suffer significantly when the covariates of interest have strong correlations. In particular, the prediction error of Lasso becomes much worse than computationally inefficient alternative…
External link:
http://arxiv.org/abs/2402.15409
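A small sketch of the setting described above, with an assumed equicorrelated Gaussian design (off-diagonal correlation $\rho = 0.95$). It only illustrates the regime in which Lasso's prediction error can degrade, not the paper's analysis or its alternatives; all sizes are assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_samples, d, rho = 200, 50, 0.95                 # assumed sizes and correlation level
# Equicorrelated design: Sigma has 1 on the diagonal and rho off the diagonal.
Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n_samples)

beta = np.zeros(d)
beta[:5] = 1.0                                    # sparse ground truth
y = X @ beta + 0.5 * rng.standard_normal(n_samples)

lasso = Lasso(alpha=0.1).fit(X, y)
print("in-sample prediction MSE:", np.mean((lasso.predict(X) - y) ** 2))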
We give a new framework for solving the fundamental problem of low-rank matrix completion, i.e., approximating a rank-$r$ matrix $\mathbf{M} \in \mathbb{R}^{m \times n}$ (where $m \ge n$) from random observations. First, we provide an algorithm which …
External link:
http://arxiv.org/abs/2308.03661
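To make the matrix-completion setup concrete, here is a plain alternating-least-squares baseline for recovering a rank-$r$ matrix from Bernoulli-sampled entries. This is a standard illustrative method, not the framework proposed in the paper, and every dimension below is an assumption.

import numpy as np

rng = np.random.default_rng(2)
m, n, r, p_obs = 60, 40, 3, 0.3                                 # assumed sizes, rank, observation probability
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # ground-truth rank-r matrix
mask = rng.random((m, n)) < p_obs                               # each entry observed independently w.p. p_obs

U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
for _ in range(50):
    # Alternately solve row-wise least-squares problems restricted to observed entries.
    for i in range(m):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
    for j in range(n):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]

err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print("relative reconstruction error:", err)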
Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian $N(0,\Sigma)$, and we seek an estimator with small excess risk. …
External link:
http://arxiv.org/abs/2305.16892
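The excess risk referred to above has a simple closed form under the $N(0,\Sigma)$ design: for an estimator $\hat\beta$ and true coefficients $\beta^*$, it equals $(\hat\beta - \beta^*)^\top \Sigma (\hat\beta - \beta^*)$. A short sketch computing this for a Lasso fit, with an assumed AR(1)-style covariance; the numbers are illustrative only.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_samples, d = 300, 100                                        # assumed sizes
idx = np.arange(d)
Sigma = 0.5 ** np.abs(np.subtract.outer(idx, idx))             # AR(1)-style covariance (assumed)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n_samples)

beta_star = np.zeros(d)
beta_star[:4] = 2.0                                            # sparse ground truth
y = X @ beta_star + rng.standard_normal(n_samples)

beta_hat = Lasso(alpha=0.05).fit(X, y).coef_
delta = beta_hat - beta_star
print("excess risk (delta^T Sigma delta):", delta @ Sigma @ delta)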
We analyze Riemannian Hamiltonian Monte Carlo (RHMC) for sampling a polytope defined by $m$ inequalities in $\mathbb{R}^n$ endowed with the metric defined by the Hessian of a convex barrier function. The advantage of RHMC over Euclidean methods such as the …
External link:
http://arxiv.org/abs/2303.00480
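A sketch of the geometry involved: for a polytope $\{x : Ax \le b\}$, the logarithmic barrier $\varphi(x) = -\sum_i \log(b_i - a_i^\top x)$ has Hessian $A^\top S(x)^{-2} A$ with $S(x) = \mathrm{diag}(b - Ax)$, and a Hessian of this kind is the position-dependent metric RHMC moves under. The sampler itself is not implemented below; the polytope is a toy example.

import numpy as np

def log_barrier_hessian(A: np.ndarray, b: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Hessian of the log-barrier at an interior point x (the local Riemannian metric)."""
    slack = b - A @ x                      # strictly positive inside the polytope
    assert np.all(slack > 0), "x must be strictly inside the polytope"
    return A.T @ np.diag(1.0 / slack**2) @ A

# Toy example: the unit box [-1, 1]^2 written as 4 inequalities.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(log_barrier_hessian(A, b, np.array([0.0, 0.5])))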
Sparse recovery is one of the most fundamental and well-studied inverse problems. Standard statistical formulations of the problem are provably solved by general convex programming techniques and more practical, fast (nearly-linear time) iterative methods…
External link:
http://arxiv.org/abs/2203.04002
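One example of the practical, fast iterative methods alluded to above is iterative soft-thresholding (ISTA) for the $\ell_1$-penalized least-squares formulation $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$. The sketch below is a generic textbook version, not the algorithm from the paper, and the problem sizes and $\lambda$ are assumed.

import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative soft-thresholding: each step costs one pair of matrix-vector products."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / ||A||_op^2 keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                      # gradient of the least-squares term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold (prox of lam*||.||_1)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200)) / np.sqrt(80)      # assumed compressed-sensing-style design
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0       # 5-sparse ground truth
y = A @ x_true
print("recovered support:", np.nonzero(np.abs(ista(A, y, lam=0.01)) > 0.1)[0])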
Sparse linear regression with ill-conditioned Gaussian random designs is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief, even in the form of examples that are hard for restricted…
External link:
http://arxiv.org/abs/2203.02824
Author:
Kelner, Jonathan, Marsden, Annie, Sharan, Vatsal, Sidford, Aaron, Valiant, Gregory, Yuan, Honglin
We provide new gradient-based methods for efficiently solving a broad class of ill-conditioned optimization problems. We consider the problem of minimizing a function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ which is implicitly decomposable as the sum…
External link:
http://arxiv.org/abs/2111.03137
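To illustrate what ill-conditioning costs a plain first-order method (and hence what such methods aim to circumvent), the sketch below runs gradient descent on a two-dimensional quadratic $f(x) = \tfrac{1}{2} x^\top D x$ with condition number $\kappa$; the iteration count grows roughly like $\kappa \log(1/\varepsilon)$. This is generic background, not the paper's method, and $\kappa$ is an assumed value.

import numpy as np

kappa = 100.0                                   # assumed condition number
D = np.diag([1.0, kappa])                       # eigenvalues 1 and kappa
x = np.array([1.0, 1.0])
step = 1.0 / kappa                              # ~1/L, the largest safe step for smoothness L = kappa
steps = 0
while np.linalg.norm(x) > 1e-6:
    x = x - step * (D @ x)                      # gradient of 0.5 * x^T D x is D x
    steps += 1
print(f"kappa = {kappa:.0f}: needed {steps} gradient steps to reach ||x|| < 1e-6")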
Sparse linear regression is a fundamental problem in high-dimensional statistics, but strikingly little is known about how to efficiently solve it without restrictive conditions on the design matrix. We consider the (correlated) random design setting…
External link:
http://arxiv.org/abs/2106.09207
Author:
Ahmadinejad, AmirMahdi, Kelner, Jonathan, Murtagh, Jack, Peebles, John, Sidford, Aaron, Vadhan, Salil
We provide a deterministic $\tilde{O}(\log N)$-space algorithm for estimating random walk probabilities on undirected graphs, and more generally Eulerian directed graphs, to within inverse polynomial additive error ($\epsilon = 1/\mathrm{poly}(N)$) …
External link:
http://arxiv.org/abs/1912.04524
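The quantity being estimated above is the distribution of a $t$-step random walk. The sketch below computes it by straightforward matrix-vector products on a small undirected graph, which illustrates the object but not the paper's contribution (the point of the paper is achieving this deterministically in $\tilde{O}(\log N)$ space); the graph is an assumed toy example.

import numpy as np

# Small undirected graph given by its adjacency matrix (a 4-cycle).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)            # row-stochastic random walk matrix

p = np.zeros(4)
p[0] = 1.0                                      # walk starts at vertex 0
for _ in range(6):                              # six steps of the walk
    p = p @ W                                   # push the distribution one step forward
print("6-step walk probabilities from vertex 0:", p)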