Showing 1 - 10 of 157 for the search: '"Meka, Raghu"'
Author:
Chandrasekaran, Gautam, Klivans, Adam, Kontonis, Vasilis, Meka, Raghu, Stavropoulos, Konstantinos
In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) with the best fitting …
External link:
http://arxiv.org/abs/2407.00966
We study the differentially private (DP) empirical risk minimization (ERM) problem under the semi-sensitive DP setting, where only some features are sensitive. This generalizes the Label DP setting, where only the label is sensitive. We give improved …
External link:
http://arxiv.org/abs/2406.19040
A core component present in many successful neural network architectures is an MLP block of two fully connected layers with a non-linear activation in between. An intriguing phenomenon observed empirically, including in transformer architectures, is …
External link:
http://arxiv.org/abs/2406.17989
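The two-layer MLP block described in that abstract can be sketched in a few lines of NumPy; the dimensions and the ReLU activation below are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

def mlp_block(x, W1, b1, W2, b2):
    # Two fully connected layers with a pointwise non-linearity in between,
    # as in the MLP sub-block of a transformer layer.
    h = x @ W1 + b1            # expand: d -> d_ff
    h = np.maximum(h, 0.0)     # ReLU (GELU is also common; the choice is illustrative)
    return h @ W2 + b2         # project back: d_ff -> d

rng = np.random.default_rng(0)
d, d_ff = 8, 32                # d_ff is typically ~4x the model width
W1 = rng.normal(scale=d ** -0.5, size=(d, d_ff))
W2 = rng.normal(scale=d_ff ** -0.5, size=(d_ff, d))
b1, b2 = np.zeros(d_ff), np.zeros(d)

x = rng.normal(size=(5, d))    # a batch of 5 token embeddings
y = mlp_block(x, W1, b1, W2, b2)
print(y.shape)                 # (5, 8)
```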
It is well known that the statistical performance of Lasso can suffer significantly when the covariates of interest have strong correlations. In particular, the prediction error of Lasso becomes much worse than that of computationally inefficient alternatives …
External link:
http://arxiv.org/abs/2402.15409
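As background for the Lasso discussion above, a minimal coordinate-descent solver with soft-thresholding updates (the textbook algorithm, not the estimator proposed in the paper) can be sketched as follows; the benign i.i.d. design here is an illustrative assumption, whereas the paper concerns correlated designs where Lasso's prediction error degrades.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by cyclic coordinate descent.
    n, d = X.shape
    b = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ b + X[:, j] * b[j]       # residual excluding coordinate j
            b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))                      # benign i.i.d. design
b_true = np.zeros(d)
b_true[:2] = [2.0, -1.0]
y = X @ b_true + 0.01 * rng.normal(size=n)
b_hat = lasso_cd(X, y, lam=0.05)
print(np.round(b_hat, 2))                        # approximately sparse, near b_true
```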
We revisit the fundamental Boolean Matrix Multiplication (BMM) problem. With the invention of algebraic fast matrix multiplication over 50 years ago, it also became known that BMM can be solved in truly subcubic $O(n^\omega)$ time, where $\omega < 3$; …
External link:
http://arxiv.org/abs/2311.09095
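The classical reduction behind that $O(n^\omega)$ bound is simple: compute the product over the integers and threshold, so BMM inherits the cost of fast algebraic matrix multiplication. A minimal NumPy sketch (illustrative only, not the paper's algorithm):

```python
import numpy as np

def bmm(A, B):
    # Boolean product: C[i, j] = OR_k (A[i, k] AND B[k, j]).
    # Computing the integer product and thresholding lets BMM inherit the
    # O(n^omega) cost of fast algebraic matrix multiplication.
    return (A.astype(np.int64) @ B.astype(np.int64)) > 0

A = np.array([[1, 0], [0, 1]], dtype=bool)
B = np.array([[0, 1], [1, 0]], dtype=bool)
print(bmm(A, B).astype(int))
# [[0 1]
#  [1 0]]
```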
Deep networks typically learn concepts via classifiers, which involves setting up a model and training it via gradient descent to fit the concept-labeled data. We will argue instead that learning a concept could be done by looking at its moment statistics …
External link:
http://arxiv.org/abs/2310.12143
Previous work on user-level differential privacy (DP) [Ghazi et al., NeurIPS 2021; Bun et al., STOC 2023] obtained generic algorithms that work for various learning tasks. However, their focus was on the example-rich regime, where the users have so many …
External link:
http://arxiv.org/abs/2309.12500
We study the power of randomness in the Number-on-Forehead (NOF) model of communication complexity. We construct an explicit 3-player function $f:[N]^3 \to \{0,1\}$ such that: (i) there exists a randomized NOF protocol computing it that sends a constant …
External link:
http://arxiv.org/abs/2308.12451
Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian $N(0,\Sigma)$, and we seek an estimator with small excess risk. …
External link:
http://arxiv.org/abs/2305.16892
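The correlated random design in that setting can be simulated by pushing i.i.d. Gaussians through a Cholesky factor of $\Sigma$; the AR(1) covariance below is an illustrative assumption, not the covariance class from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho = 4, 0.9
# AR(1) covariance: Sigma[i, j] = rho^|i - j| (strongly correlated neighbours).
Sigma = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
L = np.linalg.cholesky(Sigma)
X = rng.normal(size=(1000, d)) @ L.T   # each row ~ N(0, Sigma)
print(np.round(np.cov(X, rowvar=False), 1))
```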
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees. The convergence rates of this mechanism are similar to those in the prior work of Levy et al. (2021) and Narayanan et al. (2022), but …
External link:
http://arxiv.org/abs/2305.04912