Showing 1 - 10 of 135
for search: '"Klivans, Adam"'
Author:
Chandrasekaran, Gautam, Klivans, Adam
We consider the fundamental problem of learning the parameters of an undirected graphical model or Markov Random Field (MRF) in the setting where the edge weights are chosen at random. For Ising models, we show that a multiplicative-weight update algorithm…
External link:
http://arxiv.org/abs/2411.11174
The seminal work of Linial, Mansour, and Nisan gave a quasipolynomial-time algorithm for learning constant-depth circuits ($\mathsf{AC}^0$) with respect to the uniform distribution on the hypercube. Extending their algorithm to the setting of malicious…
External link:
http://arxiv.org/abs/2411.03570
Author:
Chandrasekaran, Gautam, Klivans, Adam, Kontonis, Vasilis, Meka, Raghu, Stavropoulos, Konstantinos
In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) with the best fitting…
External link:
http://arxiv.org/abs/2407.00966
Author:
Chandrasekaran, Gautam, Klivans, Adam R., Kontonis, Vasilis, Stavropoulos, Konstantinos, Vasilyan, Arsen
A fundamental notion of distance between train and test distributions from the field of domain adaptation is discrepancy distance. While in general hard to compute, here we provide the first set of provably efficient algorithms for testing localized…
External link:
http://arxiv.org/abs/2406.09373
Recent work of Klivans, Stavropoulos, and Vasilyan initiated the study of testable learning with distribution shift (TDS learning), where a learner is given labeled samples from training distribution $\mathcal{D}$, unlabeled samples from test distribution…
External link:
http://arxiv.org/abs/2404.02364
We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from training distribution $D$, unlabeled samples from test distribution $D'$ and is asked to output a classifier with low test error.
External link:
http://arxiv.org/abs/2311.15142
Stabilizing proteins is a foundational step in protein engineering. However, the evolutionary pressure on all extant proteins makes identifying the scarce number of mutations that will improve thermodynamic stability challenging. Deep learning has…
External link:
http://arxiv.org/abs/2310.12979
Recent works have shown that diffusion models can learn essentially any distribution provided one can perform score estimation. Yet it remains poorly understood under what settings score estimation is possible, let alone when practical gradient-based…
External link:
http://arxiv.org/abs/2307.01178
We give the first result for agnostically learning Single-Index Models (SIMs) with arbitrary monotone and Lipschitz activations. All prior work either held only in the realizable setting or required the activation to be known. Moreover, we only require…
External link:
http://arxiv.org/abs/2306.10615
Author:
Ravula, Sriram, Gorti, Varun, Deng, Bo, Chakraborty, Swagato, Pingenot, James, Mutnury, Bhyrav, Wallace, Doug, Winterberg, Doug, Klivans, Adam, Dimakis, Alexandros G.
A key problem when modeling signal integrity for passive filters and interconnects in IC packages is the need for multiple S-parameter measurements within a desired frequency band to obtain adequate resolution. These samples are often computationally…
External link:
http://arxiv.org/abs/2306.04001