Showing 1 - 10 of 50 for the search: '"Golovin, Daniel"'
Author:
Song, Xingyou, Zhang, Qiuyi, Lee, Chansoo, Fertig, Emily, Huang, Tzu-Kuo, Belenki, Lior, Kochanski, Greg, Ariafar, Setareh, Vasudevan, Srinivas, Perel, Sagi, Golovin, Daniel
Google Vizier has performed millions of optimizations and accelerated numerous research and production systems at Google, demonstrating the success of Bayesian optimization as a large-scale service. Over multiple years, its algorithm has been improved …
External link:
http://arxiv.org/abs/2408.11527
Author:
Golovin, Daniel, Bartok, Gabor, Chen, Eric, Donahue, Emily, Huang, Tzu-Kuo, Kokiopoulou, Efi, Qin, Ruoyan, Sarda, Nikhil, Sybrandt, Justin, Tjeng, Vincent
In many software systems, heuristics are used to make decisions - such as cache eviction, task scheduling, and information presentation - that have a significant impact on overall system behavior. While machine learning may outperform these heuristics …
External link:
http://arxiv.org/abs/2304.13033
Vizier is the de-facto blackbox and hyperparameter optimization service across Google, having optimized some of Google's largest products and research efforts. To operate at the scale of tuning thousands of users' critical systems, Google Vizier solves …
External link:
http://arxiv.org/abs/2207.13676
Author:
Golovin, Daniel, Zhang, Qiuyi
Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective $f(x)$, given evaluations at adaptively chosen inputs $x$. In this paper, we consider multi-objective optimization, where …
External link:
http://arxiv.org/abs/2006.04655
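In the multi-objective setting above, a single scalar objective is replaced by a vector of objectives, and "optimality" becomes Pareto optimality. A minimal generic sketch of the Pareto-dominance relation and the resulting non-dominated front (illustrative only; the function names are ours, not the paper's):

```python
def dominates(y1, y2):
    """True if objective vector y1 Pareto-dominates y2 under minimization:
    y1 is no worse in every coordinate and strictly better in at least one."""
    return all(a <= b for a, b in zip(y1, y2)) and any(a < b for a, b in zip(y1, y2))

def pareto_front(points):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the vectors (1, 2), (2, 1), and (2, 2), the first two are non-dominated while (2, 2) is dominated by (1, 2).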
Published in:
ICLR 2020 Spotlight
Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on …
External link:
http://arxiv.org/abs/1911.06317
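The core gradientless idea can be sketched as follows: at each step, probe random directions at several geometrically spaced radii and keep any candidate that improves the objective. This is a simplified illustration of the direct-search flavor of GLD, not the paper's exact algorithm (the radius schedule and sampling details here are our assumptions):

```python
import math
import random

def gld_step(f, x, max_radius, num_scales=5):
    """One simplified GradientLess-Descent-style step: try one random
    perturbation at each of several geometrically decreasing radii and
    keep the best improving candidate (or stay put if none improves)."""
    best_x, best_f = x, f(x)
    d = len(x)
    for k in range(num_scales):
        r = max_radius / (2 ** k)
        # Sample a uniformly random direction via a normalized Gaussian.
        v = [random.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        cand = [xi + r * vi / norm for xi, vi in zip(x, v)]
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x

def gld(f, x0, max_radius=1.0, steps=200):
    """Iterate gld_step; only improving moves are ever accepted."""
    x = x0
    for _ in range(steps):
        x = gld_step(f, x, max_radius)
    return x
```

Because only improving moves are accepted, the objective value is monotonically non-increasing across steps; the geometric radius schedule lets the same loop make coarse progress far from the optimum and fine progress near it.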
Which ads should we display in sponsored search in order to maximize our revenue? How should we dynamically rank information sources to maximize the value of the ranking? These applications exhibit strong diminishing returns: Redundancy decreases the …
External link:
http://arxiv.org/abs/1407.1082
We reduce the memory footprint of popular large-scale online learning methods by projecting our weight vector onto a coarse discrete set using randomized rounding. Compared to standard 32-bit float encodings, this reduces RAM usage by more than 50% …
External link:
http://arxiv.org/abs/1303.4664
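The randomized-rounding primitive mentioned above can be sketched generically: round each weight to one of the two nearest grid points with probabilities chosen so the rounded value is unbiased. This is an illustrative sketch of the standard technique, not the paper's exact encoding:

```python
import math
import random

def randomized_round(w, grid_step):
    """Round w to one of the two adjacent points on a grid of spacing
    grid_step, with probabilities chosen so that E[result] = w."""
    lo = math.floor(w / grid_step) * grid_step
    hi = lo + grid_step
    p_hi = (w - lo) / grid_step  # probability of rounding up
    return hi if random.random() < p_hi else lo
```

Unbiasedness follows directly: lo + p_hi * grid_step = w, so the expected rounded value equals the original weight even though each stored value needs only enough bits to index the coarse grid.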
Author:
Golovin, Daniel, Krause, Andreas
Many important problems in discrete optimization require maximization of a monotonic submodular function subject to matroid constraints. For these problems, a simple greedy algorithm is guaranteed to obtain near-optimal solutions. In this article, we …
External link:
http://arxiv.org/abs/1101.4450
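The greedy algorithm referenced above can be sketched for the simplest matroid, the uniform matroid (a cardinality constraint |S| ≤ k), where greedy achieves the classic (1 - 1/e) approximation for monotone submodular f; for general matroid constraints, plain greedy gives a 1/2 approximation. A minimal generic sketch (our function names, not the article's code):

```python
def greedy_submodular(ground_set, f, k):
    """Greedy maximization of a monotone submodular set function f under
    the cardinality constraint |S| <= k: repeatedly add the element with
    the largest marginal gain f(S + e) - f(S)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element adds value; stop early
            break
        S.add(best)
    return S
```

A typical monotone submodular example is set cover, where f(S) counts the items covered by the sets chosen in S; diminishing returns holds because an element's marginal coverage can only shrink as S grows.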
We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free …
External link:
http://arxiv.org/abs/1010.3091
Author:
Golovin, Daniel
In previous work, the author introduced the B-treap, a uniquely represented B-tree analogue, and proved strong performance guarantees for it. However, the B-treap maintains complex invariants and is complicated to implement. In this paper we introduce …
External link:
http://arxiv.org/abs/1005.0662