Showing 1 - 10 of 32 for the search: '"SAHA, ANKAN"'
We consider the problem of solving a large-scale Quadratically Constrained Quadratic Program. Such problems occur naturally in many scientific and web applications. Although there are efficient methods which tackle this problem, they are mostly not …
External link:
http://arxiv.org/abs/1710.01163
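For orientation, a generic QCQP has the form below; the matrices $P_i$, vectors $q_i$ and scalars $r_i$ are generic notation, not taken from the paper:
\[
\min_{x \in \mathbb{R}^d} \ \tfrac{1}{2} x^\top P_0 x + q_0^\top x
\quad \text{s.t.} \quad \tfrac{1}{2} x^\top P_i x + q_i^\top x + r_i \le 0, \qquad i = 1, \dots, m,
\]
with symmetric $P_i$; the problem is convex when every $P_i$ is positive semidefinite.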
Ranking items to be recommended to users is one of the main problems in large scale social media applications. This problem can be set up as a multi-objective optimization problem to allow for trading off multiple, potentially conflicting objectives …
External link:
http://arxiv.org/abs/1602.04391
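As a generic illustration of such a trade-off (not the specific method of the paper; the symbols $\pi$, $U_k$ and $\lambda_k$ are ours), a weighted scalarization of $K$ objectives is
\[
\max_{\pi} \ \sum_{k=1}^{K} \lambda_k\, U_k(\pi), \qquad \lambda_k \ge 0, \quad \sum_{k=1}^{K} \lambda_k = 1,
\]
where $\pi$ is a ranking policy and each $U_k$ is one objective (e.g., clicks vs. diversity); sweeping the weights traces out different points on the Pareto frontier.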
Multi-objective optimization (MOO) is a well-studied problem for several important recommendation problems. While multiple approaches have been proposed, in this work, we focus on using constrained optimization formulations (e.g., quadratic and linear …
External link:
http://arxiv.org/abs/1602.03131
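A constrained formulation of the same trade-off, sketched generically rather than quoted from the paper, optimizes one primary objective subject to floors on the others:
\[
\max_{\pi} \ U_1(\pi) \quad \text{s.t.} \quad U_k(\pi) \ge c_k, \qquad k = 2, \dots, K,
\]
which becomes a linear or quadratic program when the objectives $U_k$ are linear or quadratic in the decision variables.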
This paper considers the stability of online learning algorithms and its implications for learnability (bounded regret). We introduce a novel quantity called {\em forward regret} that intuitively measures how good an online learning algorithm is if …
External link:
http://arxiv.org/abs/1211.6158
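The snippet is cut off before the definition; for background, the standard regret of an online learner that plays hypotheses $w_1, \dots, w_T$ against losses $\ell_1, \dots, \ell_T$ is
\[
R_T = \sum_{t=1}^{T} \ell_t(w_t) - \min_{w} \sum_{t=1}^{T} \ell_t(w),
\]
and one natural one-step-lookahead variant charges the learner $\ell_t(w_{t+1})$ instead of $\ell_t(w_t)$; this is offered only as intuition for the term, not as the paper's exact definition of forward regret.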
A Support Vector Method for multivariate performance measures was recently introduced by Joachims (2005). The underlying optimization problem is currently solved using cutting plane methods such as SVM-Perf and BMRM. One can show that these algorithms …
External link:
http://arxiv.org/abs/1202.3776
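For context, Joachims' multivariate formulation treats all $n$ labels as one joint label $\bar y \in \{-1,+1\}^n$; roughly (in our notation), it solves
\[
\min_{w,\ \xi \ge 0} \ \tfrac{1}{2}\|w\|^2 + C\,\xi
\quad \text{s.t.} \quad
w^\top\bigl(\Psi(\bar x, \bar y) - \Psi(\bar x, \bar y')\bigr) \ge \Delta(\bar y', \bar y) - \xi
\quad \forall\, \bar y' \ne \bar y,
\]
with $\Psi(\bar x, \bar y') = \sum_{i} \bar y'_i x_i$ and $\Delta$ a multivariate loss such as one minus the $F_1$ score; cutting plane solvers repeatedly add the most violated constraint.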
Nesterov's accelerated gradient methods (AGM) have been successfully applied in many machine learning areas. However, their empirical performance on training max-margin models has been inferior to existing specialized solvers. In this paper, we first …
External link:
http://arxiv.org/abs/1011.0472
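A minimal sketch of the basic accelerated gradient recursion for a smooth convex objective (illustrative only; the function name, step sizes and the quadratic example are ours, and this is not the specialized max-margin solver discussed in the paper):

    import numpy as np

    def nesterov_agm(grad, L, x0, iters=500):
        """Nesterov's accelerated gradient method for an L-smooth convex objective."""
        x = y = np.asarray(x0, dtype=float)
        t = 1.0
        for _ in range(iters):
            x_next = y - grad(y) / L                              # gradient step at the extrapolated point
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0     # momentum schedule
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)      # extrapolation
            x, t = x_next, t_next
        return x

    # Example: minimize 0.5 * ||A x - b||^2; its smoothness constant is the top eigenvalue of A^T A.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    L = np.linalg.eigvalsh(A.T @ A).max()
    x_min = nesterov_agm(lambda x: A.T @ (A @ x - b), L, np.zeros(2))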
Author:
Saha, Ankan, Tewari, Ambuj
Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in machine learning. Reasons for this include its simplicity, speed and stability, as well as its competitive performance on $\ell_1$ regularized …
External link:
http://arxiv.org/abs/1005.2146
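A minimal sketch of cyclic coordinate descent on an $\ell_1$-regularized least-squares objective, one standard setting in which the method is competitive (the lasso objective, function names and sweep count are illustrative, not taken from the paper):

    import numpy as np

    def soft_threshold(z, lam):
        """Soft-thresholding, the proximal operator of lam * |.|."""
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def cyclic_cd_lasso(A, b, lam, sweeps=100):
        """Cyclic coordinate descent for 0.5 * ||A x - b||^2 + lam * ||x||_1."""
        n, d = A.shape
        x = np.zeros(d)
        col_sq = (A ** 2).sum(axis=0)      # squared column norms ||A_j||^2
        r = b - A @ x                      # residual, updated incrementally
        for _ in range(sweeps):
            for j in range(d):
                if col_sq[j] == 0.0:
                    continue
                rho = A[:, j] @ r + col_sq[j] * x[j]      # correlation with the partial residual
                x_new = soft_threshold(rho, lam) / col_sq[j]
                r += A[:, j] * (x[j] - x_new)             # keep the residual consistent
                x[j] = x_new
        return x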
Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (M$^3$N) is an effective approach. All state-of-the-art algorithms for optimizing M$^3$N objectives take at least $O(1/\epsilon)$ …
External link:
http://arxiv.org/abs/1003.1354
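For context, the regularized M$^3$N training objective in a standard form (our notation: joint feature map $\phi$, label loss $\Delta$, regularization weight $\lambda$) is
\[
\min_{w} \ \frac{\lambda}{2}\|w\|^2 + \frac{1}{m}\sum_{i=1}^{m} \max_{y}\Bigl[\Delta(y_i, y) - w^\top\bigl(\phi(x_i, y_i) - \phi(x_i, y)\bigr)\Bigr],
\]
a nonsmooth convex problem, which is why the iteration complexity of solvers is stated in terms of the target accuracy $\epsilon$.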
Regularized risk minimization with the binary hinge loss and its variants lies at the heart of many machine learning problems. Bundle methods for regularized risk minimization (BMRM) and the closely related SVMStruct are considered the best general …
External link:
http://arxiv.org/abs/0909.1334
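The objective referred to here is the standard hinge-loss regularized risk (written with a regularization weight $\lambda$ of our choosing):
\[
\min_{w} \ \frac{\lambda}{2}\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i \langle w, x_i \rangle\bigr),
\]
i.e., the linear SVM; BMRM-style bundle methods minimize it by building a piecewise-linear lower bound on the empirical risk from subgradients.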
Given $n$ points in a $d$-dimensional Euclidean space, the Minimum Enclosing Ball (MEB) problem is to find the ball with the smallest radius which contains all $n$ points. We give an $O(nd\,\mathcal{Q}/\sqrt{\epsilon})$ approximation algorithm for producing …
External link:
http://arxiv.org/abs/0909.1062
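As a simple baseline for the MEB problem (the classic Badoiu-Clarkson style update, not the $O(nd\,\mathcal{Q}/\sqrt{\epsilon})$ algorithm of the paper; the function name and iteration budget are illustrative):

    import numpy as np

    def meb_badoiu_clarkson(points, iters=1000):
        """Approximate Minimum Enclosing Ball: repeatedly step the center toward the farthest point."""
        P = np.asarray(points, dtype=float)
        c = P[0].copy()                     # start at an arbitrary input point
        for k in range(1, iters + 1):
            far = P[np.argmax(np.linalg.norm(P - c, axis=1))]   # current farthest point
            c += (far - c) / (k + 1)        # shrinking step toward it
        radius = np.linalg.norm(P - c, axis=1).max()
        return c, radius

    # Example usage on 200 random points in R^3
    pts = np.random.default_rng(0).normal(size=(200, 3))
    center, r = meb_badoiu_clarkson(pts)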