Showing 1 - 10 of 25 results for search: '"Chen, Zaiyi"'
With the rising popularity of Transformer-based large language models (LLMs), reducing their high inference costs has become a significant research focus. One effective approach is to compress the long input contexts. Existing methods typically lever…
External link:
http://arxiv.org/abs/2406.13618
Projected Gradient Ascent (PGA) is the most commonly used optimization scheme in machine learning and operations research. Nevertheless, numerous studies and examples have shown that PGA methods may fail to achieve the tight approximation r…
External link:
http://arxiv.org/abs/2401.08330
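The projected gradient ascent scheme this entry refers to can be sketched as follows; the linear objective and unit-ball feasible set are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_unit_ball(x):
    # Euclidean projection onto the unit L2 ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def projected_gradient_ascent(grad, x0, step=0.1, iters=100):
    # Iterate x_{t+1} = Proj(x_t + step * grad(x_t))
    x = x0
    for _ in range(iters):
        x = project_unit_ball(x + step * grad(x))
    return x

# Toy example: maximize f(x) = <c, x> over the unit ball;
# the maximizer is c / ||c||.
c = np.array([3.0, 4.0])
x = projected_gradient_ascent(lambda x: c, np.zeros(2))
# → array([0.6, 0.8])
```

Here the ascent step followed by projection reaches the boundary optimum in a few iterations and then stays fixed.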
This paper studies the online stochastic resource allocation problem (RAP) with chance constraints. The online RAP is a 0-1 integer linear programming problem where the resource consumption coefficients are revealed column by column along with the co…
External link:
http://arxiv.org/abs/2303.03254
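In an online RAP of this kind, columns (requests) arrive one at a time and an accept/reject decision must be made immediately. A minimal price-based greedy sketch, with an assumed fixed shadow price rather than the paper's actual policy:

```python
def online_greedy_allocation(revenues, costs, budget, price):
    # Accept an arriving request (column) if its revenue covers the
    # shadow-price value of the resource it consumes and capacity remains.
    remaining = float(budget)
    decisions = []
    for r, c in zip(revenues, costs):
        accept = (r >= price * c) and (c <= remaining)
        decisions.append(int(accept))
        if accept:
            remaining -= c
    return decisions

# Requests revealed in order; budget 3 units, shadow price 1.5 per unit
revs = [5.0, 1.0, 4.0, 2.0]
costs = [2.0, 2.0, 1.0, 1.0]
decs = online_greedy_allocation(revs, costs, budget=3.0, price=1.5)
# → [1, 0, 1, 0]
```

The second request fails the price test and the last one fails the capacity check, illustrating how irrevocable online decisions interact with the budget.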
Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximizati…
External link:
http://arxiv.org/abs/2208.08681
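A standard building block for monotone continuous DR-submodular maximization is the Frank-Wolfe / continuous-greedy update. A minimal single-agent sketch over the box [0, 1]^n, with an assumed toy concave (hence DR-submodular) objective; the papers above study more elaborate decentralized and online variants:

```python
import numpy as np

def frank_wolfe_dr_submodular(grad, n, T=100):
    # At each step, solve the linear maximization <grad(x), v> over
    # [0, 1]^n (take v_i = 1 where the gradient is positive) and move
    # a fraction 1/T toward that vertex.
    x = np.zeros(n)
    for _ in range(T):
        v = (grad(x) > 0).astype(float)  # linear oracle over the box
        x += v / T
    return np.clip(x, 0.0, 1.0)

# Toy objective f(x) = sum(sqrt(x_i + 1)); its gradient is always positive,
# so the iterates march to the all-ones corner.
g = lambda x: 0.5 / np.sqrt(x + 1.0)
x = frank_wolfe_dr_submodular(g, n=3)
```

Because the feasible moves never decrease any coordinate, monotonicity of the objective is exploited directly.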
In this paper, we revisit the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set, which finds wide real-world applications in machine learning, economics, and operations research. At first, w…
External link:
http://arxiv.org/abs/2208.07632
This paper studies the online stochastic resource allocation problem (RAP) with chance constraints and conditional expectation constraints. The online RAP is an integer linear programming problem where resource consumption coefficients are revealed c…
External link:
http://arxiv.org/abs/2203.16818
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has wide applications in machine learning and operations research. We present a boosting framework covering gradient asce…
External link:
http://arxiv.org/abs/2201.00703
In this paper, we investigate the online allocation problem of maximizing the overall revenue subject to both lower and upper bound constraints. Compared to the extensively studied online problems with only resource upper bounds, the two-sided constr…
External link:
http://arxiv.org/abs/2112.13964
Published in:
Front. Comput. Sci. 14, 145309 (2020)
Adaptive learning rate methods have been successfully applied in many fields, especially in training deep neural networks. Recent results have shown that adaptive methods with exponentially increasing weights on squared past gradients (i.e., ADAM, RMSP…
External link:
http://arxiv.org/abs/2101.00238
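The exponential weighting of squared past gradients mentioned in this entry (as in RMSProp and ADAM's second-moment estimate) can be sketched as a single update step; the toy quadratic objective and hyperparameters below are illustrative assumptions:

```python
import numpy as np

def rmsprop_step(x, g, v, lr=0.01, beta=0.9, eps=1e-8):
    # v is an exponential moving average of squared gradients, so recent
    # gradients carry exponentially larger weight than old ones; the step
    # is the raw gradient rescaled by sqrt(v).
    v = beta * v + (1.0 - beta) * g * g
    x = x - lr * g / (np.sqrt(v) + eps)
    return x, v

# Minimize f(x) = x^2 from x = 5; the gradient is 2x.
x, v = 5.0, 0.0
for _ in range(2000):
    x, v = rmsprop_step(x, 2.0 * x, v)
```

Because the update divides by sqrt(v), the effective step size is roughly `lr` regardless of the gradient's scale, which is the behavior the weighting scheme is designed to produce.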
Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of approaches have been proposed for learning with noisy labels, many open issues remain. In this paper, we show that…
External link:
http://arxiv.org/abs/1908.06112