Showing 1 - 10 of 56 for search: '"Ma Chenxin"'
Author:
Wang, Guoli, Ma, Chenxin
Published in:
Kybernetes, 2023, Vol. 53, Issue 4, pp. 1450-1483.
External link:
http://www.emeraldinsight.com/doi/10.1108/K-09-2022-1233
Author:
Ma, Chenxin, Ji, Xiang
Published in:
Finance Research Letters, Vol. 65, July 2024.
Author:
Jahani, Majid, He, Xi, Ma, Chenxin, Mokhtari, Aryan, Mudigere, Dheevatsa, Ribeiro, Alejandro, Takáč, Martin
In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which sample size is gradually increasing to quickly obtain a solution whose empirical loss is under satisfactory statistical accuracy. Our proposed method…
External link:
http://arxiv.org/abs/1810.11507
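The growing-sample-size idea in the abstract can be illustrated compactly. Below is a minimal single-machine sketch, not the authors' DANCE implementation: it runs a few Newton-CG steps on a subsample, doubles the subsample, and warm-starts the next stage. The logistic-regression loss, stage sizes, and step rule are all my assumptions.

import numpy as np

def loss_grad(w, X, y, lam):
    z = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(-z))                 # sigmoid(y_i * x_i^T w)
    loss = np.mean(np.logaddexp(0.0, -z)) + 0.5 * lam * (w @ w)
    grad = X.T @ ((p - 1.0) * y) / len(y) + lam * w
    return loss, grad

def hess_vec(w, X, y, lam, v):
    # Hessian-vector product; avoids forming the Hessian explicitly.
    z = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(-z))
    d = p * (1.0 - p)
    return X.T @ (d * (X @ v)) / len(y) + lam * v

def cg(hv, b, iters=50, tol=1e-8):
    # Plain conjugate gradient for H s = b, given hv(v) = H v.
    x, r = np.zeros_like(b), b.copy()
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        hp = hv(p)
        a = rs / (p @ hp)
        x += a * p
        r -= a * hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def dance_sketch(X, y, lam=1e-3, n0=128, newton_steps=3):
    n, dim = X.shape
    w = np.zeros(dim)
    m = min(n0, n)
    while True:
        Xs, ys = X[:m], y[:m]                    # current subsample
        for _ in range(newton_steps):
            _, g = loss_grad(w, Xs, ys, lam)
            s = cg(lambda v: hess_vec(w, Xs, ys, lam, v), g)
            w -= s                               # a line search would go here in a robust version
        if m == n:
            return w
        m = min(2 * m, n)                        # grow the sample size geometrically

rng = np.random.default_rng(0)
X = rng.standard_normal((4096, 20))
y = np.sign(X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(4096))
w = dance_sketch(X, y)
print("final full-data loss:", loss_grad(w, X, y, 1e-3)[0])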
Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. Confronting this issue, a communication-efficient primal-dual coordinate ascent…
External link:
http://arxiv.org/abs/1711.05305
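To make the communication pattern concrete, here is a toy simulation of a communication-efficient primal-dual coordinate method in the spirit the snippet describes; it is my own construction, not the paper's algorithm. K simulated workers run closed-form dual coordinate steps for the squared loss on disjoint sample blocks and exchange only one d-dimensional vector per round; the conservative 1/K averaging combiner is one standard safe choice.

import numpy as np

def distributed_cd(X, y, lam=0.01, K=4, rounds=30, local_steps=300, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)                          # dual variables, one per sample
    w = np.zeros(d)                              # kept equal to X.T @ alpha / (lam * n)
    blocks = np.array_split(np.arange(n), K)     # disjoint sample blocks per worker
    for _ in range(rounds):
        updates = []
        for blk in blocks:                       # each iteration simulates one worker
            dw = np.zeros(d)
            da = np.zeros(len(blk))
            for _ in range(local_steps):
                j = rng.integers(len(blk))
                i = blk[j]
                xi = X[i]
                # closed-form coordinate step for the squared loss, against the
                # stale shared w plus this worker's own accumulated correction
                step = (y[i] - xi @ (w + dw) - (alpha[i] + da[j])) / (1 + xi @ xi / (lam * n))
                da[j] += step
                dw += step * xi / (lam * n)
            updates.append((blk, da, dw))
        for blk, da, dw in updates:              # one d-vector per worker per round
            alpha[blk] += da / K                 # conservative averaging combiner
            w += dw / K
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(2000)
print("recovery error:", np.linalg.norm(distributed_cd(X, y) - w_true))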
Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences
In this work we introduce the concept of an Underestimate Sequence (UES), which is motivated by Nesterov's estimate sequence. Our definition of a UES utilizes three sequences, one of which is a lower bound (or under-estimator) of the objective function…
External link:
http://arxiv.org/abs/1710.03695
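The certificate idea can be shown with a much simpler under-estimator than the paper's UES construction. The sketch below, my own simplification, uses the strong-convexity lower bound f(x) - ||grad f(x)||^2/(2*mu) <= f*: the gap between the current function value and the best lower bound seen so far is a computable certificate of suboptimality.

import numpy as np

def certified_gradient_descent(f, grad, x0, L, mu, eps=1e-8, max_iter=10_000):
    x = x0.astype(float)
    lower = -np.inf
    for k in range(max_iter):
        fx, g = f(x), grad(x)
        lower = max(lower, fx - g @ g / (2 * mu))   # valid global lower bound on f*
        gap = fx - lower                            # certified suboptimality bound
        if gap <= eps:
            return x, gap, k
        x = x - g / L                               # plain gradient step
    return x, gap, max_iter

# Quadratic test problem: f(x) = 0.5 x^T A x - b^T x, so mu and L are A's extreme eigenvalues.
rng = np.random.default_rng(2)
Q = rng.standard_normal((30, 30))
A = Q.T @ Q + np.eye(30)                            # strongly convex
b = rng.standard_normal(30)
eigs = np.linalg.eigvalsh(A)
x, gap, k = certified_gradient_descent(
    lambda x: 0.5 * x @ A @ x - b @ x,
    lambda x: A @ x - b,
    np.zeros(30), L=eigs[-1], mu=eigs[0])
print(f"stopped after {k} iterations with certified gap {gap:.2e}")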
Author:
Smith, Virginia, Forte, Simone, Ma, Chenxin, Takac, Martin, Jordan, Michael I., Jaggi, Martin
The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme…
External link:
http://arxiv.org/abs/1611.02189
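A bare-bones skeleton of the framework shape described above, as I read it (an abstraction, not the released CoCoA code): workers own disjoint blocks of variables, any approximate local solver can be plugged in, and only a fixed-size shared vector crosses the network each round. The least-squares instance and the exact local solver are illustrative choices.

import numpy as np

def cocoa_like(X, y, K=4, rounds=50, local_solver=None):
    n, d = X.shape
    blocks = np.array_split(np.arange(d), K)       # disjoint feature blocks per worker
    alpha = np.zeros(d)
    v = X @ alpha                                  # shared vector, size n
    for _ in range(rounds):
        # each call simulates one worker; they would run in parallel in practice
        results = [local_solver(X[:, blk], v, y) for blk in blocks]
        for blk, d_k in zip(blocks, results):
            alpha[blk] += d_k / K                  # conservative averaging combiner
            v += X[:, blk] @ d_k / K               # only this n-vector crosses the network
    return alpha

def lstsq_local_solver(Xk, v, y):
    # One pluggable choice: exactly minimize the worker's local least-squares subproblem.
    return np.linalg.lstsq(Xk, y - v, rcond=None)[0]

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 40))
alpha_true = rng.standard_normal(40)
y = X @ alpha_true
alpha = cocoa_like(X, y, local_solver=lstsq_local_solver)
print("residual:", np.linalg.norm(X @ alpha - y))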
Author:
Ma, Chenxin, Takáč, Martin
In this paper we study an inexact damped Newton method implemented in a distributed environment. We start with the original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015]…
External link:
http://arxiv.org/abs/1603.05191
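For reference, a compact single-machine sketch of the damped Newton step that DiSCO builds on; the distributed and inexact-CG aspects are reduced to a comment, and the test problem is my choice. The step length 1/(1 + delta_k) with delta_k = sqrt(s^T H s) (the Newton decrement) needs no line search for self-concordant losses.

import numpy as np

def damped_newton(grad, hess, w, tol=1e-10, max_iter=100):
    for k in range(max_iter):
        g, H = grad(w), hess(w)
        s = np.linalg.solve(H, g)          # in DiSCO this solve is inexact (distributed CG)
        delta = np.sqrt(s @ H @ s)         # Newton decrement
        if delta ** 2 <= tol:
            return w, k
        w = w - s / (1.0 + delta)          # damped step, no line search
    return w, max_iter

# Regularized logistic regression, a standard quasi-self-concordant test case.
rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 15))
y = np.sign(X @ rng.standard_normal(15))
lam = 1e-2

def grad(w):
    p = 1 / (1 + np.exp(-y * (X @ w)))
    return X.T @ ((p - 1) * y) / len(y) + lam * w

def hess(w):
    p = 1 / (1 + np.exp(-y * (X @ w)))
    return X.T @ (X * (p * (1 - p))[:, None]) / len(y) + lam * np.eye(15)

w, iters = damped_newton(grad, hess, np.zeros(15))
print("converged in", iters, "damped Newton steps")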
Author:
Ma, Chenxin, Konečný, Jakub, Jaggi, Martin, Smith, Virginia, Jordan, Michael I., Richtárik, Peter, Takáč, Martin
With the growth of data and the necessity for distributed optimization methods, solvers that work well on a single machine must be re-designed to leverage distributed computation. Recent work in this area has been limited by focusing heavily on developing…
External link:
http://arxiv.org/abs/1512.04039
Author:
Ma, Chenxin, Takáč, Martin
In this paper we study the effect of the way that the data is partitioned in distributed optimization. The original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015]…
External link:
http://arxiv.org/abs/1510.06688
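A small numerical illustration, entirely my own construction, of why partitioning can matter for DiSCO-style methods: one machine's local Hessian acts as a preconditioner for the global one, and the eigenvalue spread of the preconditioned matrix drives the inner CG iteration count. A contiguous split of heterogeneous (here, block-wise differently scaled) data matches the global Hessian far worse than a random shuffle.

import numpy as np

def precond_condition_number(X_full, X_local, lam=1e-2):
    d = X_full.shape[1]
    H = X_full.T @ X_full / len(X_full) + lam * np.eye(d)       # global Hessian
    P = X_local.T @ X_local / len(X_local) + lam * np.eye(d)    # local preconditioner
    eigs = np.linalg.eigvals(np.linalg.solve(P, H)).real        # similar to an SPD matrix
    return eigs.max() / eigs.min()          # governs preconditioned-CG iterations

rng = np.random.default_rng(6)
# Heterogeneous dataset: two halves drawn from differently scaled distributions.
X = np.vstack([rng.standard_normal((500, 8)), 3.0 * rng.standard_normal((500, 8))])
contiguous = X[:500]                        # first machine under a contiguous split
shuffled = X[rng.permutation(1000)[:500]]   # first machine after a random shuffle
print("contiguous split:", precond_condition_number(X, contiguous))
print("random shuffle:  ", precond_condition_number(X, shuffled))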
In this paper we generalize the framework of the feasible descent method (FDM) to a randomized (R-FDM) and a coordinate-wise random feasible descent method (RC-FDM) framework. We show that the famous SDCA algorithm for optimizing the SVM dual problem…
External link:
http://arxiv.org/abs/1506.02530
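Since the snippet names SDCA on the SVM dual as the motivating instance, here is a runnable toy of the standard SDCA hinge-loss update (my own minimal version, not the paper's code): each step picks a random dual coordinate and solves its one-dimensional subproblem in closed form, with clipping that keeps the iterate inside the dual feasible box, so every step is a feasible descent step in the dual.

import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=20, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)                      # dual variables, with alpha_i * y_i in [0, 1]
    w = np.zeros(d)                          # maintained as X^T alpha / (lam * n)
    sqnorms = np.einsum("ij,ij->i", X, X)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form maximization of the dual along coordinate i
            proj = (1 - y[i] * (X[i] @ w)) / (sqnorms[i] / (lam * n)) + alpha[i] * y[i]
            da = y[i] * np.clip(proj, 0.0, 1.0) - alpha[i]
            alpha[i] += da
            w += da * X[i] / (lam * n)       # stays consistent with the dual iterate
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((800, 10))
y = np.sign(X @ rng.standard_normal(10))
w = sdca_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))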