Showing 1 - 10 of 287 for search: '"Wang, Puyu"'
While considerable theoretical progress has been devoted to the study of metric and similarity learning, a full understanding of their generalization behavior is still missing. In this paper, we study the generalization performance of metric and similarity learning by leveraging … A toy pairwise-loss sketch follows below.
External link:
http://arxiv.org/abs/2405.06415
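The entry above concerns losses that couple pairs of examples. Here is a minimal sketch, assuming a squared Mahalanobis distance and a hinge-style pairwise loss; the matrix M, threshold b, and all names are illustrative assumptions, not the paper's algorithm.

import numpy as np

def pairwise_hinge_loss(M, b, x, x_prime, y):
    """Hinge loss on one pair; y = +1 for similar pairs, y = -1 for dissimilar.
    d_M(x, x') = (x - x')^T M (x - x') is the squared Mahalanobis distance
    induced by a positive semi-definite matrix M; b is a distance threshold."""
    diff = x - x_prime
    dist = diff @ M @ diff
    # Similar pairs are penalized when dist exceeds b - 1, dissimilar pairs
    # when dist falls below b + 1 (unit margin on either side of b).
    return max(0.0, 1.0 - y * (b - dist))

The empirical risk averages this loss over all O(n^2) dependent pairs, which is exactly what makes generalization analysis for metric and similarity learning delicate.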
In this paper, we are concerned with the generalization performance of non-parametric estimation for pairwise learning. Most of the existing work requires the hypothesis space to be convex or a VC-class, and the loss to be convex. However, these restrictions …
External link:
http://arxiv.org/abs/2305.19640
Recently, significant progress has been made in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer … A toy GD training loop is sketched below.
External link:
http://arxiv.org/abs/2305.16891
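As a companion to the entry above, here is a minimal full-batch GD loop for a one-hidden-layer network under squared loss; the sizes, step size, and initialization are illustrative assumptions, not the paper's setting.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                  # toy inputs
y = rng.normal(size=(64, 1))                  # toy targets
W1 = rng.normal(size=(5, 16)) / np.sqrt(5)    # hidden-layer weights
W2 = rng.normal(size=(16, 1)) / np.sqrt(16)   # output weights
eta = 0.05                                    # step size

for step in range(200):
    H = np.tanh(X @ W1)                       # hidden activations
    err = H @ W2 - y                          # least-squares residual
    # Backpropagate the mean-squared-error gradient through both layers.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * (1.0 - H ** 2)    # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= eta * grad_W1
    W2 -= eta * grad_W2

Stability-based bounds of the kind referenced above track how these iterates change when a single training example is replaced.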
Recently, a large amount of work has been devoted to the study of Markov chain stochastic gradient methods (MC-SGMs), mainly focusing on their convergence analysis for solving minimization problems. In this paper, we provide a comprehensive generalization … A toy Markov-chain sampling sketch follows below.
External link:
http://arxiv.org/abs/2209.08005
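To make the setting concrete, here is a hypothetical MC-SGM sketch for least squares in which the sampled index follows a random walk on a ring of data indices rather than being drawn i.i.d.; the chain, step size, and problem are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)
i = 0                                         # current state of the index chain

for t in range(1, 2001):
    i = (i + rng.choice([-1, 1])) % n         # Markov transition on a ring
    grad = (A[i] @ w - b[i]) * A[i]           # stochastic gradient at state i
    w -= grad / np.sqrt(t)                    # decaying step size

Because consecutive indices are correlated rather than independent, the standard i.i.d. generalization arguments do not apply directly, which motivates a dedicated analysis.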
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. This paper addresses the practical and theoretical importance of developing …
External link:
http://arxiv.org/abs/2209.04188
Author:
Yu, Fengchen, Wang, Puyu, Liu, Lin, Li, Hongliang, Zhang, Zhengyong, Dai, Yuping, Wang, Fanglong, Chen, Puchen, Zhang, Mingyu, Gao, Yu
Published in:
Science of the Total Environment, vol. 944, 20 September 2024
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances. It instantiates many important machine learning tasks such as bipartite ranking and metric learning. A popular approach to handling streaming data in pairwise learning … A toy online update is sketched below.
External link:
http://arxiv.org/abs/2111.12050
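A common way to sidestep the quadratic number of pairs in streaming data is to pair each arriving example with the previous one. A minimal sketch under that assumption; the pairing rule and squared pairwise loss are illustrative, not the paper's algorithm.

import numpy as np

def ogd_pairwise(stream, d, eta=0.1):
    """Online gradient descent over a stream of (x, y) examples, using the
    squared pairwise loss (w @ (x - x') - (y - y'))^2 on consecutive pairs."""
    w = np.zeros(d)
    prev = None
    for x, y in stream:
        if prev is not None:
            xp, yp = prev
            diff = x - xp
            grad = 2.0 * (w @ diff - (y - yp)) * diff
            w -= eta * grad
        prev = (x, y)                 # keep only O(1) past examples in memory
    return w

# Usage on a synthetic stream:
rng = np.random.default_rng(4)
stream = ((rng.normal(size=3), rng.normal()) for _ in range(1000))
w = ogd_pairwise(stream, d=3)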
Published in:
Neurocomputing, vol. 585, 7 June 2024
Randomized coordinate descent (RCD) is a popular optimization algorithm with wide applications to various machine learning problems, which has motivated extensive theoretical analysis of its convergence behavior. By comparison, there is no work … A toy RCD iteration is sketched below.
External link:
http://arxiv.org/abs/2108.07414
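For concreteness, a minimal RCD sketch for least squares, updating one uniformly sampled coordinate per step by exact minimization along that coordinate; the problem instance and iteration budget are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d)
w = np.zeros(d)
col_sq = (A ** 2).sum(axis=0)                 # per-coordinate curvature ||A_j||^2

for t in range(5000):
    j = rng.integers(d)                       # sample a coordinate uniformly
    r = A @ w - b                             # full residual, recomputed for clarity
    # Exact line search along coordinate j: w_j <- w_j - A_j^T r / ||A_j||^2.
    w[j] -= A[:, j] @ r / col_sq[j]

Recomputing the full residual each step is O(nd) and done here only for readability; practical implementations maintain r incrementally at O(n) per update.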
Published in:
Applied and Computational Harmonic Analysis, vol. 56 (2022), pp. 306-336
In this paper, we are concerned with differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO). Most of the existing work requires the loss to be Lipschitz continuous and strongly smooth … A toy DP-SGD step is sketched below.
External link:
http://arxiv.org/abs/2101.08925
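The standard recipe behind differentially private SGD is to clip per-example gradients (bounding sensitivity) and add Gaussian noise calibrated to the clipping norm. A minimal sketch for least squares; the clipping threshold, noise scale sigma, and loss are illustrative assumptions, and sigma must be calibrated to the desired (epsilon, delta) privacy budget.

import numpy as np

rng = np.random.default_rng(3)

def dp_sgd_step(w, X_batch, y_batch, eta=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-example gradient to norm <= clip,
    average, then add Gaussian noise of scale sigma * clip / batch_size."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        g = (x @ w - y) * x                           # per-example gradient
        norm = np.linalg.norm(g)
        grads.append(g * min(1.0, clip / max(norm, 1e-12)))  # clip safely
    noisy = np.mean(grads, axis=0) + rng.normal(
        scale=sigma * clip / len(X_batch), size=w.shape)
    return w - eta * noisy

# One step on a toy batch:
X = rng.normal(size=(32, 5))
y = rng.normal(size=32)
w = dp_sgd_step(np.zeros(5), X, y)

Clipping makes the mechanism's sensitivity explicit even when the loss is not globally Lipschitz, which is one way around the smoothness and Lipschitz restrictions mentioned above.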