Showing 1 - 10 of 74
for search: '"DENG Yuyang"'
Published in:
He jishu, Vol 45, Iss 5, Pp 34-40 (2022)
Background: 210Bi is one of the most important interferences in the analysis of 90Sr by extractive chromatography. It is verified by the leaching curve that bismuth and yttrium are completely adsorbed on the chromatographic column in the bis(2-ethylhex…
External link:
https://doaj.org/article/f4dcd35b74b048d5bec6f728609b0647
Stochastic compositional minimax problems are prevalent in machine learning, yet only limited convergence results have been established for this class of problems. In this paper, we propose a formal definition of the stochastic compositional minimax problem… (an illustrative formulation follows the link below)
External link:
http://arxiv.org/abs/2408.12505
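As a sketch of what such a formal definition might look like (an assumed generic form, not necessarily the paper's exact one), a stochastic compositional minimax problem can be written as

\min_{x} \max_{y} \; F(x, y) = f\big(g(x), y\big), \qquad g(x) = \mathbb{E}_{\zeta}\!\left[ g_{\zeta}(x) \right], \qquad f(u, y) = \mathbb{E}_{\xi}\!\left[ f_{\xi}(u, y) \right],

where the inner mapping g and the outer function f are accessible only through stochastic samples g_\zeta and f_\xi; this nesting of expectations is what distinguishes the compositional setting from ordinary stochastic minimax.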
Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization. However, a rigorous understanding of how the representation function learned on an unlabeled dataset affects… (the two-stage pipeline is sketched after the link below)
External link:
http://arxiv.org/abs/2403.06871
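For concreteness, the two-stage pipeline the abstract refers to can be sketched as follows (the notation is assumed for illustration, not taken from the paper): first learn a representation on unlabeled data, then fit a predictor on top of it with labels,

\hat g \in \arg\min_{g \in \mathcal{G}} \; \mathbb{E}_{x \sim \mathcal{D}_u}\!\left[ \ell_{\mathrm{pre}}(g; x) \right], \qquad \hat h \in \arg\min_{h \in \mathcal{H}} \; \frac{1}{m} \sum_{i=1}^{m} \ell\big( h(\hat g(x_i)), y_i \big),

and the theoretical question is how the choice of \hat g in the first stage affects the generalization of the composite model \hat h \circ \hat g.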
Author:
Deng, Yuyang; Qiao, Mingda
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions, while minimizing the number of samples drawn from them in total. Unlike in the usual collaborative learning setup… (a formal statement of this goal is sketched after the link below)
External link:
http://arxiv.org/abs/2402.10445
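One way to state the variant described above (the symbols are chosen here for illustration): given $n$ distributions $D_1, \dots, D_n$, the learner must output classifiers $h_1, \dots, h_n$ such that, with probability at least $1 - \delta$,

\mathrm{err}_{D_i}(h_i) = \Pr_{(x,y) \sim D_i}\!\left[ h_i(x) \neq y \right] \le \varepsilon \quad \text{for every } i \in \{1, \dots, n\},

while minimizing the total number of samples drawn across all $n$ distributions; the usual collaborative PAC setting instead asks for a single classifier $h$ that is accurate on every $D_i$.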
Since its launch, ChatGPT has achieved remarkable success as a versatile conversational AI platform, drawing millions of users worldwide and garnering widespread recognition across academic, industrial, and general communities. This paper aims to point…
External link:
http://arxiv.org/abs/2312.10078
This paper advocates a new paradigm, Personalized Empirical Risk Minimization (PERM), to facilitate learning from heterogeneous data sources without imposing stringent constraints on the computational resources shared by participating devices. In PERM, we… (a generic form of the idea is sketched after the link below)
External link:
http://arxiv.org/abs/2310.17761
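A generic weighted-ERM form of personalization, offered only as an illustration of the idea (the paper's PERM objective may differ):

\hat w_i \in \arg\min_{w} \; \sum_{j=1}^{m} \alpha_{ij} \, \hat L_j(w), \qquad \hat L_j(w) = \frac{1}{n_j} \sum_{k=1}^{n_j} \ell\big( w; z_k^{(j)} \big),

where client i aggregates the empirical risks \hat L_j of all m data sources with personalized weights \alpha_{ij} reflecting how relevant source j is to client i.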
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors. This attack poses significant privacy challenges for distributed learning from clients with sensitive data, where clients are required… (a minimal attack sketch follows the link below)
External link:
http://arxiv.org/abs/2309.13016
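To make the attack model concrete, here is a minimal gradient-matching reconstruction sketch in the spirit of such attacks (the toy model, shapes, optimizer, and iteration count are illustrative assumptions, not the paper's setup): the attacker optimizes a dummy input and soft label so that their gradient matches the gradient the client shared.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and one private example (assumed shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss_fn = nn.CrossEntropyLoss()

# The gradient the client would share with the server.
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker: optimize a dummy input and a soft label so that their
# induced gradient matches the shared one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft label, optimized jointly
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    # Cross-entropy against the (softmaxed) dummy label.
    dummy_loss = torch.sum(torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Squared distance between dummy gradient and shared gradient.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).abs().mean().item())

Real attacks of this family add image priors and careful schedules; the point of the sketch is only that matching gradients can drive the dummy input toward the private one.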
We consider the problem of learning a model from multiple heterogeneous sources with the goal of performing well on a new target distribution. The goal of the learner is to mix these data sources in a target-distribution-aware way and simultaneously minimize… (one common template is sketched after the link below)
External link:
http://arxiv.org/abs/2309.10736
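One common way to formalize target-aware mixing, given here as an assumed template rather than the paper's exact objective: learn mixture weights \alpha over the k sources jointly with the model,

\min_{h, \, \alpha \in \Delta_{k}} \; \sum_{i=1}^{k} \alpha_i \, \hat L_i(h) \; + \; \lambda \, d\Big( \sum_{i=1}^{k} \alpha_i D_i, \; T \Big),

where \hat L_i is the empirical risk on source i, T is the target distribution, d is some distribution discrepancy, and \Delta_k is the probability simplex.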
Recent studies demonstrated that adversarially robust learning under $\ell_\infty$ attack is harder to generalize to different domains than standard domain adaptation. How to transfer robustness across different domains has been a key question in… (the robust risk is defined after the link below)
External link:
http://arxiv.org/abs/2302.12351
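The standard object behind "adversarially robust learning under $\ell_\infty$ attack" is the robust risk

R_{\mathrm{rob}}(h) = \mathbb{E}_{(x, y) \sim D} \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} \ell\big( h(x + \delta), y \big) \Big],

and the transfer question is how small robust risk under a source distribution constrains robust risk under a shifted target distribution; only this definition is standard, the bounds relating the two are the paper's contribution.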
Despite the established convergence theory of Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods for convex-concave minimax problems, little is known about the theoretical guarantees of these methods in nonconvex settings. T… (the standard updates of both methods are given after the link below)
External link:
http://arxiv.org/abs/2210.09382
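For reference, the two methods named in the abstract have the following standard updates for \min_x \max_y f(x, y) with step size \eta (these textbook forms are included for orientation; the paper's analysis concerns their nonconvex behavior).

Extragradient (EG):
x_{t+1/2} = x_t - \eta \nabla_x f(x_t, y_t), \qquad y_{t+1/2} = y_t + \eta \nabla_y f(x_t, y_t),
x_{t+1} = x_t - \eta \nabla_x f(x_{t+1/2}, y_{t+1/2}), \qquad y_{t+1} = y_t + \eta \nabla_y f(x_{t+1/2}, y_{t+1/2}).

Optimistic GDA (OGDA):
x_{t+1} = x_t - 2\eta \nabla_x f(x_t, y_t) + \eta \nabla_x f(x_{t-1}, y_{t-1}),
y_{t+1} = y_t + 2\eta \nabla_y f(x_t, y_t) - \eta \nabla_y f(x_{t-1}, y_{t-1}).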