Showing 1 - 10 of 402 for search: '"Jin, Kexin"'
Deep neural networks and other modern machine learning models are often susceptible to adversarial attacks. Indeed, an adversary may often be able to change a model's prediction through a small, directed perturbation of the model's input - an issue i…
External link:
http://arxiv.org/abs/2407.08678
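The small, directed input perturbation described above can be illustrated with a one-step gradient-sign attack (FGSM). This is a generic sketch, not the method of the linked paper; `model`, `loss_fn`, and `eps` are placeholder assumptions.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps=0.03):
    """Minimal one-step gradient-sign attack: a small, directed
    perturbation of the input that aims to change the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step each input coordinate by eps in the direction that
    # increases the loss the most.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```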
Decentralized Stochastic Gradient Descent (SGD) is an emerging neural network training approach that enables multiple agents to train a model collaboratively and simultaneously. Rather than using a central parameter server to collect gradients from a…
External link:
http://arxiv.org/abs/2306.00256
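As a rough sketch of the serverless setup the abstract describes, one round of decentralized SGD can be written as a local gradient step at every agent followed by gossip averaging with neighbours through a doubly stochastic mixing matrix `W`; all names are illustrative, not the paper's code.

```python
import numpy as np

def decentralized_sgd_step(params, grads, W, lr=0.1):
    """One round of decentralized SGD without a parameter server.

    params: (n_agents, dim) per-agent model parameters
    grads:  (n_agents, dim) per-agent stochastic gradients
    W:      (n_agents, n_agents) doubly stochastic mixing matrix
            whose sparsity pattern encodes the communication graph
    """
    local = params - lr * grads  # each agent takes a local SGD step
    return W @ local             # then averages with its neighbours
```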
Stochastic Gradient Langevin Dynamics (SGLD) is widely used to approximate Bayesian posterior distributions in statistical learning procedures with large-scale data. As opposed to many standard Markov chain Monte Carlo (MCMC) algorithms, SGLD is…
External link:
http://arxiv.org/abs/2305.13882
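For orientation, the standard SGLD update adds Gaussian noise, scaled to the step size, to a minibatch gradient step on the log-posterior, so the iterates approximately sample the posterior rather than converge to a point estimate. A minimal sketch, where `grad_log_post` stands for any unbiased minibatch estimate:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step, rng=None):
    """One SGLD update: theta <- theta + (step/2) * grad + N(0, step).

    grad_log_post: unbiased (e.g. minibatch) estimate of the gradient
    of the log-posterior at theta.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=np.sqrt(step), size=theta.shape)
    return theta + 0.5 * step * grad_log_post + noise
```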
Author:
Zhang, Yi-Fan, Wang, Xue, Jin, Kexin, Yuan, Kun, Zhang, Zhang, Wang, Liang, Jin, Rong, Tan, Tieniu
Published in:
The Fortieth International Conference on Machine Learning, ICML, 2023
Much recent machine learning work focuses on developing models that can generalize to unseen distributions. Domain generalization (DG) has become one of the key topics in various fields. Several studies show that DG can be arbitrarily hard without ex…
External link:
http://arxiv.org/abs/2304.12566
Decentralized optimization is an emerging paradigm in distributed learning in which agents achieve network-wide solutions by peer-to-peer communication without a central server. Since communication tends to be slower than computation, when each age…
External link:
http://arxiv.org/abs/2210.07881
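Since the abstract notes that communication is slower than computation, a common remedy is to let each agent take several local gradient steps per communication round, in the spirit of local/periodic-averaging methods. The sketch below is an assumption about that setup, not the paper's algorithm; `grad_fn` and `W` are placeholders.

```python
import numpy as np

def local_updates_then_gossip(params, grad_fn, W, lr=0.1, local_steps=5):
    """Several local SGD steps per agent, then one gossip round.

    grad_fn(i, x) is a placeholder returning agent i's stochastic
    gradient at x; W is a doubly stochastic mixing matrix.
    Communication happens once per `local_steps` computations.
    """
    params = params.copy()
    for _ in range(local_steps):
        for i in range(len(params)):
            params[i] -= lr * grad_fn(i, params[i])  # local computation
    return W @ params  # single communication round
```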
The training of modern machine learning models often consists of solving high-dimensional, non-convex optimisation problems involving large-scale data. In this context, momentum-based stochastic optimisation algorithms have become particular…
External link:
http://arxiv.org/abs/2209.03705
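For reference, the generic heavy-ball form of the momentum update referenced here looks as follows; this is the textbook variant, not necessarily the one analysed in the paper.

```python
import numpy as np

def momentum_sgd_step(theta, velocity, grad, lr=0.01, beta=0.9):
    """One heavy-ball step: the velocity is an exponentially weighted
    running average of past stochastic gradients, which smooths the
    noisy updates of plain SGD on non-convex landscapes."""
    velocity = beta * velocity - lr * grad
    return theta + velocity, velocity
```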
Author:
Li, Xueai, Jin, Kexin, Qu, Xiangyan, Shi, Yuning, Wang, Chunsheng, Guo, Wanchun, Tian, Kesong, Wang, Yahui, Wang, Haiyan
Published in:
Chemical Engineering Journal 498 (15 October 2024)
Author:
Oriol, Albert, Hájek, Roman, Spicka, Ivan, Sandhu, Irwindeep, Cohen, Yael C., Gatt, Moshe E., Mariz, José, Cavo, Michele, Berdeja, Jesús, Jin, Kexin, Bar, Merav, Das, Prianka, Motte-Mohs, Ross La, Wang, Yu, Perumal, Deepak, Costa, Luciano J.
Published in:
Clinical Lymphoma, Myeloma and Leukemia 24(10):703-714, October 2024
Author:
Song, Wenqi, Zhao, Binqing, Liu, Di, Cherubini, Paolo, Li, Xingxing, Jin, Kexin, Mu, Changcheng, Wang, Xiaochun
Published in:
Ecological Indicators 166 (September 2024)
Published in:
Journal of Machine Learning Research 24(274), pp. 1-48, 2023
Optimization problems with continuous data appear in, e.g., robust machine learning, functional data analysis, and variational inference. Here, the target function is given as an integral over a family of (continuously) indexed target functions - int…
External link:
http://arxiv.org/abs/2112.03754
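One natural stochastic scheme for such a target, F(theta) = E_u[f(theta, u)] with a continuously distributed index u, is to sample an index each step and follow an unbiased gradient estimate, just as in the discrete-data case. This is a generic sketch under that assumption; `grad_f` and `sample_index` are placeholders, and the paper's actual algorithm may differ.

```python
import numpy as np

def continuous_data_sgd(theta, grad_f, sample_index, lr=0.01, n_iter=1000):
    """SGD for a target given as an integral over a continuously
    indexed family of target functions: each step draws a random
    index u and takes an unbiased gradient step for the integral."""
    for _ in range(n_iter):
        u = sample_index()                 # draw a continuous data index
        theta = theta - lr * grad_f(theta, u)
    return theta
```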