Showing 1 - 10 of 28 for search: '"Zhouyuan Huo"'
Published in:
Machine Learning.
Author:
Zhouyuan Huo, Dongseong Hwang, Khe Chai Sim, Shefali Garg, Ananya Misra, Nikhil Siddhartha, Trevor Strohman, Francoise Beaufays
Published in:
Interspeech 2022.
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:7883-7890
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications. Warmup is one of the nontrivial techniques used to stabilize the convergence of large-batch training. However, warmup is an empirical …
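For context, the warmup this abstract refers to is typically a schedule that ramps the learning rate up from a small value before the usual decay begins. Below is a minimal sketch of a linear warmup schedule; the `base_lr`, `warmup_steps`, and `total_steps` values are illustrative assumptions, not settings from the paper.

```python
def lr_with_linear_warmup(step, base_lr=0.1, warmup_steps=1000, total_steps=10000):
    """Linearly ramp the learning rate up to base_lr, then decay it linearly.

    Warmup stabilizes the early phase of large-batch training, when
    gradient statistics are still noisy. All values are illustrative.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps          # ramp-up phase
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * (1.0 - progress)                       # linear decay

# Learning rate at a few points in the schedule:
for s in (0, 500, 999, 5500, 9999):
    print(s, round(lr_with_linear_warmup(s), 5))
```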
Author:
Shefali Garg, Khe Chai Sim, Dongseong Hwang, Arun Narayanan, Nikhil Siddhartha, Ananya Misra, Zhouyuan Huo
Published in:
Interspeech 2021.
Published in:
AAAI
The proximal gradient method has played an important role in solving many machine learning tasks, especially nonsmooth problems. However, in some machine learning problems, such as the bandit model and the black-box learning problem, the proximal …
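As background for the snippet above, one proximal gradient step combines a gradient step on the smooth part of the objective with the proximal operator of the nonsmooth part. The sketch below runs ISTA on an L1-regularized least-squares toy problem; the problem sizes, `lam`, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, iters=200):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the L1 part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]                  # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(ista(A, b), 2))                 # recovers the sparse pattern
```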
Published in:
AAAI
Pairwise learning is an important topic in the machine learning community, where the loss function involves pairs of samples (e.g., AUC maximization and metric learning). Existing pairwise learning algorithms do not perform well in the gener…
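To make the pairwise structure concrete: each loss term couples a positive and a negative sample, and AUC is maximized when every positive outscores every negative. Below is a minimal sketch of a margin-based pairwise squared loss; the toy scores and the margin are illustrative assumptions.

```python
import numpy as np

def pairwise_auc_loss(scores_pos, scores_neg, margin=1.0):
    """Average squared hinge loss over all positive/negative score pairs."""
    # diffs[i, j] = score of positive i minus score of negative j
    diffs = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.maximum(margin - diffs, 0.0) ** 2)

rng = np.random.default_rng(0)
print(pairwise_auc_loss(rng.normal(1.0, 1.0, 5), rng.normal(0.0, 1.0, 7)))
```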
Published in:
IEEE Transactions on Neural Networks and Learning Systems. 33(11)
Privacy-preserving federated learning for vertically partitioned (VP) data has shown promising results as a solution for emerging multiparty joint modeling applications, in which the data holders (such as government branches, private finance, …
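In the vertically partitioned setting, parties hold disjoint feature columns of the same samples, so a linear model's logit decomposes into per-party partial scores. The sketch below shows only that decomposition, with split sizes as illustrative assumptions; actual VP federated learning protocols add encryption or other privacy machinery on top of this exchange.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
X_a = rng.standard_normal((n, 3))   # party A's private feature columns
X_b = rng.standard_normal((n, 2))   # party B's private feature columns
w_a = rng.standard_normal(3)        # each party holds its own weights
w_b = rng.standard_normal(2)

# Each party computes a partial score locally; only scores are exchanged,
# never the raw features.
logits = X_a @ w_a + X_b @ w_b
probs = 1.0 / (1.0 + np.exp(-logits))   # joint logistic prediction
print(np.round(probs, 3))
```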
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence. 44(7)
Kernel methods have achieved tremendous success over the past two decades. In the current big data era, data collection has grown tremendously. However, existing kernel methods are not scalable enough at both the training and prediction steps. To address …
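One generic route to the scalability this abstract targets (not necessarily the paper's own method) is to replace the kernel with an explicit low-dimensional feature map. Below is a minimal sketch of random Fourier features for the RBF kernel; the dimensions and `gamma` are illustrative assumptions.

```python
import numpy as np

def rff_map(X, num_features=200, gamma=1.0, seed=0):
    """Random Fourier features: z(x).z(y) approximates the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 10))
Z = rff_map(X)                                     # explicit features: linear-time prediction
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # gamma = 1.0
print(np.round(np.abs(Z @ Z.T - exact).max(), 3))  # worst-case approximation error
```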
Author:
Dongseong Hwang, Ananya Misra, Zhouyuan Huo, Nikhil Siddhartha, Shefali Garg, David Qiu, Khe Chai Sim, Trevor Strohman, Francoise Beaufays, Yanzhang He
Self- and semi-supervised learning methods have been actively investigated to reduce the amount of labeled training data or to enhance model performance. However, these approaches mostly focus on in-domain performance on public datasets. In this study, we utilize the … (see the pseudo-labeling sketch after the link below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4b718b7312c309fb9365fb75efcab212
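A common semi-supervised ingredient in this line of work is pseudo-labeling: a model trained on labeled data labels the unlabeled pool, and only confident predictions are kept for retraining. A minimal sketch follows; the confidence threshold and toy probabilities are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Keep unlabeled examples whose max class probability clears the
    threshold; return their indices and hard labels for retraining."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), probs[keep].argmax(axis=1)

# Toy class probabilities from a teacher model on three unlabeled utterances.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.08, 0.92]])
idx, labels = pseudo_label(probs)
print(idx, labels)   # -> [0 2] [0 1]: the uncertain middle example is dropped
```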
Author:
Lie He, Sebastian U. Stich, Mariana Raykova, Phillip B. Gibbons, Mehryar Mohri, David Evans, Badih Ghazi, Felix X. Yu, Sen Zhao, Jianyu Wang, Zheng Xu, Weikang Song, Prateek Mittal, Ramesh Raskar, Zachary Garrett, Farinaz Koushanfar, H. Brendan McMahan, Ayfer Ozgur, Mikhail Khodak, Rafael G. L. D'Oliveira, Jakub Konecní, Aurélien Bellet, Arjun Nitin Bhagoji, Hubert Eichner, Han Yu, Adrià Gascón, Ananda Theertha Suresh, Sanmi Koyejo, Praneeth Vepakomma, Josh Gardner, Chaoyang He, Florian Tramèr, Tancrède Lepoint, Salim El Rouayheb, Peter Kairouz, Li Xiong, Kallista Bonawitz, Rasmus Pagh, Tara Javidi, Mehdi Bennis, Dawn Song, Martin Jaggi, Zhouyuan Huo, Hang Qi, Gauri Joshi, Qiang Yang, Richard Nock, Yang Liu, Brendan Avent, Justin Hsu, Rachel Cummings, Graham Cormode, Marco Gruteser, Aleksandra Korolova, Ziteng Sun, Zaid Harchaoui, Ben Hutchinson, Zachary Charles, Daniel Ramage
Published in:
Foundations and Trends in Machine Learning, Now Publishers, 2021, 14 (1-2), pp. 1-210
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. … (see the FedAvg-style sketch after the links below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0b1ccc10027ba1ce68ce0210510e8bdc
https://inria.hal.science/hal-02406503v2/document
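The orchestration this abstract describes is concretely realized by algorithms such as Federated Averaging (FedAvg): the server broadcasts the model, each client takes local gradient steps on its own data, and the server averages the returned models. Below is a minimal sketch on a least-squares toy problem; the client data, step counts, and equal weighting are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One client's local SGD on least squares; its data never leaves it."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                        # four clients with private data
    X = rng.standard_normal((30, 2))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(30)))

w_global = np.zeros(2)
for _ in range(20):                       # server-orchestrated rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)   # FedAvg: average the client models
print(np.round(w_global, 2))              # close to w_true
```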