Showing 1 - 10 of 58 results for search: '"HE, YUANQIN"'
Federated Learning (FL) has emerged as a promising solution for collaborative training of large language models (LLMs). However, the integration of LLMs into FL introduces new challenges, particularly concerning the evaluation of LLMs. Traditional ev…
External link:
http://arxiv.org/abs/2404.12273
Author:
Song, Yuanfeng, He, Yuanqin, Zhao, Xuefang, Gu, Hanlin, Jiang, Di, Yang, Haijun, Fan, Lixin, Yang, Qiang
The springing up of Large Language Models (LLMs) has shifted the community from single-task-orientated natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm. Along this line of research endeavors in the area…
External link:
http://arxiv.org/abs/2310.18358
Author:
Kang, Yan, Gu, Hanlin, Tang, Xingxing, He, Yuanqin, Zhang, Yuzhu, He, Jinnan, Han, Yuxing, Fan, Lixin, Chen, Kai, Yang, Qiang
Conventionally, federated learning aims to optimize a single objective, typically the utility. However, for a federated learning system to be trustworthy, it needs to simultaneously satisfy multiple/many objectives, such as maximizing model performan…
External link:
http://arxiv.org/abs/2305.00312
Author:
Liu, Yang, Kang, Yan, Zou, Tianyuan, Pu, Yanhong, He, Yuanqin, Ye, Xiaozhou, Ouyang, Ye, Zhang, Ya-Qin, Yang, Qiang
Published in:
IEEE Transactions on Knowledge and Data Engineering 2024
Vertical Federated Learning (VFL) is a federated learning setting where multiple parties with different features about the same set of users jointly train machine learning models without exposing their raw data or model parameters. Motivated by the r…
External link:
http://arxiv.org/abs/2211.12814
Federated learning (FL) has emerged as a practical solution to tackle data silo issues without compromising user privacy. One of its variants, vertical federated learning (VFL), has recently gained increasing attention as the VFL matches the enterpri…
External link:
http://arxiv.org/abs/2209.03885
Vertical federated learning (VFL), a variant of Federated Learning (FL), has recently drawn increasing attention as the VFL matches the enterprises' demands of leveraging more valuable features to achieve better model performance. However, convention…
External link:
http://arxiv.org/abs/2208.08934
In a vertical federated learning (VFL) scenario where features and model are split into different parties, communications of sample-specific updates are required for correct gradient calculations but can be used to deduce important sample-level label…
External link:
http://arxiv.org/abs/2112.05409
Published in:
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022
Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data. Recent works demonstrate that information exchanged during FL is subject to gradient-based p…
External link:
http://arxiv.org/abs/2111.08211
Federated Learning (FL) provides both model performance and data privacy for machine learning tasks where samples or features are distributed among different parties. In the training process of FL, no party has a global view of data distributions or…
External link:
http://arxiv.org/abs/2101.11896
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.