Showing 1 - 10 of 174 for search: '"Li Zitao"'
Published in:
Nanotechnology Reviews, Vol 12, Iss 1, Pp 530-9 (2023)
External link:
https://doaj.org/article/0510db922d644290ac087e6fa6c77adf
Published in:
Hydrology Research, Vol 53, Iss 12, Pp 1480-1493 (2022)
We analyzed the characteristics of the main karst and non-karst reaches of the Lijiang River to uncover the causes behind their different flood behaviors and to better understand flood formation. Using 63 years of rainfall-runoff data and…
External link:
https://doaj.org/article/02b1a885ea0d40ccaef4c05620c2978b
Zero-shot reasoning methods with Large Language Models (LLMs) offer significant advantages, including strong generalization to novel tasks and reduced dependency on human-crafted examples. However, current zero-shot methods still have limitations in… (see the sketch below)
External link:
http://arxiv.org/abs/2410.19000
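As background for the entry above: a minimal sketch of zero-shot chain-of-thought prompting, the generic technique this line of work builds on. The call_llm helper is a hypothetical stand-in for any chat-completion client; nothing below reflects the paper's actual method.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    raise NotImplementedError("plug in a real LLM client here")

def zero_shot_answer(question: str) -> str:
    # No human-crafted demonstrations: a single instruction elicits
    # step-by-step reasoning, which is what makes the method zero-shot.
    prompt = f"Q: {question}\nA: Let's think step by step."
    rationale = call_llm(prompt)
    # A second call extracts a concise final answer from the rationale.
    return call_llm(f"{prompt}\n{rationale}\nTherefore, the answer is")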
Data synthesis is a promising way to share data for various downstream analytic tasks without exposing the raw data. However, without a theoretical privacy guarantee, a synthetic dataset can still leak sensitive information. Differential privacy… (see the sketch below)
External link:
http://arxiv.org/abs/2406.19008
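For the entry above: a minimal sketch of the general idea behind differentially private data synthesis, using a Laplace-noised histogram over one categorical attribute. The epsilon calibration assumes add/remove neighboring datasets; this illustrates the DP-synthesis principle, not the paper's algorithm.

import numpy as np

def dp_synthesize(data, num_categories, epsilon, n_synthetic, rng):
    # Count each category; adding or removing one record changes one
    # count by 1, so the L1 sensitivity of the histogram is 1.
    counts = np.bincount(data, minlength=num_categories).astype(float)
    # Laplace noise with scale sensitivity/epsilon gives epsilon-DP counts.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=num_categories)
    probs = np.clip(noisy, 0.0, None)
    probs /= probs.sum()
    # Sample synthetic records from the privatized distribution only.
    return rng.choice(num_categories, size=n_synthetic, p=probs)

rng = np.random.default_rng(0)
raw = rng.integers(0, 5, size=1000)                  # toy sensitive data
synthetic = dp_synthesize(raw, 5, epsilon=1.0, n_synthetic=1000, rng=rng)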
Large language models (LLMs) show impressive performance on many domain-specific tasks after fine-tuning on appropriate data. However, much domain-specific data is privately distributed across multiple owners, and this dilemma raises the… (see the sketch below)
External link:
http://arxiv.org/abs/2406.17706
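For the entry above: a minimal sketch of federated averaging (FedAvg), the standard aggregation that lets multiple data owners jointly fine-tune a model by exchanging parameters instead of raw data. A generic illustration, not the protocol proposed in the paper.

import torch

def fedavg(client_states, client_sizes):
    # Weighted average of the clients' parameter dicts, weighted by
    # local dataset size; only parameters leave each owner's site.
    total = sum(client_sizes)
    return {
        name: sum((n / total) * state[name]
                  for n, state in zip(client_sizes, client_states))
        for name in client_states[0]
    }

# Toy usage: two owners, one shared tensor; the result is 0.75 * ones.
clients = [{"w": torch.ones(2, 2)}, {"w": torch.zeros(2, 2)}]
global_state = fedavg(clients, client_sizes=[3, 1])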
Recent studies reveal that local differential privacy (LDP) protocols are vulnerable to data poisoning attacks, in which an attacker can manipulate the server's final estimate by exploiting the characteristics of LDP and sending carefully crafted data… (see the sketch below)
External link:
http://arxiv.org/abs/2403.19510
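For the entry above: a minimal sketch of why LDP frequency estimation invites data poisoning. Honest users randomize with generalized randomized response (GRR); fake users skip the randomizer and always report the attacker's target, and the server's unbiased estimator amplifies each such report by roughly 1/(p-q). This shows the generic attack surface, not the specific attacks or defenses studied in the paper.

import numpy as np

def grr(value, k, epsilon, rng):
    # Report the true value with probability p, else a uniform other value.
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    other = rng.integers(0, k - 1)
    return other if other < value else other + 1

def estimate(reports, k, epsilon):
    # Unbiased frequency estimator -- assuming every report was randomized.
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = np.bincount(reports, minlength=k)
    return (counts - n * q) / (p - q)

rng = np.random.default_rng(0)
k, eps, target = 10, 1.0, 3
honest = np.array([grr(v, k, eps, rng) for v in rng.integers(0, k, 10_000)])
fake = np.full(500, target)            # crafted, un-randomized reports
est = estimate(np.concatenate([honest, fake]), k, eps)
# est[target] now far exceeds the fake users' true 500-report share.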
Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods for pre-trained language models, owing to its good performance and computational efficiency. LoRA injects a product of two trainable rank-decomposition… (see the sketch below)
External link:
http://arxiv.org/abs/2403.12313
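For the entry above: a minimal sketch of the LoRA construction the abstract describes, with a frozen pre-trained weight and a trainable low-rank update B @ A scaled by alpha/r. Names and initializations follow common LoRA usage; this is not the variant proposed in the paper.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False       # pre-trained weights stay frozen
        # B starts at zero so the injected update is initially a no-op.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B (A x); only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))          # shape (2, 768)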
Vertical Federated Learning (VFL) has emerged as a popular machine learning paradigm, enabling model training across data parties and a task party that hold different features of the same set of users while preserving data privacy. In production environments… (see the sketch below)
External link:
http://arxiv.org/abs/2402.15247
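For the entry above: a minimal sketch of the vertical FL setup the abstract describes. Each feature party computes a local embedding from its own feature slice; only embeddings, never raw features, reach the task party that holds the labels and the top model. Real VFL systems add secure aggregation or encryption, omitted here.

import torch
import torch.nn as nn

class FeatureParty(nn.Module):
    def __init__(self, in_dim: int, emb_dim: int = 8):
        super().__init__()
        self.bottom = nn.Linear(in_dim, emb_dim)

    def forward(self, x):                 # runs locally at the data party
        return self.bottom(x)

class TaskParty(nn.Module):
    def __init__(self, emb_dim: int, n_parties: int):
        super().__init__()
        self.top = nn.Linear(emb_dim * n_parties, 1)

    def forward(self, embeddings):        # sees embeddings, never raw data
        return self.top(torch.cat(embeddings, dim=-1))

parties = [FeatureParty(4), FeatureParty(6)]   # two feature splits
task = TaskParty(emb_dim=8, n_parties=2)
xs = [torch.randn(32, 4), torch.randn(32, 6)]  # same 32 users, split features
logits = task([p(x) for p, x in zip(parties, xs)])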
Author:
Gao, Dawei; Li, Zitao; Pan, Xuchen; Kuang, Weirui; Ma, Zhijian; Qian, Bingchen; Wei, Fei; Zhang, Wenhao; Xie, Yuexiang; Chen, Daoyuan; Yao, Liuyi; Peng, Hongyi; Zhang, Zeyu; Zhu, Lin; Cheng, Chen; Shi, Hongzhu; Li, Yaliang; Ding, Bolin; Zhou, Jingren
With the rapid advancement of Large Language Models (LLMs), significant progress has been made in multi-agent applications. However, the complexities of coordinating agents' cooperation and LLMs' erratic performance pose notable challenges in… (see the sketch below)
External link:
http://arxiv.org/abs/2402.14034
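For the entry above: a minimal sketch of the message-passing pattern that multi-agent LLM platforms coordinate at scale. The Agent class and the call_llm stub are hypothetical illustrations, not the platform's actual API.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    raise NotImplementedError("plug in a real LLM client here")

class Agent:
    def __init__(self, name: str, system_prompt: str):
        self.name = name
        self.system_prompt = system_prompt
        self.history: list[str] = []

    def reply(self, message: str) -> str:
        # Agents share no state; cooperation happens only through the
        # messages exchanged between them.
        self.history.append(f"user: {message}")
        out = call_llm(self.system_prompt + "\n" + "\n".join(self.history))
        self.history.append(f"{self.name}: {out}")
        return out

writer = Agent("writer", "Draft an answer to the task.")
critic = Agent("critic", "Point out flaws in the draft.")
# Example round trip (requires a real call_llm):
#   draft = writer.reply("Explain federated learning in two sentences.")
#   feedback = critic.reply(draft)
#   revised = writer.reply(f"Revise using this feedback: {feedback}")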
Author:
Kuang, Weirui; Qian, Bingchen; Li, Zitao; Chen, Daoyuan; Gao, Dawei; Pan, Xuchen; Xie, Yuexiang; Li, Yaliang; Ding, Bolin; Zhou, Jingren
LLMs have demonstrated great capabilities in various NLP tasks. Different entities can further improve an LLM's performance on their specific downstream tasks by fine-tuning it. When several entities have similar tasks of interest but their…
External link:
http://arxiv.org/abs/2309.00363