Showing 1 - 10 of 2,805 for search: '"Zhu,Xiaoyan"'
Author:
Zhu, Xiaoyan, Li, Hui, Meng, Jing, Feng, Xinwei, Zhen, Zhixuan, Lin, Haoyu, Yu, Bocheng, Cheng, Wenjuan, Jiang, Dongmei, Xu, Yang, Shang, Tian, Zhan, Qingfeng
Published in:
Phys. Rev. B 108, 144437 (2023)
A series of Fe$_x$Rh$_{100-x}$ ($30 \leq x \leq 57$) films were epitaxially grown using magnetron sputtering and systematically studied by magnetization, electrical resistivity, and Hall resistivity measurements. After optimizing the growth c…
External link:
http://arxiv.org/abs/2310.07140
Existing evaluation metrics for natural language generation (NLG) tasks face challenges in generalization ability and interpretability. Specifically, most well-performing metrics must be trained on evaluation datasets of specific NLG…
External link:
http://arxiv.org/abs/2307.06869
Author:
Niu, Jun, Zhu, Xiaoyan, Zeng, Moxuan, Zhang, Ge, Zhao, Qingyang, Huang, Chunhui, Zhang, Yangming, An, Suyu, Wang, Yangzhong, Yue, Xinghui, He, Zhipeng, Guo, Weihao, Shen, Kuo, Liu, Peng, Shen, Yulong, Jiang, Xiaohong, Ma, Jianfeng, Zhang, Yuqing
Membership inference (MI) attacks threaten user privacy by determining whether a given data example has been used to train a target model. However, it has been increasingly recognized that the "comparing different MI attacks" methodology used in the…
External link:
http://arxiv.org/abs/2307.06123
Author:
Zhao, Enze, Zhu, Xiaoyan, Tang, Haiwei, Luo, Zhenyu, Zeng, Weinan, Zhou, Zongke (zhouzongke@scu.edu.cn)
Published in:
Orthopaedic Surgery, Nov 2024, Vol. 16, Issue 11, pp. 2671-2679.
Author:
Zhu, Qi, Mi, Fei, Zhang, Zheng, Wang, Yasheng, Li, Yitong, Jiang, Xin, Liu, Qun, Zhu, Xiaoyan, Huang, Minlie
Incorporating external knowledge into the response generation process is essential to building more helpful and reliable dialog agents. However, collecting knowledge-grounded conversations is often costly, calling for a better pre-trained model for g…
External link:
http://arxiv.org/abs/2212.01739
Published in:
Open Life Sciences, Vol 19, Iss 1, pp. 429-436 (2024)
External link:
https://doaj.org/article/04e353dc597642118f61ec0f7efdf72d
Training language models to learn from human instructions for zero-shot cross-task generalization has attracted much attention in the NLP community. Recently, instruction tuning (IT), which fine-tunes a pre-trained language model on a massive collectio…
External link:
http://arxiv.org/abs/2210.09175
Published in:
In Expert Systems With Applications, Vol. 259, 1 January 2025
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, generation performance is largely restricted by the amount of labeled data in downstream tasks, particularly in data-to-text generation tas…
External link:
http://arxiv.org/abs/2206.02712