Showing 1 - 10 of 25 for search: '"Yuxian Gu"'
Author:
Zeng Zhou, Yizhang Wei, Liang Geng, Ying Zhang, Yuxian Gu, Alvise Finotello, Andrea D’Alpaos, Zheng Gong, Fan Xu, Changkuan Zhang, Giovanni Coco
Published in:
Nature Communications, Vol 15, Iss 1, Pp 1-8 (2024)
Abstract Parallel tidal channel systems, characterized by commonly cross-shore orientation and regular spacing, represent a distinct class of tidal channel networks in coastal environments worldwide. Intriguingly, these cross-shore oriented channels
External link:
https://doaj.org/article/f2f9630fbbb84624ac99a20666532c76
Author:
Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan Yao, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun
Published in:
AI Open, Vol 2, Iss , Pp 216-224 (2021)
In recent years, the size of pre-trained language models (PLMs) has grown by leaps and bounds. However, efficiency issues of these large-scale PLMs limit their utilization in real-world scenarios. We present a suite of cost-effective techniques for t
External link:
https://doaj.org/article/7ace9a96658e45949ba2ff0e0ae3cff4
Author:
Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems. However, previous works mainly focus on showing and evaluating the conversational performance of the released dialogue model, ignoring the discussion
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9bbd0d7204f2eb0d68768d30da66250f
http://arxiv.org/abs/2203.09313
Author:
Mi Bai, Shuang Xu, Mingzhu Jiang, Yuxian Guo, Dandan Hu, Jia He, Ting Wang, Yu Zhang, Yan Guo, Yue Zhang, Songming Huang, Zhanjun Jia, Aihua Zhang
Published in:
Advanced Science, Vol 11, Iss 39, Pp n/a-n/a (2024)
Abstract Renal fibrosis is a common pathological feature of chronic kidney disease (CKD) with the proliferation and activation of myofibroblasts being definite effectors and drivers. Here, increased expression of Meis1 (myeloid ecotropic viral integr
External link:
https://doaj.org/article/8392b9d1bd0448be8326dc60e440a021
Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks. Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ac8af002f046c4fb9e42921569c9ba12
http://arxiv.org/abs/2109.04332
Author:
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI). Owing to sophisticated pre-training objectives and huge model parameters, large-scale
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6e2437714e4d957ab17f8de129efad27
http://arxiv.org/abs/2106.07139
Published in:
Proceedings of the Second Workshop on Insights from Negative Results in NLP.
Author:
Maosong Sun, Yujia Qin, Jie Tang, Xiaozhi Wang, Yanan Zheng, Zhenbo Sun, Yuxian Gu, Juanzi Li, Zhengyan Zhang, Jian Guan, Minlie Huang, Shengqi Chen, Yusheng Su, Wentao Han, Pei Ke, Haozhe Ji, Zhiyuan Liu, Guoyang Zeng, Huanqi Cao, Xiaoyan Zhu, Deming Ye, Fanchao Qi, Daixuan Li, Xu Han, Hao Zhou
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570 GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::009f0a80f435f5f401187855ac9e2e5b
http://arxiv.org/abs/2012.00413
Published in:
EMNLP (1)
Scopus-Elsevier
Recently, pre-trained language models mostly follow the pre-train-then-fine-tuning paradigm and have achieved great performance on various downstream tasks. However, since the pre-training stage is typically task-agnostic and the fine-tuning stage us
Published in:
Zhong nan da xue xue bao. Yi xue ban = Journal of Central South University. Medical sciences. 44(1)
Gastric neuroendocrine tumors are rarely seen among gastric tumors; because there are few case reports, the clinical diagnosis rate is low, and there is no internationally accepted consensus on treatment. However, with the benefit of esophagogastroduodenoscopy