Showing 1 - 10 of 815 results for the search: '"Xu Xiaohan"'
Published in:
Xiehe Yixue Zazhi, Vol 15, Iss 4, Pp 839-844 (2024)
Objective: To analyze the correlation between preoperative hypoproteinemia and the risk of perioperative allogeneic erythrocyte transfusion in ovarian cancer patients undergoing cytoreductive surgery. Methods: Ovarian cancer patients who underwent cytored…
External link:
https://doaj.org/article/93a527d1baa4475d8b143a0623befbd7
Published in:
Xiehe Yixue Zazhi, Vol 14, Iss 4, Pp 820-825 (2023)
Objective: To analyze the current status and occupational factors associated with breastfeeding continuation after maternity leave among female anesthesia residents in China. Methods: Our study was based on a nationwide survey 'Breastfeeding and Work/F…
External link:
https://doaj.org/article/c0d45880436d447c8efb990114a08fdf
Author:
Wang, Longzheng, Xu, Xiaohan, Zhang, Lei, Lu, Jiarui, Xu, Yongxiu, Xu, Hongbo, Tang, Minghao, Zhang, Chuang
Automatic detection of multimodal misinformation has gained widespread attention recently. However, the potential of powerful Large Language Models (LLMs) for multimodal misinformation detection remains underexplored. Besides, how to teach LLMs to…
External link:
http://arxiv.org/abs/2403.14171
Author:
Xu, Xiaohan, Li, Ming, Tao, Chongyang, Shen, Tao, Cheng, Reynold, Li, Jinyang, Xu, Can, Tao, Dacheng, Zhou, Tianyi
In the era of Large Language Models (LLMs), Knowledge Distillation (KD) emerges as a pivotal methodology for transferring advanced capabilities from leading proprietary LLMs, such as GPT-4, to their open-source counterparts like LLaMA and Mistral. Ad…
External link:
http://arxiv.org/abs/2402.13116
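The abstract above surveys knowledge distillation for LLMs. As an illustrative sketch only (not the surveyed paper's method), the classic soft-label distillation objective can be written as a temperature-scaled KL divergence between teacher and student output distributions:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last (vocabulary) axis.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Hypothetical illustration of soft-label KD; the survey covers many
    more KD variants for LLMs than this single objective.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # T^2 rescales the loss so its gradient magnitude matches the
    # hard-label loss at T=1 (standard KD convention).
    return float(T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s))))

# A student that matches the teacher exactly incurs (near-)zero loss.
logits = np.array([2.0, 0.5, -1.0])
assert abs(distillation_loss(logits, logits)) < 1e-9
```

The temperature `T` and its squared rescaling are the usual soft-label KD conventions; both are assumptions here, not details taken from the abstract.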
Author:
Li, Zhen, Xu, Xiaohan, Shen, Tao, Xu, Can, Gu, Jia-Chen, Lai, Yuxuan, Tao, Chongyang, Ma, Shuai
In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance. This paper ai…
External link:
http://arxiv.org/abs/2401.07103
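The record above surveys LLM-based NLG evaluation. A minimal sketch of the "LLM-as-a-judge" prompting pattern that family of evaluators relies on might look as follows; the rubric wording and 1-5 scale are assumptions for illustration, not the paper's protocol:

```python
def judge_prompt(source: str, output: str, dimension: str = "coherence") -> str:
    """Build a minimal LLM-as-a-judge prompt for one quality dimension.

    A generic sketch of prompting-based NLG evaluation; the template,
    dimension names, and score scale are hypothetical.
    """
    return (
        f"You are evaluating the {dimension} of a generated text.\n"
        f"Source:\n{source}\n\n"
        f"Generated text:\n{output}\n\n"
        f"Rate the {dimension} on a scale of 1 (poor) to 5 (excellent) "
        "and briefly justify your rating.\n"
        "Score:"
    )
```

The returned string would be sent to an evaluator LLM, whose numeric answer is parsed as the score; how scores are aggregated across dimensions varies across the approaches such a survey compares.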
Author:
Xu, Xiaohan, Tao, Chongyang, Shen, Tao, Xu, Can, Xu, Hongbo, Long, Guodong, Lou, Jian-guang, Ma, Shuai
To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, Re2, i.e., Re-Reading the question as input. Unlike most thought-eliciting promp…
External link:
http://arxiv.org/abs/2309.06275
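Since Re2 amounts to repeating the question in the input before eliciting an answer, the idea fits in a few lines. The exact template below is an assumption; the core of the method, per the abstract, is simply the second reading of the question:

```python
def re2_prompt(question: str) -> str:
    """Re2-style prompt: state the question, then re-read it before answering.

    The surrounding wording ("Read the question again:") is a hypothetical
    template; the paper's contribution is the re-reading step itself.
    """
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A:"
    )

prompt = re2_prompt("If a train travels 60 km in 1.5 hours, what is its average speed?")
```

The resulting string is passed to the LLM as-is, which is why the method works with off-the-shelf models and composes with other prompting strategies such as chain-of-thought.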
Automatic detection of multimodal fake news has gained widespread attention recently. Many existing approaches seek to fuse unimodal features to produce multimodal news representations. However, the potential of powerful cross-modal contrastive lea…
External link:
http://arxiv.org/abs/2302.14057
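The cross-modal contrastive learning the abstract refers to is typically instantiated as a symmetric InfoNCE objective over paired image/text embeddings (as in CLIP-style training). The sketch below illustrates that generic family of losses, not this paper's exact formulation:

```python
import numpy as np

def info_nce(image_emb, text_emb, tau=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    A generic cross-modal contrastive loss sketch; the temperature tau
    and symmetric averaging are common conventions, assumed here.
    """
    # L2-normalize each row, then take all-pairs cosine similarities.
    a = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    b = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ b.T / tau
    n = logits.shape[0]
    # Matched pairs sit on the diagonal; score them against all negatives
    # in both directions (image-to-text and text-to-image).
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2i = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(n)
    return float(-(log_p_i2t[diag, diag].mean() + log_p_t2i[diag, diag].mean()) / 2)
```

Minimizing this pulls each image embedding toward its paired text embedding and pushes it away from the other texts in the batch, which is the alignment signal a contrastive fake-news detector would exploit.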
Author:
Xu, Xiaohan (xuxiaohan@qdio.ac.cn); Wang, Yi; Deng, Zhuo; Wang, Jin; Wei, Xile; Wang, Peng; Zhang, Dun
Published in:
Catalysts (2073-4344). Sep2024, Vol. 14 Issue 9, p645. 14p.
The emotional support conversation (ESC) task uses various support strategies to help people relieve emotional distress and overcome the problems they face, and it has attracted much attention in recent years. However, most state-of-the-art works rel…
External link:
http://arxiv.org/abs/2210.12640
Inductive link prediction for knowledge graphs aims at predicting missing links between unseen entities, i.e., entities not present during training. Most previous works learn entity-specific embeddings, which cannot handle unseen entities. Recent s…
External link:
http://arxiv.org/abs/2208.00850
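The limitation the abstract names (entity-specific embeddings cannot score unseen entities) is easy to make concrete. The toy contrast below uses entirely hypothetical entities and relations: a lookup-table scorer fails on a new entity, while a feature computed from the entity's surrounding relation types remains well-defined, which is the general direction inductive methods take:

```python
# Hypothetical toy data; not the paper's model or dataset.
TRAIN_EMB = {"paris": (0.9, 0.1), "france": (0.8, 0.2)}  # embeddings learned in training

def score_transductive(head: str, tail: str) -> float:
    # Entity-specific embeddings: raises KeyError for any unseen entity.
    h, t = TRAIN_EMB[head], TRAIN_EMB[tail]
    return h[0] * t[0] + h[1] * t[1]

RELATIONS = ["capital_of", "born_in"]

def relation_profile(entity: str, triples) -> list:
    # Inductive feature: counts of relation types the entity participates in,
    # computable for any entity that has edges, seen in training or not.
    feats = [0.0] * len(RELATIONS)
    for h, r, t in triples:
        if entity in (h, t):
            feats[RELATIONS.index(r)] += 1.0
    return feats

new_triples = [("tokyo", "capital_of", "japan")]  # entities unseen in training
try:
    score_transductive("tokyo", "japan")
except KeyError:
    pass  # the lookup-based scorer cannot handle "tokyo"
assert relation_profile("tokyo", new_triples) == [1.0, 0.0]
```

Real inductive methods build far richer entity-independent representations (e.g., from local subgraphs or relation paths); this sketch only shows why dropping the per-entity lookup is the necessary first step.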