Showing 1 - 10 of 101 for search: '"Wu, Haoyuan"'
Retrieval augmented generation (RAG) enhances the accuracy and reliability of generative AI models by sourcing factual information from external databases, and is extensively employed in document-grounded question-answering (QA) tasks. Off-the-shelf …
External link:
http://arxiv.org/abs/2407.15353
Author:
Cao, Ruisheng, Lei, Fangyu, Wu, Haoyuan, Chen, Jixuan, Fu, Yeqiao, Gao, Hongcheng, Xiong, Xinzhuang, Zhang, Hanchong, Mao, Yuchen, Hu, Wenjing, Xie, Tianbao, Xu, Hongshen, Zhang, Danyang, Wang, Sida, Sun, Ruoxi, Yin, Pengcheng, Xiong, Caiming, Ni, Ansong, Liu, Qian, Zhong, Victor, Chen, Lu, Yu, Kai, Yu, Tao
Data science and engineering workflows often span multiple stages, from warehousing to orchestration, using tools like BigQuery, dbt, and Airbyte. As vision language models (VLMs) advance in multimodal understanding and code generation, VLM-based agents …
External link:
http://arxiv.org/abs/2407.10956
Author:
Su, Hongjin, Jiang, Shuyang, Lai, Yuhang, Wu, Haoyuan, Shi, Boao, Liu, Che, Liu, Qian, Yu, Tao
Recently, the retrieval-augmented generation (RAG) paradigm has attracted much attention for its potential to incorporate external knowledge into large language models (LLMs) without further training. While widely explored in natural language applications …
External link:
http://arxiv.org/abs/2402.12317
Large language models (LLMs) have demonstrated considerable proficiency in general natural language processing (NLP) tasks. Instruction tuning, a successful paradigm, enhances the ability of LLMs to follow natural language instructions and exhibit robust …
External link:
http://arxiv.org/abs/2401.02731
Vision-language models (VLMs) pre-trained on large corpora have demonstrated notable success across a range of downstream tasks. In light of the rapidly increasing size of pre-trained VLMs, parameter-efficient transfer learning (PETL) has garnered attention …
External link:
http://arxiv.org/abs/2312.10613
The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers. Recent advancements in large language models (LLMs) have showcased their exceptional capabilities in …
External link:
http://arxiv.org/abs/2308.10204
Author:
Jing, Bo, Wu, Haoyuan, Liu, Zhenhua, Shang, Hongmei, Li, Yuting, Jin, Zhouyu (jinzhouyund@163.com), Song, Hui (songhuisk@jlau.edu.cn)
Published in:
Journal of Applied Animal Research. Dec 2024, p1-13. 13p. 3 Illustrations, 8 Charts.
Author:
Yang, Zhihao, Wu, HaoYuan, Wang, ZhiWei, Bian, ErBao (aydbeb@126.com), Zhao, Bing (aydzhb@126.com)
Published in:
Cancer Cell International. 6/29/2024, Vol. 24 Issue 1, p1-16. 16p.
Author:
Wu, HaoYuan, Yang, ZhiHao, Chang, ChenXi, Wang, ZhiWei, Zhang, DeRan, Guo, QingGuo, Zhao, Bing (aydzhb@126.com)
Published in:
Cancer Cell International. 5/11/2024, Vol. 24 Issue 1, p1-15. 15p.
Author:
Wu, Haoyuan, Bai, Xiaolei, Li, Lei, Li, Zhaoxin, Wang, Mengyu, Zhang, Zhongguo, Zhu, Cheng, Xu, Yuanmin, Xiong, Huiqin, Xie, Xin, Tian, Xiujun, Li, Jiuyi
Published in:
In Environmental Research, 1 December 2024, Vol. 262 Part 2