Showing 1 - 10
of 206
for search: '"Wang, Tiannan"'
Author:
Zhou, Wangchunshu, Ou, Yixin, Ding, Shengwei, Li, Long, Wu, Jialong, Wang, Tiannan, Chen, Jiamin, Wang, Shuai, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language model (LLM) pipelines involving both prompting techniques and tool usage methods. While language…
External link:
http://arxiv.org/abs/2406.18532
Author:
Wang, Tiannan, Chen, Jiamin, Jia, Qingrui, Wang, Shuai, Fang, Ruoyu, Wang, Huilin, Gao, Zhaowei, Xie, Chunzhao, Xu, Chuou, Dai, Jihong, Liu, Yibin, Wu, Jialong, Ding, Shengwei, Li, Long, Huang, Zhiwei, Deng, Xinle, Yu, Teng, Ma, Gangan, Xiao, Han, Chen, Zixin, Xiang, Danjun, Wang, Yunxia, Zhu, Yuanyuan, Xiao, Yi, Wang, Jing, Wang, Yiru, Ding, Siran, Huang, Jiayang, Xu, Jiayi, Tayier, Yilihamu, Hu, Zhenyu, Gao, Yuan, Zheng, Chengfeng, Ye, Yueshu, Li, Yihang, Wan, Lei, Jiang, Xinyue, Wang, Yujie, Cheng, Siyu, Song, Zhule, Tang, Xiangru, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor, Zhou, Wangchunshu
This work introduces Weaver, our first family of large language models (LLMs) dedicated to content creation. Weaver is pre-trained on a carefully selected corpus that focuses on improving the writing capabilities of large language models. We then fin…
External link:
http://arxiv.org/abs/2401.17268
Author:
Zhou, Wangchunshu, Jiang, Yuchen Eleanor, Li, Long, Wu, Jialong, Wang, Tiannan, Qiu, Shi, Zhang, Jintian, Chen, Jing, Wu, Ruipu, Wang, Shuai, Zhu, Shiding, Chen, Jiyu, Zhang, Wentao, Tang, Xiangru, Zhang, Ningyu, Chen, Huajun, Cui, Peng, Sachan, Mrinmaya
Recent advances in large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and interact with environments, humans, and other agents using natural language interfaces…
External link:
http://arxiv.org/abs/2309.07870
Author:
Zhou, Wangchunshu, Jiang, Yuchen Eleanor, Cui, Peng, Wang, Tiannan, Xiao, Zhenxin, Hou, Yifan, Cotterell, Ryan, Sachan, Mrinmaya
The fixed-size context of the Transformer makes GPT models incapable of generating arbitrarily long text. In this paper, we introduce RecurrentGPT, a language-based simulacrum of the recurrence mechanism in RNNs. RecurrentGPT is built upon a large language…
External link:
http://arxiv.org/abs/2305.13304
Author:
Chen, Zhihong, Jiang, Feng, Chen, Junying, Wang, Tiannan, Yu, Fei, Chen, Guiming, Zhang, Hongbo, Liang, Juhao, Zhang, Chen, Zhang, Zhiyi, Li, Jianquan, Wan, Xiang, Wang, Benyou, Li, Haizhou
This paper presents our efforts to democratize ChatGPT across languages. We release a large language model, "Phoenix", achieving competitive performance among open-source English and Chinese models while excelling in languages with limited resources (c…
External link:
http://arxiv.org/abs/2304.10453
Pre-trained vision-language models (VLMs) have achieved impressive results in a range of vision-language tasks. However, popular VLMs usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and deployment in real…
External link:
http://arxiv.org/abs/2210.07795
Author:
Wang, Fang, Li, Wenhui, Gao, Yamiao, Zhu, Lizhen, Chen, Haonan, Yang, Liu, Weil, Ray R., Wang, Tiannan, Nan, Xiongxiong
Published in:
In Agriculture, Ecosystems and Environment 1 September 2024 371
Published in:
In Human Pathology Reports June 2024 36
Author:
Zhong, Fangfang, Wang, Tiannan, Li, Wenzhi, Zhang, Huina, Zeng, Xianxu, Geisler, Daniel, Zhou, Xianrong, Cong, Qing, Sui, Long, Tao, Xiang, Zhao, Chengquan
Published in:
In Laboratory Investigation April 2024 104(4)
Author:
Tang, Xiao, Zhang, Huina, Wang, Tiannan, Jiang, Wei, Jones, Terri E., He, Yanmei, Li, Lei, Tong, Lingling, Wang, Cheng, Wang, Wei, Yang, Kaixuan, Yin, Rutie, Zhao, Chengquan
Published in:
In Laboratory Investigation November 2023 103(11)