Showing 1 - 10 of 482 results for search: '"Zhang Qinglong"'
Published in:
Journal of Laboratory Medicine, Vol 47, Iss 5, Pp 187-197 (2023)
The present study evaluated the diagnostic accuracy of different types of PCR tests to determine which performs best for detecting Helicobacter pylori in stool samples. Related articles were searched from PubMed, Embase, Web …
External link:
https://doaj.org/article/f2bc20dbb8b14ec4ad4de595ecbf19a8
Author:
Huang, Ziyuan, Ji, Kaixiang, Gong, Biao, Qing, Zhiwu, Zhang, Qinglong, Zheng, Kecheng, Wang, Jian, Chen, Jingdong, Yang, Ming
This paper introduces Chain-of-Sight, a vision-language bridge module that accelerates the pre-training of Multimodal Large Language Models (MLLMs). Our approach employs a sequence of visual resamplers that capture visual details at various spatial s…
External link:
http://arxiv.org/abs/2407.15819
Author:
Mu, Yao, Chen, Junting, Zhang, Qinglong, Chen, Shoufa, Yu, Qiaojun, Ge, Chongjian, Chen, Runjian, Liang, Zhixuan, Hu, Mengkang, Tao, Chaofan, Sun, Peize, Yu, Haibao, Yang, Chao, Shao, Wenqi, Wang, Wenhai, Dai, Jifeng, Qiao, Yu, Ding, Mingyu, Luo, Ping
Robotic behavior synthesis, the problem of understanding multimodal inputs and generating precise physical control for robots, is an important part of Embodied AI. Despite successes in applying multimodal large language models for high-level understa…
External link:
http://arxiv.org/abs/2402.16117
Author:
Chen, Zhe, Wu, Jiannan, Wang, Wenhai, Su, Weijie, Chen, Guo, Xing, Sen, Zhong, Muyan, Zhang, Qinglong, Zhu, Xizhou, Lu, Lewei, Li, Bin, Luo, Ping, Lu, Tong, Qiao, Yu, Dai, Jifeng
The exponential growth of large language models (LLMs) has opened up numerous possibilities for multimodal AGI systems. However, the progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has no…
External link:
http://arxiv.org/abs/2312.14238
Author:
Mu, Yao, Zhang, Qinglong, Hu, Mengkang, Wang, Wenhai, Ding, Mingyu, Jin, Jun, Wang, Bin, Dai, Jifeng, Qiao, Yu, Luo, Ping
Embodied AI is a crucial frontier in robotics, capable of planning and executing action sequences for robots to accomplish long-horizon tasks in physical environments. In this work, we introduce EmbodiedGPT, an end-to-end multi-modal foundation model…
External link:
http://arxiv.org/abs/2305.15021
Author:
Liu, Zhaoyang, He, Yinan, Wang, Wenhai, Wang, Weiyun, Wang, Yi, Chen, Shoufa, Zhang, Qinglong, Lai, Zeqiang, Yang, Yang, Li, Qingyun, Yu, Jiashuo, Li, Kunchang, Chen, Zhe, Yang, Xue, Zhu, Xizhou, Wang, Yali, Wang, Limin, Luo, Ping, Dai, Jifeng, Qiao, Yu
We present an interactive visual framework named InternGPT, or iGPT for short. The framework integrates chatbots that have planning and reasoning capabilities, such as ChatGPT, with non-verbal instructions like pointing movements that enable users to…
External link:
http://arxiv.org/abs/2305.05662
Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the ch…
External link:
http://arxiv.org/abs/2212.01738
Author:
Zhang, Qinglong (zhangqinglong@mail.dlut.edu.cn), Yan, Yingying (yanyingying@mail.dlut.edu.cn), Cai, Rui (cairui@dlut.edu.cn), Li, Xiao-Na (klieee@dlut.edu.cn), Liu, Chun (klieee@dlut.edu.cn)
Published in:
Materials (1996-1944), Sep 2024, Vol. 17, Issue 17, p. 4366, 13 p.
Author:
Yu, Yue, Liu, Zhihua, Wang, Wenjuan, Xu, Wenru, Lv, Qiushuang, Li, Kaili, Guo, Wenhua, Fang, Lei, Zhang, Qinglong, Wu, Zhiwei, Liu, Bo
Published in:
In Ecological Indicators, November 2024, Vol. 168
Published in:
In Tunnelling and Underground Space Technology incorporating Trenchless Technology Research, October 2024, Vol. 152