Showing 1 - 10 of 825 for search: '"Yan HANG"'
Published in:
Zhongguo shuxue zazhi, Vol 37, Iss 1, Pp 58-62 (2024)
Objective To analyze the influence of plasma donation on human total protein level and the impact of different blood collection tubes on total protein level detection. Methods A total of 1 373 plasma donors from 11 apheresis plasma stations in 6 prov…
External link:
https://doaj.org/article/708e60c6c6f147d59b166497210657a3
Published in:
IEEE Access, Vol 12, Pp 25146-25163 (2024)
The identification of soybean disease images in natural scenes has been a challenging task due to their complex backgrounds and diverse spot patterns. Traditional single convolutional neural network (CNN) approaches for soybean disease image recognition often c…
External link:
https://doaj.org/article/dbee216a182e4f1486d422e5acf090a8
Published in:
E3S Web of Conferences, Vol 528, p 02014 (2024)
ISO 26262 provides testing requirements for functional safety development and testing to mitigate unacceptable risks arising from system functional failures. Fault injection plays a vital role in assessing system robustness and validating the efficac…
External link:
https://doaj.org/article/9cff3a0f518e4e15bb800b6144839f53
Author:
Chen, Zhi, Chen, Qiguang, Qin, Libo, Guo, Qipeng, Lv, Haijun, Zou, Yicheng, Che, Wanxiang, Yan, Hang, Chen, Kai, Lin, Dahua
Recent advancements in large language models (LLMs) with extended context windows have significantly improved tasks such as information extraction, question answering, and complex planning scenarios. In order to achieve success in long context tasks, …
External link:
http://arxiv.org/abs/2409.01893
Author:
Duan, Jiangfei, Zhang, Shuo, Wang, Zerui, Jiang, Lijuan, Qu, Wenwen, Hu, Qinghao, Wang, Guoteng, Weng, Qizhen, Yan, Hang, Zhang, Xingcheng, Qiu, Xipeng, Lin, Dahua, Wen, Yonggang, Jin, Xin, Zhang, Tianwei, Sun, Peng
Large Language Models (LLMs) like GPT and LLaMA are revolutionizing the AI industry with their sophisticated capabilities. Training these models requires vast GPU clusters and significant computing time, posing major challenges in terms of scalabilit…
External link:
http://arxiv.org/abs/2407.20018
Author:
Liu, Xiaoran, Guo, Qipeng, Song, Yuerong, Liu, Zhigeng, Lv, Kai, Yan, Hang, Li, Linlin, Liu, Qun, Qiu, Xipeng
The maximum supported context length is a critical bottleneck limiting the practical application of the Large Language Model (LLM). Although existing length extrapolation methods can extend the context of LLMs to millions of tokens, these methods all…
External link:
http://arxiv.org/abs/2407.15176
Author:
Shao, Yunfan, Li, Linyang, Ma, Yichuan, Li, Peiji, Song, Demin, Cheng, Qinyuan, Li, Shimin, Li, Xiaonan, Wang, Pengyu, Guo, Qipeng, Yan, Hang, Qiu, Xipeng, Huang, Xuanjing, Lin, Dahua
Complex reasoning is an impressive ability shown by large language models (LLMs). Most LLMs are skilled in deductive reasoning, such as chain-of-thought prompting or iterative tool-using to solve challenging tasks step-by-step. In this paper, we hope…
External link:
http://arxiv.org/abs/2407.12504
Author:
Zhang, Pan, Dong, Xiaoyi, Zang, Yuhang, Cao, Yuhang, Qian, Rui, Chen, Lin, Guo, Qipeng, Duan, Haodong, Wang, Bin, Ouyang, Linke, Zhang, Songyang, Zhang, Wenwei, Li, Yining, Gao, Yang, Sun, Peng, Zhang, Xinyue, Li, Wei, Li, Jingwen, Wang, Wenhai, Yan, Hang, He, Conghui, Zhang, Xingcheng, Chen, Kai, Dai, Jifeng, Qiao, Yu, Lin, Dahua, Wang, Jiaqi
We present InternLM-XComposer-2.5 (IXC-2.5), a versatile large-vision language model that supports long-contextual input and output. IXC-2.5 excels in various text-image comprehension and composition applications, achieving GPT-4V level capabilities…
External link:
http://arxiv.org/abs/2407.03320
Author:
Cheng, Qinyuan, Li, Xiaonan, Li, Shimin, Zhu, Qin, Yin, Zhangyue, Shao, Yunfan, Li, Linyang, Sun, Tianxiang, Yan, Hang, Qiu, Xipeng
In Retrieval-Augmented Generation (RAG), retrieval is not always helpful, and applying it to every instruction is sub-optimal. Therefore, determining whether to retrieve is crucial for RAG, which is usually referred to as Active Retrieval. However, ex…
External link:
http://arxiv.org/abs/2406.12534
Author:
Song, Zifan, Wang, Yudong, Zhang, Wenwei, Liu, Kuikun, Lyu, Chengqi, Song, Demin, Guo, Qipeng, Yan, Hang, Lin, Dahua, Chen, Kai, Zhao, Cairong
Open-source Large Language Models (LLMs) and their specialized variants, particularly Code LLMs, have recently delivered impressive performance. However, previous Code LLMs are typically fine-tuned on single-source data with limited quality and diver…
External link:
http://arxiv.org/abs/2405.19265