Showing 1 - 10 of 2,480
for search: '"Sun Xing"'
Published in:
Shipin yu jixie, Vol 39, Iss 3, Pp 78-84,146 (2023)
Objective: To improve the efficiency of separating the shell and meat of the constricted clam. Methods: Taking the Lianyungang double-headed razor clam as the test object, taking frequency, amplitude, and sieve surface inclination as the test factors, and taking the…
External link:
https://doaj.org/article/1d0515b5617143d9a9f3c3d89f4076df
Author:
Yin, Shukang, Fu, Chaoyou, Zhao, Sirui, Shen, Yunhang, Ge, Chunjiang, Yang, Yan, Long, Zuwei, Dai, Yuhan, Xu, Tong, Sun, Xing, He, Ran, Shan, Caifeng, Chen, Enhong
The success of Multimodal Large Language Models (MLLMs) in the image domain has garnered wide attention from the research community. Drawing on previous successful experiences, researchers have recently explored extending this success to the video…
External link:
http://arxiv.org/abs/2411.19951
Author:
Fu, Chaoyou, Zhang, Yi-Fan, Yin, Shukang, Li, Bo, Fang, Xinyu, Zhao, Sirui, Duan, Haodong, Sun, Xing, Liu, Ziwei, Wang, Liang, Shan, Caifeng, He, Ran
As a prominent direction of Artificial General Intelligence (AGI), Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia. Building upon pre-trained LLMs, this family of models further develops…
External link:
http://arxiv.org/abs/2411.15296
Rapidly developing large language models (LLMs) have brought tremendous intelligent applications. In particular, GPT-4o's excellent duplex speech interaction ability has brought an impressive experience to users. Researchers have recently proposed…
External link:
http://arxiv.org/abs/2411.00774
Published in:
Liang you shipin ke-ji, Vol 29, Iss 1, Pp 122-130 (2021)
To explore the anti-hepatocellular mechanism of five kinds of isoquinoline alkaloids, the TCMSP, STRING, and Venny databases and the WebGestalt online analysis software were used to obtain the targets and perform protein-protein interaction (PPI) network…
External link:
https://doaj.org/article/4ae585a59c2b40b5a9cbe2339bde9292
Author:
Liu, Wenhao, An, Siyu, Lu, Junru, Wu, Muling, Li, Tianlong, Wang, Xiaohua, Zheng, Xiaoqing, Yin, Di, Sun, Xing, Huang, Xuanjing
Role-Playing Agents (RPAs) have shown remarkable performance in various applications, yet they often struggle to recognize and appropriately respond to hard queries that conflict with their role-play knowledge. To investigate RPAs' performance when…
External link:
http://arxiv.org/abs/2409.16913
Author:
Zhang, Qian-Wen, Wang, Haochen, Li, Fang, An, Siyu, Qiao, Lingfeng, Gao, Liangcai, Yin, Di, Sun, Xing
Online education platforms have significantly transformed the dissemination of educational resources by providing a dynamic and digital infrastructure. With the further enhancement of this transformation, the advent of Large Language Models (LLMs)…
External link:
http://arxiv.org/abs/2409.16202
Author:
Yang, Yuncheng, Qin, Yulei, Wu, Tong, Xu, Zihan, Li, Gang, Guo, Pengcheng, Shao, Hang, Shi, Yuchen, Li, Ke, Sun, Xing, Yang, Jie, Gu, Yun
The cultivation of expertise for large language models (LLMs) to solve tasks in specific areas often requires special-purpose tuning with calibrated behaviors on the expected stable outputs. To avoid the huge cost of manual preparation of…
External link:
http://arxiv.org/abs/2408.15915
Author:
Fu, Chaoyou, Lin, Haojia, Long, Zuwei, Shen, Yunhang, Zhao, Meng, Zhang, Yifan, Dong, Shaoqi, Wang, Xiong, Yin, Di, Ma, Long, Zheng, Xiawu, He, Ran, Ji, Rongrong, Wu, Yunsheng, Shan, Caifeng, Sun, Xing
The remarkable multimodal capabilities and interactive experience of GPT-4o underscore their necessity in practical applications, yet open-source models rarely excel in both areas. In this paper, we introduce VITA, the first-ever open-source…
External link:
http://arxiv.org/abs/2408.05211
Author:
Qin, Yulei, Yang, Yuncheng, Guo, Pengcheng, Li, Gang, Shao, Hang, Shi, Yuchen, Xu, Zihan, Gu, Yun, Li, Ke, Sun, Xing
Instruction tuning plays a critical role in aligning large language models (LLMs) with human preference. Despite the vast number of open instruction datasets, naively training an LLM on all existing instructions may not be optimal and practical. To…
External link:
http://arxiv.org/abs/2408.02085