Showing 1 - 10 of 943 for search: '"HUANG Wenhao"'
Published in:
EPJ Web of Conferences, Vol 295, p 03025 (2024)
The Super Tau Charm Facility (STCF) proposed in China is a new-generation electron–positron collider with center-of-mass energies covering 2–7 GeV and a peak luminosity of 0.5 × 10³⁵ cm⁻² s⁻¹. The offline software of STCF (OSCAR) is developed to s…
External link:
https://doaj.org/article/7b8b136161d648d5be29af7a97e08589
Published in:
康复学报, Vol 32, pp. 111-116 (2022)
Objective: To observe the effect of an upper limb rehabilitation robot combined with upper limb rehabilitation training on the motor function of the upper extremity of hemiplegic patients in the recovery stage of stroke. Methods: A total of 40 hemiplegic patients in r…
External link:
https://doaj.org/article/aafd6c2bd262472aa8182ed108bc2da8
Author:
Li, Ziming, Zang, Qianbo, Ma, David, Guo, Jiawei, Zheng, Tuney, Liu, Minghao, Niu, Xinyao, Wang, Yue, Yang, Jian, Liu, Jiaheng, Zhong, Wanjun, Zhou, Wangchunshu, Huang, Wenhao, Zhang, Ge
Data science tasks involving tabular data present complex challenges that require sophisticated problem-solving approaches. We propose AutoKaggle, a powerful and user-centric framework that assists data scientists in completing daily data pipelines t…
External link:
http://arxiv.org/abs/2410.20424
Author:
Zhang, Chenhao, Feng, Xi, Bai, Yuelin, Du, Xinrun, Hou, Jinchang, Deng, Kaixin, Han, Guangzeng, Li, Qinrui, Wang, Bingli, Liu, Jiaheng, Qu, Xingwei, Zhang, Yifei, Zhao, Qixuan, Liang, Yiming, Liu, Ziqiang, Fang, Feiteng, Yang, Min, Huang, Wenhao, Lin, Chenghua, Zhang, Ge, Ni, Shiwen
As the capabilities of Multimodal Large Language Models (MLLMs) continue to improve, the need for higher-order capability evaluation of MLLMs is increasing. However, there is a lack of work evaluating MLLMs for higher-order perception and understanding…
External link:
http://arxiv.org/abs/2410.13854
Author:
Wang, Zekun Moore, Wang, Shawn, Zhu, Kang, Liu, Jiaheng, Xu, Ke, Fu, Jie, Zhou, Wangchunshu, Huang, Wenhao
Alignment of large language models (LLMs) involves training models on preference-contrastive output pairs to adjust their responses according to human preferences. To obtain such contrastive pairs, traditional methods like RLHF and RLAIF rely on limi…
External link:
http://arxiv.org/abs/2410.13785
Author:
Wu, Siwei, Peng, Zhongyuan, Du, Xinrun, Zheng, Tuney, Liu, Minghao, Wu, Jialong, Ma, Jiachen, Li, Yizhi, Yang, Jian, Zhou, Wangchunshu, Lin, Qunshu, Zhao, Junbo, Zhang, Zhaoxiang, Huang, Wenhao, Zhang, Ge, Lin, Chenghua, Liu, J. H.
Enabling Large Language Models (LLMs) to handle a wider range of complex tasks (e.g., coding, math) has drawn great attention from many researchers. As LLMs continue to evolve, merely increasing the number of model parameters yields diminishing performance…
External link:
http://arxiv.org/abs/2410.13639
As multimodal large language models (MLLMs) continue to demonstrate increasingly competitive performance across a broad spectrum of tasks, more intricate and comprehensive benchmarks have been developed to assess these cutting-edge models. These benchmarks…
External link:
http://arxiv.org/abs/2410.06555
Large Language Models (LLMs) demonstrate impressive capabilities across various domains, including role-playing, creative writing, mathematical reasoning, and coding. Despite these advancements, LLMs still encounter challenges with length control, fr…
External link:
http://arxiv.org/abs/2410.07035
Author:
Ma, Kaijing, Du, Xinrun, Wang, Yunran, Zhang, Haoran, Wen, Zhoufutu, Qu, Xingwei, Yang, Jian, Liu, Jiaheng, Liu, Minghao, Yue, Xiang, Huang, Wenhao, Zhang, Ge
In this paper, we introduce Knowledge-Orthogonal Reasoning (KOR), which minimizes the impact of domain-specific knowledge for a more accurate evaluation of models' reasoning abilities in out-of-distribution scenarios. Based on this concept, we propose…
External link:
http://arxiv.org/abs/2410.06526
Author:
Wang, Zekun, Zhu, King, Xu, Chunpu, Zhou, Wangchunshu, Liu, Jiaheng, Zhang, Yibo, Wang, Jiashuo, Shi, Ning, Li, Siyu, Li, Yizhi, Que, Haoran, Zhang, Zhaoxiang, Zhang, Yuanxing, Zhang, Ge, Xu, Ke, Fu, Jie, Huang, Wenhao
In this paper, we introduce MIO, a novel foundation model built on multimodal tokens, capable of understanding and generating speech, text, images, and videos in an end-to-end, autoregressive manner. While the emergence of large language models (LLMs)…
External link:
http://arxiv.org/abs/2409.17692