Showing 1 - 10 of 10,200 results for search: '"Chen Bei"'
Author:
AI, 01., Wake, Alan, Wang, Albert, Chen, Bei, Lv, C. X., Li, Chao, Huang, Chengen, Cai, Chenglin, Zheng, Chujie, Cooper, Daniel, Dai, Ethan, Zhou, Fan, Hu, Feng, Ji, Heng, Qiu, Howard, Zhu, Jiangcheng, Tian, Jun, Su, Katherine, Zhang, Lihuan, Li, Liying, Song, Ming, Li, Mou, Liu, Peng, Hu, Qicheng, Wang, Shawn, Zhou, Shijun, Li, Shiyong, Zhu, Tianhang, Xie, Wen, He, Xiang, Chen, Xiaobo, Hu, Xiaohui, Ren, Xiaoyi, Niu, Xinyao, Li, Yanpeng, Zhao, Yongke, Luo, Yongzhen, Xu, Yuchi, Sha, Yuxuan, Yan, Zhaodong, Liu, Zhiyuan, Zhang, Zirui
This technical report presents Yi-Lightning, our latest flagship large language model (LLM). It achieves exceptional performance, ranking 6th overall on Chatbot Arena, with particularly strong results (2nd to 4th place) in specialized categories incl…
External link:
http://arxiv.org/abs/2412.01253
Author:
Zhang, Fengji, Wu, Linquan, Bai, Huiyu, Lin, Guancheng, Li, Xiao, Yu, Xiao, Wang, Yue, Chen, Bei, Keung, Jacky
Coding tasks have been valuable for evaluating Large Language Models (LLMs), as they demand the comprehension of high-level instructions, complex reasoning, and the implementation of functional programs -- core capabilities for advancing Artificial G…
External link:
http://arxiv.org/abs/2410.12381
Author:
Li, Dongxu, Liu, Yudong, Wu, Haoning, Wang, Yue, Shen, Zhiqi, Qu, Bowen, Niu, Xinyao, Wang, Guoyin, Chen, Bei, Li, Junnan
Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles…
External link:
http://arxiv.org/abs/2410.05993
Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods. Typically, a leaderboard is defined by a task, dataset, and evaluation metric (TDM) triple, allowing objective performance assessme…
External link:
http://arxiv.org/abs/2409.12656
Addressing critical challenges in Lamb wave resonators, this paper presents the first validation of resonators incorporating sub-wavelength through-holes. Using the A3 mode resonator based on a LiNbO3 single-crystal thin film and operating in the K b…
External link:
http://arxiv.org/abs/2409.00783
Large multimodal models (LMMs) are processing increasingly longer and richer inputs. Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark…
External link:
http://arxiv.org/abs/2407.15754
Author:
Wang, Junjie, Zhang, Yin, Ji, Yatai, Zhang, Yuxiang, Jiang, Chunyang, Wang, Yubo, Zhu, Kang, Wang, Zekun, Wang, Tiezhen, Huang, Wenhao, Fu, Jie, Chen, Bei, Lin, Qunshu, Liu, Minghao, Zhang, Ge, Chen, Wenhu
Recent advancements in Large Multimodal Models (LMMs) have leveraged extensive multimodal datasets to enhance capabilities in complex knowledge-driven tasks. However, persistent challenges in perceptual and reasoning errors limit their efficacy, part…
External link:
http://arxiv.org/abs/2406.13923
Author:
Qin, Zhen-Hui, Wu, Shu-Mao, Hao, Chen-Bei, Chen, Hua-Yang, Liang, Sheng-Nan, Yu, Si-Yuan, Chen, Yan-Feng
This work proposes a double-layer thin-film lithium niobate (LiNbO3) longitudinally excited shear wave resonator with a theoretical electromechanical coupling coefficient exceeding 60%, RaR close to 28%, and no spurious modes. This ultra-large electr…
External link:
http://arxiv.org/abs/2405.17168
Author:
AI, 01., Young, Alex, Chen, Bei, Li, Chao, Huang, Chengen, Zhang, Ge, Zhang, Guanwei, Li, Heng, Zhu, Jiangcheng, Chen, Jianqun, Chang, Jing, Yu, Kaidong, Liu, Peng, Liu, Qiang, Yue, Shawn, Yang, Senbin, Yang, Shiming, Yu, Tao, Xie, Wen, Huang, Wenhao, Hu, Xiaohui, Ren, Xiaoyi, Niu, Xinyao, Nie, Pengcheng, Xu, Yuchi, Liu, Yudong, Wang, Yue, Cai, Yuxuan, Gu, Zhenyu, Liu, Zhiyuan, Dai, Zonghong
We introduce the Yi model family, a series of language and multimodal models that demonstrate strong multi-dimensional capabilities. The Yi model family is based on 6B and 34B pretrained language models, then we extend them to chat models, 200K long…
External link:
http://arxiv.org/abs/2403.04652
Author:
Zhang, Ge, Du, Xinrun, Chen, Bei, Liang, Yiming, Luo, Tongxu, Zheng, Tianyu, Zhu, Kang, Cheng, Yuyang, Xu, Chunpu, Guo, Shuyue, Zhang, Haoran, Qu, Xingwei, Wang, Junjie, Yuan, Ruibin, Li, Yizhi, Wang, Zekun, Liu, Yudong, Tsai, Yu-Hsuan, Zhang, Fengji, Lin, Chenghua, Huang, Wenhao, Fu, Jie
As the capabilities of large multimodal models (LMMs) continue to advance, evaluating the performance of LMMs emerges as an increasing need. Additionally, there is an even larger gap in evaluating the advanced knowledge and reasoning abilities of LMM…
External link:
http://arxiv.org/abs/2401.11944