Showing 1 - 10 of 1,373
for search: '"Liu Yudong"'
Published in:
Xibei Gongye Daxue Xuebao, Vol 41, Iss 3, Pp 574-578 (2023)
The traditional MUSIC algorithm needs to know the number of target signal sources in advance, and further determine the dimensions of signal subspace and noise subspace, and finally search for spectral peaks. In engineering, it is impossible to predi…
External link:
https://doaj.org/article/dd85ca526c5d4ee6b4f835ef9c4d787f
Author:
Lin, Yueqian, Fu, Yuzhe, Zhang, Jingyang, Liu, Yudong, Zhang, Jianyi, Sun, Jingwei, Li, Hai "Helen", Chen, Yiran
We introduce Speech Information Retrieval (SIR), a new long-context task for Speech Large Language Models (Speech LLMs), and present SPIRAL, a 1,012-sample benchmark testing models' ability to extract critical details from approximately 90-second spo…
External link:
http://arxiv.org/abs/2412.12009
Let $\overline X$ be a smooth rigid variety over $C=\mathbb C_p$ admitting a lift $X$ over $B_{dR}^+$. In this paper, we use the stacky language to prove a nilpotent $p$-adic Riemann-Hilbert correspondence. After introducing the moduli stack of …
External link:
http://arxiv.org/abs/2411.10165
Author:
Zhang, Tunhou, Cheng, Dehua, He, Yuchen, Chen, Zhengxing, Dai, Xiaoliang, Xiong, Liang, Liu, Yudong, Cheng, Feng, Cao, Yufan, Yan, Feng, Li, Hai, Chen, Yiran, Wen, Wei
Published in:
ACM Transactions on Recommender Systems (TORS) 2024
The increasing popularity of deep learning models has created new opportunities for developing AI-based recommender systems. Designing recommender systems using deep neural networks requires careful architecture design, and further optimization deman…
External link:
http://arxiv.org/abs/2411.07569
Author:
Li, Dongxu, Liu, Yudong, Wu, Haoning, Wang, Yue, Shen, Zhiqi, Qu, Bowen, Niu, Xinyao, Zhou, Fan, Huang, Chengen, Li, Yanpeng, Zhu, Chongyan, Ren, Xiaoyi, Li, Chao, Ye, Yifan, Zhang, Lihuan, Yan, Hanshu, Wang, Guoyin, Chen, Bei, Li, Junnan
Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles…
External link:
http://arxiv.org/abs/2410.05993
Let $C$ be an algebraically closed perfectoid field over $\mathbb{Q}_p$ with the ring of integers $\mathcal{O}_C$ and the infinitesimal thickening $A_{\mathrm{inf}}$. Let $\mathfrak X$ be a semi-stable formal scheme over $\mathcal{O}_C$ with a fixed flat lifting…
External link:
http://arxiv.org/abs/2409.08785
Neural signed distance functions (SDFs) have shown powerful ability in fitting the shape geometry. However, inferring continuous signed distance fields from discrete unoriented point clouds still remains a challenge. The neural network typically fits…
External link:
http://arxiv.org/abs/2407.13342
Fine-tuning large language models (LLMs) requires significant memory, often exceeding the capacity of a single GPU. A common solution to this memory challenge is offloading compute and data from the GPU to the CPU. However, this approach is hampered…
External link:
http://arxiv.org/abs/2406.10181
Author:
01.AI, Young, Alex, Chen, Bei, Li, Chao, Huang, Chengen, Zhang, Ge, Zhang, Guanwei, Li, Heng, Zhu, Jiangcheng, Chen, Jianqun, Chang, Jing, Yu, Kaidong, Liu, Peng, Liu, Qiang, Yue, Shawn, Yang, Senbin, Yang, Shiming, Yu, Tao, Xie, Wen, Huang, Wenhao, Hu, Xiaohui, Ren, Xiaoyi, Niu, Xinyao, Nie, Pengcheng, Xu, Yuchi, Liu, Yudong, Wang, Yue, Cai, Yuxuan, Gu, Zhenyu, Liu, Zhiyuan, Dai, Zonghong
We introduce the Yi model family, a series of language and multimodal models that demonstrate strong multi-dimensional capabilities. The Yi model family is based on 6B and 34B pretrained language models, then we extend them to chat models, 200K long…
External link:
http://arxiv.org/abs/2403.04652
Author:
Zhang, Ge, Du, Xinrun, Chen, Bei, Liang, Yiming, Luo, Tongxu, Zheng, Tianyu, Zhu, Kang, Cheng, Yuyang, Xu, Chunpu, Guo, Shuyue, Zhang, Haoran, Qu, Xingwei, Wang, Junjie, Yuan, Ruibin, Li, Yizhi, Wang, Zekun, Liu, Yudong, Tsai, Yu-Hsuan, Zhang, Fengji, Lin, Chenghua, Huang, Wenhao, Fu, Jie
As the capabilities of large multimodal models (LMMs) continue to advance, evaluating the performance of LMMs emerges as an increasing need. Additionally, there is an even larger gap in evaluating the advanced knowledge and reasoning abilities of LMM…
External link:
http://arxiv.org/abs/2401.11944