Showing 1 - 10 of 7,196 for the search '"Wang, JiaQi"'
Author:
Wang, Jiaqi, Zhao, Huan, Yang, Zhenyuan, Shu, Peng, Chen, Junhao, Sun, Haobo, Liang, Ruixi, Li, Shixin, Shi, Pengcheng, Ma, Longjun, Liu, Zongjia, Liu, Zhengliang, Zhong, Tianyang, Zhang, Yutong, Ma, Chong, Zhang, Xin, Zhang, Tuo, Ding, Tianli, Ren, Yudan, Liu, Tianming, Jiang, Xi, Zhang, Shu
In this paper, we review legal testing methods based on Large Language Models (LLMs), using the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions. We compare current state-of-the-art LLMs, including…
External link:
http://arxiv.org/abs/2411.10137
Author:
Wang, Jiaqi, Ma, Rong
Let $p$ be an odd prime. Jianqiang Zhao established the curious congruence $$ \sum_{\substack{i+j+k=p \\ i,j,k > 0}} \frac{1}{ijk} \equiv -2B_{p-3} \pmod{p}, $$ where $B_{n}$ denotes the $n$-th Bernoulli number. In this paper, we will generalize this problem…
External link:
http://arxiv.org/abs/2411.03148
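To make the congruence concrete, here is a small numeric check (a sketch for illustration, not taken from the paper): it evaluates the left-hand sum with modular inverses and compares it against $-2B_{p-3} \bmod p$, with Bernoulli numbers computed from the standard recurrence.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0..B_n via the recurrence sum_{k=0}^{m-1} C(m+1,k) B_k = -(m+1) B_m."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

def check(p):
    """Verify sum_{i+j+k=p, i,j,k>0} 1/(ijk) == -2*B_{p-3} (mod p)."""
    lhs = sum(pow(i * j * (p - i - j), -1, p)
              for i in range(1, p - 1)
              for j in range(1, p - i)) % p
    b = bernoulli(p - 3)[p - 3]
    # reduce the rational -2*B_{p-3} modulo p
    rhs = (-2 * b.numerator * pow(b.denominator, -1, p)) % p
    return lhs == rhs

print([check(p) for p in (5, 7, 11, 13)])  # expect all True
```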
Author:
Liu, Xiao, Qin, Bo, Liang, Dongzhu, Dong, Guang, Lai, Hanyu, Zhang, Hanchen, Zhao, Hanlin, Iong, Iat Long, Sun, Jiadai, Wang, Jiaqi, Gao, Junjie, Shan, Junjun, Liu, Kangning, Zhang, Shudan, Yao, Shuntian, Cheng, Siyi, Yao, Wentao, Zhao, Wenyi, Liu, Xinghan, Liu, Xinyi, Chen, Xinying, Yang, Xinyue, Yang, Yang, Xu, Yifan, Yang, Yu, Wang, Yujia, Xu, Yulin, Qi, Zehan, Dong, Yuxiao, Tang, Jie
We present AutoGLM, a new series in the ChatGLM family, designed to serve as foundation agents for autonomous control of digital devices through Graphical User Interfaces (GUIs). While foundation models excel at acquiring human knowledge, they often…
External link:
http://arxiv.org/abs/2411.00820
Author:
Liu, Ziyu, Zang, Yuhang, Dong, Xiaoyi, Zhang, Pan, Cao, Yuhang, Duan, Haodong, He, Conghui, Xiong, Yuanjun, Lin, Dahua, Wang, Jiaqi
Visual preference alignment involves training Large Vision-Language Models (LVLMs) to predict human preferences between visual inputs. This is typically achieved by using labeled datasets of chosen/rejected pairs and employing optimization algorithms…
External link:
http://arxiv.org/abs/2410.17637
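The snippet mentions optimizing over chosen/rejected pairs; a common objective in this line of work is a DPO-style loss. Below is a minimal sketch of that loss (the use of DPO here, and all names, are assumptions for illustration, not confirmed by the snippet):

```python
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss over chosen/rejected pairs.

    All inputs are summed log-probabilities of the responses under the
    policy being trained and under a frozen reference model.
    """
    # implicit rewards of the policy relative to the reference model
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    # maximize the log-sigmoid of the chosen-minus-rejected margin
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```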
Author:
Xing, Long, Huang, Qidong, Dong, Xiaoyi, Lu, Jiajie, Zhang, Pan, Zang, Yuhang, Cao, Yuhang, He, Conghui, Wang, Jiaqi, Wu, Feng, Lin, Dahua
In large vision-language models (LVLMs), images serve as inputs that carry a wealth of information. As the idiom "A picture is worth a thousand words" implies, representing a single image in current LVLMs can require hundreds or even thousands of tokens…
External link:
http://arxiv.org/abs/2410.17247
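The snippet points to the cost of spending hundreds of tokens per image; one generic way to reduce it is to rank visual tokens by an importance score and keep only the top fraction. The sketch below illustrates that general idea, not this paper's specific method (the scoring rule is an assumption):

```python
import torch

def prune_visual_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the highest-scoring fraction of visual tokens.

    tokens: (N, D) visual token embeddings
    scores: (N,) importance scores, e.g. attention received from the
            text query (this scoring choice is an assumption)
    """
    k = max(1, int(tokens.shape[0] * keep_ratio))
    idx = scores.topk(k).indices.sort().values  # keep spatial order
    return tokens[idx]
```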
Author:
Ding, Shuangrui, Qian, Rui, Dong, Xiaoyi, Zhang, Pan, Zang, Yuhang, Cao, Yuhang, Guo, Yuwei, Lin, Dahua, Wang, Jiaqi
The Segment Anything Model 2 (SAM 2) has emerged as a powerful foundation model for object segmentation in both images and videos, paving the way for various downstream video applications. The crucial design of SAM 2 for video segmentation is its memory…
External link:
http://arxiv.org/abs/2410.16268
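SAM 2's memory design conditions segmentation of the current frame on features from past frames. A minimal FIFO memory bank captures the general pattern; this is a sketch of the concept, not SAM 2's actual module:

```python
from collections import deque
import torch

class FrameMemoryBank:
    """FIFO bank of past-frame features for memory-conditioned segmentation."""

    def __init__(self, capacity=7):
        self.bank = deque(maxlen=capacity)  # oldest entries drop automatically

    def write(self, frame_features: torch.Tensor):
        self.bank.append(frame_features)  # (HW, D) features of one frame

    def read(self) -> torch.Tensor:
        # concatenate stored frames so the current frame can cross-attend to them
        return torch.cat(list(self.bank), dim=0) if self.bank else torch.empty(0)
```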
Recent advances in decoding language from brain signals (EEG and MEG) have been significantly driven by pre-trained language models, leading to remarkable progress on publicly available non-invasive EEG/MEG datasets. However, previous works predominantly…
External link:
http://arxiv.org/abs/2410.14971
The electromagnetic and gravitational form factors of the nucleon are studied simultaneously using a covariant quark-diquark approach, and the pion cloud effect on the form factors is explicitly discussed. In this study, the electromagnetic form factors…
External link:
http://arxiv.org/abs/2410.14953
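For reference, the electromagnetic (Sachs) form factors discussed here are conventionally written in terms of the Dirac and Pauli form factors $F_1$ and $F_2$ through the standard textbook relations (not specific to this paper): $$ G_E(Q^2) = F_1(Q^2) - \frac{Q^2}{4M^2} F_2(Q^2), \qquad G_M(Q^2) = F_1(Q^2) + F_2(Q^2), $$ where $M$ is the nucleon mass and $Q^2$ the squared momentum transfer.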
The dynamic vision sensor (DVS) is a novel neuromorphic imaging device that generates asynchronous events. Despite its high temporal resolution and high dynamic range, the DVS faces a background noise problem. A spatiotemporal filter is an effective…
External link:
http://arxiv.org/abs/2410.12423
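A classic spatiotemporal denoiser of this kind keeps an event only if one of its eight neighbouring pixels fired within a short time window, discarding isolated (uncorrelated) noise events. Below is a minimal sketch of such a background-activity filter; the parameter values are illustrative, not the paper's:

```python
import numpy as np

def background_activity_filter(events, width, height, dt_us=5000):
    """Spatiotemporal background-activity filter for DVS events.

    events: iterable of (x, y, t_us, polarity) tuples, time-ordered.
    An event is kept if any 8-neighbour pixel saw an event within dt_us.
    """
    last = np.full((height + 2, width + 2), -np.inf)  # padded timestamp map
    kept = []
    for x, y, t, p in events:
        neigh = last[y:y + 3, x:x + 3].copy()  # 3x3 patch in padded coords
        neigh[1, 1] = -np.inf                  # ignore the pixel's own history
        if (t - neigh).min() <= dt_us:         # a neighbour fired recently
            kept.append((x, y, t, p))
        last[y + 1, x + 1] = t                 # record this event's timestamp
    return kept
```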
Author:
Huang, Qidong, Dong, Xiaoyi, Zhang, Pan, Zang, Yuhang, Cao, Yuhang, Wang, Jiaqi, Lin, Dahua, Zhang, Weiming, Yu, Nenghai
We present the Modality Integration Rate (MIR), an effective, robust, and generalized metric to indicate the multi-modal pre-training quality of Large Vision Language Models (LVLMs). Large-scale pre-training plays a critical role in building capable…
External link:
http://arxiv.org/abs/2410.07167
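The snippet does not show how MIR is computed; purely to illustrate the kind of quantity such a metric might measure, here is a hypothetical layer-wise gap between vision-token and text-token activations. This toy definition is an assumption, not the authors' formula:

```python
import torch
import torch.nn.functional as F

def layerwise_modality_gap(vision_acts, text_acts):
    """Toy proxy for cross-modal alignment: per-layer distance between
    the centroids of vision-token and text-token activations.

    vision_acts, text_acts: lists (one entry per LLM layer) of tensors
    shaped (num_tokens, hidden_dim). A smaller gap means tighter
    integration under this toy definition (not the MIR formula).
    """
    gaps = []
    for v, t in zip(vision_acts, text_acts):
        v = F.normalize(v.mean(dim=0), dim=0)  # unit-norm vision centroid
        t = F.normalize(t.mean(dim=0), dim=0)  # unit-norm text centroid
        gaps.append((v - t).norm().item())
    return sum(gaps) / len(gaps)
```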