Showing 1 - 10 of 403 for the search: '"Yuan, Jianbo"'
Author:
Zhang, Yiming, He, Baoyi, Zhang, Shengyu, Fu, Yuhao, Zhou, Qi, Sang, Zhijie, Hong, Zijin, Yang, Kejing, Wang, Wenjun, Yuan, Jianbo, Ning, Guanghan, Li, Linyi, Ji, Chunlin, Wu, Fei, Yang, Hongxia
Recent advancements in building domain-specific large language models (LLMs) have shown remarkable success, especially in tasks requiring reasoning abilities like logical inference over complex relationships and multi-step problem solving. However, c…
External link:
http://arxiv.org/abs/2410.13699
Author:
Wang, Xuwu, Cui, Qiwen, Tao, Yunzhe, Wang, Yiran, Chai, Ziwei, Han, Xiaotian, Liu, Boyi, Yuan, Jianbo, Su, Jing, Wang, Guoyin, Liu, Tingkai, Chen, Liyu, Liu, Tianyi, Sun, Tao, Zhang, Yufeng, Zheng, Sirui, You, Quanzeng, Yang, Yang, Yang, Hongxia
Large language models (LLMs) have become increasingly pivotal across various domains, especially in handling complex data types. This includes structured data processing, as exemplified by ChartQA and ChatGPT-Ada, and multimodal unstructured data pro…
External link:
http://arxiv.org/abs/2410.00773
We present the "Law of Vision Representation" in multimodal large language models (MLLMs). It reveals a strong correlation between the combination of cross-modal alignment, correspondence in vision representation, and MLLM performance. We quantify th…
External link:
http://arxiv.org/abs/2408.16357
Author:
Chai, Ziwei, Wang, Guoyin, Su, Jing, Zhang, Tianjie, Huang, Xuanwen, Wang, Xuwu, Xu, Jingjing, Yuan, Jianbo, Yang, Hongxia, Wu, Fei, Yang, Yang
We present Expert-Token-Routing, a unified generalist framework that facilitates seamless integration of multiple expert LLMs. Our framework represents expert LLMs as special expert tokens within the vocabulary of a meta LLM. The meta LLM can route t…
External link:
http://arxiv.org/abs/2403.16854
Published in:
Journal of Medical Internet Research, Vol 22, Iss 6, p e17280 (2020)
Background: The number of electronic cigarette (e-cigarette) users has been increasing rapidly in recent years, especially among youth and young adults. More e-cigarette products have become available, including e-liquids with various brands and flavor…
External link:
https://doaj.org/article/07562aa2b7394535a7492cdb1ae4eee1
Published in:
Journal of Medical Internet Research, Vol 22, Iss 6, p e17496 (2020)
Background: In recent years, flavored electronic cigarettes (e-cigarettes) have become popular among teenagers and young adults. Discussions about e-cigarettes and e-cigarette use (vaping) experiences are prevalent online, making social media an ideal…
External link:
https://doaj.org/article/78bfd7bda0f0495f92f0cfb6a10976a7
Author:
Zhang, Shenao, Zheng, Sirui, Ke, Shuqi, Liu, Zhihan, Jin, Wanxin, Yuan, Jianbo, Yang, Yingxiang, Yang, Hongxia, Wang, Zhaoran
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback. However, RL algorithms may require extensive trial-and-error interactions to collect usef…
External link:
http://arxiv.org/abs/2402.16181
Author:
Hu, Xueyu, Zhao, Ziyu, Wei, Shuang, Chai, Ziwei, Ma, Qianli, Wang, Guoyin, Wang, Xuwu, Su, Jing, Xu, Jingjing, Zhu, Ming, Cheng, Yao, Yuan, Jianbo, Li, Jiwei, Kuang, Kun, Yang, Yang, Yang, Hongxia, Wu, Fei
In this paper, we introduce InfiAgent-DABench, the first benchmark specifically designed to evaluate LLM-based agents on data analysis tasks. These tasks require agents to solve complex tasks end-to-end by interacting with an execution environment.
External link:
http://arxiv.org/abs/2401.05507
Author:
Chen, Tianqi, Liu, Yongfei, Wang, Zhendong, Yuan, Jianbo, You, Quanzeng, Yang, Hongxia, Zhou, Mingyuan
In light of the remarkable success of in-context learning in large language models, its potential extension to the vision domain, particularly with visual foundation models like Stable Diffusion, has sparked considerable interest. Existing approaches…
External link:
http://arxiv.org/abs/2312.01408
This work introduces self-infilling code generation, a general framework that incorporates infilling operations into auto-regressive decoding. Our approach capitalizes on the observation that recent infilling-capable code language models can self-inf…
External link:
http://arxiv.org/abs/2311.17972