Showing 1 - 10 of 1,140 for search: '"Wang, Guoyin"'
In real-world scenarios, label noise is inevitably introduced into the training data, whether annotated manually or automatically, and can degrade the effectiveness of deep CNN models. Popular solutions require data cleaning or designing additional opt…
External link:
http://arxiv.org/abs/2409.03254
Author:
Dai, Dawei, Zhang, Yuanhui, Xu, Long, Yang, Qianlan, Shen, Xiaojing, Xia, Shuyin, Wang, Guoyin
Previous advancements in pathology image understanding primarily involved developing models tailored to specific tasks. Recent studies have demonstrated that large vision-language models can enhance the performance of various downstream tasks i…
External link:
http://arxiv.org/abs/2408.09530
Current evaluations of large language models (LLMs) often overlook non-determinism, typically focusing on a single output per example. This limits our understanding of LLM performance variability in real-world applications. Our study addresses this i…
External link:
http://arxiv.org/abs/2407.10457
Currently, image-text-driven multi-modal deep learning models have demonstrated their outstanding potential in many fields. In practice, tasks centered around facial images have broad application prospects. This paper presents FaceCaption-15M…
External link:
http://arxiv.org/abs/2407.08515
Author:
Yang, Jie, Xiaodiao, Lingyun, Wang, Guoyin, Pedrycz, Witold, Xia, Shuyin, Zhang, Qinghua, Wu, Di
The granular-ball (GB)-based classifier introduced by Xia exhibits adaptability in creating coarse-grained information granules for input, thereby enhancing its generality and flexibility. Nevertheless, current GB-based classifiers rigidly assig…
External link:
http://arxiv.org/abs/2407.11027
Group decision-making (GDM), characterized by complexity and uncertainty, is an essential part of various life scenarios. Most existing research lacks tools to fuse information quickly and interpret decision results for partially formed decisions. Thi…
External link:
http://arxiv.org/abs/2406.18884
Author:
Zhao, Ziyu, Gan, Leilei, Wang, Guoyin, Hu, Yuwei, Shen, Tao, Yang, Hongxia, Kuang, Kun, Wu, Fei
Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs). Its modular and plug-and-play nature allows the integration of various domain-specific LoRAs, enhancing LLM capabilities. Open-source platforms like Hugging…
External link:
http://arxiv.org/abs/2406.16989
Author:
Chai, Ziwei, Wang, Guoyin, Su, Jing, Zhang, Tianjie, Huang, Xuanwen, Wang, Xuwu, Xu, Jingjing, Yuan, Jianbo, Yang, Hongxia, Wu, Fei, Yang, Yang
We present Expert-Token-Routing, a unified generalist framework that facilitates seamless integration of multiple expert LLMs. Our framework represents expert LLMs as special expert tokens within the vocabulary of a meta LLM. The meta LLM can route t…
External link:
http://arxiv.org/abs/2403.16854
This paper presents a novel framework for continual feature selection (CFS) in data preprocessing, particularly in the context of an open and dynamic environment where unknown classes may emerge. CFS encounters two primary challenges: the discovery o…
External link:
http://arxiv.org/abs/2403.10253
Author:
Zhao, Haiteng, Ma, Chang, Wang, Guoyin, Su, Jing, Kong, Lingpeng, Xu, Jingjing, Deng, Zhi-Hong, Yang, Hongxia
Large Language Model (LLM) Agents have recently garnered increasing interest, yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior. In this work, we argue that the capacity to learn new actions fro…
External link:
http://arxiv.org/abs/2402.15809