Showing 1 - 10 of 6,113 for search: '"Task transfer"'
Author:
Lee, Chanhui, Jeong, Dae-Woong, Ko, Sung Moon, Lee, Sumin, Kim, Hyunseung, Yim, Soorin, Han, Sehui, Kim, Sungwoong, Lim, Sungbin
Published in:
ICML2024-AI4Science Poster
Molecules have a number of distinct properties whose importance and application vary. Often, in reality, labels for some properties are hard to achieve despite their practical importance. A common solution to such data scarcity is to use models of go…
External link:
http://arxiv.org/abs/2410.00432
Several recent works have proposed model-based optimization methods to improve the productivity of using high-level synthesis (HLS) to design domain-specific architectures. They would replace the time-consuming performance estimation…
External link:
http://arxiv.org/abs/2408.13270
Author:
Li, Bo, Zhang, Yuanhan, Guo, Dong, Zhang, Renrui, Li, Feng, Zhang, Hao, Zhang, Kaichen, Zhang, Peiyuan, Li, Yanwei, Liu, Ziwei, Li, Chunyuan
We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series. Our experimental results demonstrate that LLaVA-OneVision…
External link:
http://arxiv.org/abs/2408.03326
Exploring the Effectiveness and Consistency of Task Selection in Intermediate-Task Transfer Learning
Identifying beneficial tasks to transfer from is a critical step toward successful intermediate-task transfer learning. In this work, we experiment with 130 source-target task combinations and demonstrate that the transfer performance exhibits severe…
External link:
http://arxiv.org/abs/2407.16245
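The selection protocol this abstract alludes to can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's code: the task names, seed list, and transfer_score() are all assumptions, and the scoring function is a random stand-in for an actual fine-tune-then-evaluate run.

```python
# Hypothetical sketch of exhaustive intermediate-task selection: sweep
# source-target pairs over several seeds and inspect the spread of scores.
import random
from statistics import mean, stdev

SOURCES = ["mnli", "squad", "cosmosqa", "hellaswag"]  # assumed source tasks
TARGETS = ["rte", "boolq"]                            # assumed target tasks
SEEDS = [0, 1, 2]

def transfer_score(source: str, target: str, seed: int) -> float:
    """Stand-in for: fine-tune on `source`, then `target`, return dev metric."""
    rng = random.Random(hash((source, target, seed)))
    return rng.uniform(0.5, 0.9)  # replace with a real training/eval run

for target in TARGETS:
    results = {
        src: [transfer_score(src, target, s) for s in SEEDS] for src in SOURCES
    }
    # Rank candidate source tasks by mean target score; the per-pair standard
    # deviation across seeds is the variance the abstract refers to.
    ranked = sorted(results, key=lambda s: mean(results[s]), reverse=True)
    for src in ranked:
        scores = results[src]
        print(f"{target} <- {src}: mean={mean(scores):.3f} sd={stdev(scores):.3f}")
```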
Despite the widespread adoption of multi-task training in deep learning, little is understood about how multi-task learning (MTL) affects generalization. Prior work has conjectured that the negative effects of MTL are due to optimization challenges that…
External link:
http://arxiv.org/abs/2408.14677
Open-ended worlds are those in which there are no pre-specified goals or environmental reward signal. As a consequence, an agent must know how to perform a multitude of tasks. However, when a new task is presented to an agent, we expect it to be able…
External link:
http://arxiv.org/abs/2405.06059
Author:
Bo, Chunxue, Liu, Shuzhi (shuzhiliu@qlnu.edu.cn), Liu, Yuyue, Guo, Zhishuo, Wang, Jinghan, Xu, Jinghai
Published in:
Sensors (1424-8220), Jul 2024, Vol. 24, Issue 14, p. 4741, 23 pp.
Academic article
Prompt tuning, in which prompts are optimized to adapt large-scale pre-trained language models to downstream tasks instead of fine-tuning the full model parameters, has been shown to be particularly effective when the prompts are trained in a multi-task…
External link:
http://arxiv.org/abs/2402.08594
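As a rough illustration of the mechanism this abstract describes (a minimal sketch, not the paper's method): a small randomly initialized transformer stands in for the frozen pre-trained model, and only the prepended soft prompt embeddings receive gradient updates. All module names, sizes, and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d_model, n_prompt = 100, 32, 8

# Frozen stand-in for a pre-trained LM: embedding + encoder + output head.
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, vocab)
for module in (embed, encoder, head):
    for p in module.parameters():
        p.requires_grad_(False)  # "pre-trained" weights stay frozen

# The only trainable parameters: n_prompt soft prompt vectors.
prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
opt = torch.optim.Adam([prompt], lr=1e-2)

tokens = torch.randint(0, vocab, (4, 16))  # dummy token batch
labels = torch.randint(0, vocab, (4,))     # dummy labels

for step in range(50):
    x = embed(tokens)                                  # (B, T, D)
    x = torch.cat([prompt.expand(x.size(0), -1, -1), x], dim=1)
    logits = head(encoder(x))[:, 0]                    # read out first position
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point worth noting is that the optimizer is handed only the prompt parameter, so adapting to a new downstream task stores n_prompt × d_model floats rather than a full copy of the model.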
Background: For individualized support of patients during rehabilitation, learning of individual machine learning models from the human electroencephalogram (EEG) is required. Our approach allows labeled training data to be recorded without the need…
External link:
http://arxiv.org/abs/2402.17790