Showing 1 - 10 of 57 824 for the search: '"training strategies"'
Author:
Răgman, Teodora, Stan, Adriana
This paper focuses on adapting the functionalities of the FastPitch model to the Romanian language; extending the set of speakers from one to eighteen; synthesising speech using an anonymous identity; and replicating the identities of new, unseen speakers…
External link:
http://arxiv.org/abs/2410.06787
Author:
Andrenšek, Luka, Koloski, Boshko, Pelicon, Andraž, Lavrač, Nada, Pollak, Senja, Purver, Matthew
We investigate zero-shot cross-lingual news sentiment detection, aiming to develop robust sentiment classifiers that can be deployed across multiple languages without target-language training data. We introduce novel evaluation datasets in several le…
External link:
http://arxiv.org/abs/2409.20054
The advancement of Large Language Models (LLMs) for domain applications in fields such as materials science and engineering depends on the development of fine-tuning strategies that adapt models for specialized, technical capabilities. In this work…
External link:
http://arxiv.org/abs/2409.03444
This paper focuses on source-free domain adaptation for object detection in computer vision. This task is challenging and of great practical interest due to the cost of obtaining annotated datasets for every new domain. Recent research has proposed…
External link:
http://arxiv.org/abs/2407.07586
This paper investigates the performance of Contrastive Language-Image Pre-training (CLIP) when scaled down to limited computation budgets. We explore CLIP along three dimensions: data, architecture, and training strategies. With regard to data…
External link:
http://arxiv.org/abs/2404.08197
The rapid advancement of Large Language Models has been met with significant challenges in their training processes, primarily due to their considerable computational and memory demands. This research examines parallelization techniques developed to…
External link:
http://arxiv.org/abs/2405.15628
Compact neural networks are specially designed for applications on edge devices, offering faster inference speed yet modest performance. However, the training strategies of compact models are currently borrowed from those of conventional models, which ignores…
External link:
http://arxiv.org/abs/2404.11202
Cancer is one of the leading causes of death globally, and early diagnosis is crucial for patient survival. Deep learning algorithms have great potential for automatic cancer analysis. Artificial intelligence has achieved high performance in recognizing…
External link:
http://arxiv.org/abs/2404.09761
Author:
Vergallo, Roberto (roberto.vergallo@unisalento.it), Mainetti, Luca
Published in:
Future Internet, Sep 2024, Vol. 16, Issue 9, p. 334 (25 pp.)
Author:
Hu, Shengding, Tu, Yuge, Han, Xu, He, Chaoqun, Cui, Ganqu, Long, Xiang, Zheng, Zhi, Fang, Yewei, Huang, Yuxiang, Zhao, Weilin, Zhang, Xinrong, Thai, Zheng Leng, Zhang, Kaihuo, Wang, Chongyi, Yao, Yuan, Zhao, Chenyang, Zhou, Jie, Cai, Jie, Zhai, Zhongwu, Ding, Ning, Jia, Chao, Zeng, Guoyang, Li, Dahai, Liu, Zhiyuan, Sun, Maosong
The burgeoning interest in developing Large Language Models (LLMs) with up to a trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This scenario underscores…
External link:
http://arxiv.org/abs/2404.06395