Showing 1 - 10 of 154 for search: '"Rosset, Sophie"'
Published in:
LREC-COLING 2024, May 2024, Turin, Italy
This study is part of the debate on the efficiency of large versus small language models for text classification by prompting. We assess the performance of small language models in zero-shot text classification, challenging the prevailing dominance of …
External link:
http://arxiv.org/abs/2404.11122
Published in:
The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 2024, Torino, Italy
Intent classification and slot-filling are essential tasks of Spoken Language Understanding (SLU). In most SLU systems, those tasks are realized by independent modules. For about fifteen years, models achieving both of them jointly and exploiting their …
External link:
http://arxiv.org/abs/2403.19727
Within the current trend of Pretrained Language Models (PLM), more and more criticisms emerge about the ethical and ecological impact of such models. In this article, considering these critical remarks, we propose to focus on smaller models, such as com…
External link:
http://arxiv.org/abs/2403.18338
Supervised deep learning-based approaches have been applied to task-oriented dialog and have proven to be effective for limited domain and language applications when a sufficient number of training examples are available. In practice, these approache…
External link:
http://arxiv.org/abs/2207.09157
In the last five years, the rise of the self-attentional Transformer-based architectures led to state-of-the-art performances over many natural language tasks. Although these approaches are increasingly popular, they require large amounts of data and …
External link:
http://arxiv.org/abs/2207.09152
For many tasks, state-of-the-art results have been achieved with Transformer-based architectures, resulting in a paradigmatic shift in practices from the use of task-specific architectures to the fine-tuning of pre-trained language models. The ongoin…
External link:
http://arxiv.org/abs/2207.09150
We propose to address online speaker diarization as a combination of incremental clustering and local diarization applied to a rolling buffer updated every 500ms. Every single step of the proposed pipeline is designed to take full advantage of the st…
External link:
http://arxiv.org/abs/2109.06483
On-the-job learning consists in continuously learning while being used in production, in an open environment, meaning that the system has to deal on its own with situations and elements never seen before. The kind of systems that seem to be especiall…
External link:
http://arxiv.org/abs/2102.13589
This paper describes the participation of the LIMSI UPV team in SemEval-2020 Task 9: Sentiment Analysis for Code-Mixed Social Media Text. The proposed approach competed in the SentiMix Hindi-English subtask, which addresses the problem of predicting the senti…
External link:
http://arxiv.org/abs/2008.13173
Despite the growing popularity of metric learning approaches, very little work has attempted to perform a fair comparison of these techniques for speaker verification. We try to fill this gap and compare several metric learning loss functions in a sy…
External link:
http://arxiv.org/abs/2003.14021