Showing 1 - 10 of 212 for search: '"P. Bielikova"'
Author:
Srba, Ivan, Razuvayevskaya, Olesya, Leite, João A., Moro, Robert, Schlicht, Ipek Baris, Tonelli, Sara, García, Francisco Moreno, Lottmann, Santiago Barrio, Teyssou, Denis, Porcellini, Valentin, Scarton, Carolina, Bontcheva, Kalina, Bielikova, Maria
In the current era of social media and generative AI, the ability to automatically assess the credibility of online social media content is of tremendous importance. Credibility assessment is fundamentally based on aggregating credibility signals, which…
External link:
http://arxiv.org/abs/2410.21360
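The entry above frames credibility assessment as the aggregation of credibility signals. A minimal Python sketch of that idea follows; the signal names, values, and weights are illustrative assumptions, not the signal set or aggregation scheme used in the cited paper.

```python
from dataclasses import dataclass


@dataclass
class CredibilitySignal:
    """One signal extracted from a post (name, value, and weight are illustrative)."""
    name: str
    value: float   # normalised to [0, 1]; higher means more credible
    weight: float  # relative importance of the signal


def aggregate_credibility(signals: list[CredibilitySignal]) -> float:
    """Aggregate signals into a single [0, 1] score via a weighted average."""
    total_weight = sum(s.weight for s in signals)
    if total_weight == 0:
        raise ValueError("at least one signal with non-zero weight is required")
    return sum(s.value * s.weight for s in signals) / total_weight


if __name__ == "__main__":
    post_signals = [
        CredibilitySignal("source_reputation", value=0.8, weight=2.0),
        CredibilitySignal("claim_has_citation", value=1.0, weight=1.0),
        CredibilitySignal("emotional_language", value=0.3, weight=1.0),
    ]
    print(f"credibility score: {aggregate_credibility(post_signals):.2f}")
```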
Author:
Cegin, Jan, Pecher, Branislav, Simko, Jakub, Srba, Ivan, Bielikova, Maria, Brusilovsky, Peter
Generative large language models (LLMs) are increasingly used for data augmentation tasks, where text samples are paraphrased (or generated anew) and then used for classifier fine-tuning. Existing works on augmentation leverage the few-shot scenario…
External link:
http://arxiv.org/abs/2410.10756
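The abstract above describes the augmentation loop: paraphrase a few labelled texts with an LLM, then fine-tune a classifier on the expanded set. A self-contained sketch of that loop follows; `paraphrase_with_llm` is a stand-in stub, not a real LLM call.

```python
import random


def paraphrase_with_llm(text: str, n_variants: int = 3) -> list[str]:
    """Stub standing in for an LLM paraphrasing call, kept offline so the
    example runs as-is; a real pipeline would query a hosted or local model."""
    return [f"{text} (paraphrase {i + 1})" for i in range(n_variants)]


def augment_dataset(samples: list[tuple[str, str]], n_variants: int = 3,
                    seed: int = 0) -> list[tuple[str, str]]:
    """Expand (text, label) pairs with label-preserving paraphrases, then
    shuffle originals and paraphrases together before classifier fine-tuning."""
    augmented = list(samples)
    for text, label in samples:
        for variant in paraphrase_with_llm(text, n_variants):
            augmented.append((variant, label))
    random.Random(seed).shuffle(augmented)
    return augmented


if __name__ == "__main__":
    few_shot = [("The package arrived broken.", "negative"),
                ("Great battery life!", "positive")]
    for text, label in augment_dataset(few_shot, n_variants=2):
        print(f"{label:>8} | {text}")
```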
Prompt tuning is an efficient solution for training large language models (LLMs). However, current soft-prompt-based methods often sacrifice multi-task modularity, requiring the training process to be fully or partially repeated for each newly added task…
External link:
http://arxiv.org/abs/2408.01119
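For readers unfamiliar with soft prompts, the sketch below shows the basic mechanism the entry refers to: a small block of trainable embeddings prepended to a frozen model's input embeddings. It assumes a PyTorch backbone and does not reproduce the modular method proposed in the paper.

```python
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    """A block of trainable 'virtual token' embeddings prepended to the input
    embeddings of a frozen language model; only these parameters are trained."""

    def __init__(self, prompt_length: int, embedding_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_length, embedding_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, dim) -> (batch, prompt_len + seq_len, dim)
        batch_size = token_embeddings.size(0)
        expanded = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([expanded, token_embeddings], dim=1)


if __name__ == "__main__":
    soft_prompt = SoftPrompt(prompt_length=10, embedding_dim=32)
    dummy = torch.randn(4, 16, 32)   # stand-in for a frozen LM's input embeddings
    print(soft_prompt(dummy).shape)  # torch.Size([4, 26, 32])
    # An optimiser would receive only soft_prompt.parameters(); the LM stays frozen.
```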
Published in:
Findings of the Association for Computational Linguistics: EMNLP 2024
While fine-tuning of pre-trained language models generally helps to overcome the lack of labelled training samples, it also displays model performance instability. This instability mainly originates from randomness in initialisation or data shuffling…
External link:
http://arxiv.org/abs/2406.12471
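The instability mentioned above is usually quantified by repeating the same fine-tuning run under different random seeds and reporting the spread of scores. A toy sketch of that protocol follows; the `fine_tune_and_evaluate` stub only simulates a run.

```python
import random
import statistics


def fine_tune_and_evaluate(seed: int) -> float:
    """Stand-in for one fine-tuning run with a given seed, returning validation
    accuracy; the random offset only simulates seed-dependent outcomes."""
    rng = random.Random(seed)
    return 0.80 + rng.uniform(-0.05, 0.05)


if __name__ == "__main__":
    scores = [fine_tune_and_evaluate(seed) for seed in range(10)]
    print(f"mean accuracy : {statistics.mean(scores):.3f}")
    print(f"std deviation : {statistics.stdev(scores):.3f}")  # spread across seeds
    print(f"min-max gap   : {max(scores) - min(scores):.3f}")
```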
Published in:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
While learning with limited labelled data can improve performance when the labels are lacking, it is also sensitive to the effects of uncontrolled randomness introduced by so-called randomness factors (e.g., varying order of data). We propose a method…
External link:
http://arxiv.org/abs/2402.12817
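A common way to study a single randomness factor, in the spirit of the entry above, is to vary only that factor (e.g., data order) while holding the others fixed, then attribute the observed spread to it. The sketch below illustrates this with simulated runs; it is not the estimation method proposed in the paper.

```python
import random
import statistics


def run_experiment(init_seed: int, order_seed: int) -> float:
    """Stand-in for a training run whose result depends on two randomness
    factors: parameter initialisation and the order of training data."""
    rng = random.Random(init_seed * 1000 + order_seed)
    return 0.75 + rng.uniform(-0.04, 0.04)


def spread_from_data_order(init_seed: int, n_runs: int = 10) -> float:
    """Vary only the data-order seed while fixing initialisation, so the
    measured spread can be attributed to that single factor."""
    scores = [run_experiment(init_seed, order_seed) for order_seed in range(n_runs)]
    return statistics.stdev(scores)


if __name__ == "__main__":
    # Repeat the isolation under several fixed initialisations and average,
    # to reduce the influence of interactions between the two factors.
    spreads = [spread_from_data_order(init_seed) for init_seed in range(5)]
    print(f"std dev attributable to data order: {statistics.mean(spreads):.4f}")
```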
When solving NLP tasks with limited labelled data, researchers can either use a general large language model without further updates, or use a small number of labelled examples to tune a specialised smaller model. In this work, we address the research…
External link:
http://arxiv.org/abs/2402.12819
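The two options named above correspond to two workflows: put the labelled examples into the prompt of a general LLM (in-context learning), or spend them on tuning a smaller specialised model. The sketch below contrasts the two; the fine-tuning path is only a stub.

```python
def build_icl_prompt(labelled_examples: list[tuple[str, str]], query: str) -> str:
    """Option 1: keep a general LLM unchanged and place the labelled examples
    directly into its prompt (in-context learning); no parameters are updated."""
    blocks = [f"Text: {text}\nLabel: {label}" for text, label in labelled_examples]
    blocks.append(f"Text: {query}\nLabel:")
    return "\n\n".join(blocks)


def fine_tune_small_model(labelled_examples: list[tuple[str, str]]) -> None:
    """Option 2 (stub): spend the same labelled examples on tuning a smaller,
    specialised model (e.g., a BERT-sized classifier); training loop omitted."""
    raise NotImplementedError("training loop intentionally left out of this sketch")


if __name__ == "__main__":
    examples = [("Loved the ending.", "positive"),
                ("A waste of two hours.", "negative")]
    print(build_icl_prompt(examples, "Surprisingly moving film."))
```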
In few-shot learning, such as meta-learning, few-shot fine-tuning or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success. Although a large number of sample selection strategies exist…
External link:
http://arxiv.org/abs/2402.03038
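As an example of a sample selection strategy of the kind surveyed above, the sketch below greedily picks mutually distant examples from precomputed embeddings (farthest-point selection); this particular strategy is an illustrative choice, not one singled out by the paper.

```python
import math


def select_diverse_samples(embeddings: list[list[float]], k: int) -> list[int]:
    """Greedy farthest-point selection: repeatedly add the sample farthest from
    everything already chosen, favouring a mutually dissimilar few-shot set."""
    if not embeddings or k <= 0:
        return []
    chosen = [0]  # start from an arbitrary sample
    while len(chosen) < min(k, len(embeddings)):
        best_idx, best_dist = -1, -1.0
        for i, emb in enumerate(embeddings):
            if i in chosen:
                continue
            # distance to the nearest already-chosen sample
            nearest = min(math.dist(emb, embeddings[j]) for j in chosen)
            if nearest > best_dist:
                best_idx, best_dist = i, nearest
        chosen.append(best_idx)
    return chosen


if __name__ == "__main__":
    toy_embeddings = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9], [2.5, 2.4]]
    print(select_diverse_samples(toy_embeddings, k=3))  # -> [0, 3, 4]
```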
Author:
Macko, Dominik, Moro, Robert, Uchendu, Adaku, Srba, Ivan, Lucas, Jason Samuel, Yamashita, Michiharu, Tripto, Nafis Irtiza, Lee, Dongwon, Simko, Jakub, Bielikova, Maria
Published in:
Findings of the Association for Computational Linguistics: EMNLP 2024
The high-quality text generation capability of recent Large Language Models (LLMs) causes concerns about their misuse (e.g., in massive generation/spread of disinformation). Machine-generated text (MGT) detection is important to cope with such threats. However…
External link:
http://arxiv.org/abs/2401.07867
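MGT detection, as mentioned above, is typically cast as binary classification (human vs. machine). The toy sketch below uses a nearest-centroid rule over two hand-crafted stylometric features purely for illustration; real detectors, including those studied in the paper, usually fine-tune pre-trained language models.

```python
import math


def extract_features(text: str) -> list[float]:
    """Two toy stylometric features; real detectors typically fine-tune a
    pre-trained language model rather than rely on hand-crafted statistics."""
    words = text.split() or [""]
    avg_word_len = sum(len(w) for w in words) / len(words)
    type_token_ratio = len(set(words)) / len(words)
    return [avg_word_len, type_token_ratio]


def centroid(vectors: list[list[float]]) -> list[float]:
    return [sum(column) / len(vectors) for column in zip(*vectors)]


def train_detector(human_texts: list[str], machine_texts: list[str]):
    """Binary classification framing: remember one feature centroid per class."""
    return (centroid([extract_features(t) for t in human_texts]),
            centroid([extract_features(t) for t in machine_texts]))


def is_machine_generated(text: str, detector) -> bool:
    """Label a text by whichever class centroid its features are closer to."""
    human_c, machine_c = detector
    feats = extract_features(text)
    return math.dist(feats, machine_c) < math.dist(feats, human_c)


if __name__ == "__main__":
    detector = train_detector(
        human_texts=["gonna grab coffee, text me later!!"],
        machine_texts=["Certainly! Here is a comprehensive overview of the topic."],
    )
    print(is_machine_generated("Here is a detailed and comprehensive summary.", detector))
```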
Author:
Cegin, Jan, Pecher, Branislav, Simko, Jakub, Srba, Ivan, Bielikova, Maria, Brusilovsky, Peter
Published in:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024
The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess…
External link:
http://arxiv.org/abs/2401.06643
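One open question hinted at above is how varied the LLM paraphrases actually are. A simple way to probe this is a distinct-n score over the paraphrase set, sketched below; the metric choice is an illustrative assumption, not the paper's evaluation.

```python
def distinct_n(texts: list[str], n: int = 2) -> float:
    """Distinct-n: unique n-grams divided by total n-grams across the texts,
    a rough proxy for how lexically varied a set of paraphrases is."""
    ngrams = []
    for text in texts:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0


if __name__ == "__main__":
    paraphrases = [
        "the package arrived completely broken",
        "my parcel showed up damaged",
        "the package arrived broken and late",
    ]
    print(f"distinct-2 over the paraphrase set: {distinct_n(paraphrases):.2f}")
```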
Published in:
ACM Computing Surveys, Volume 57, Issue 1, 2024
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-learning or few-shot learning, aims to effectively train a model using only a small number of labelled samples. However, these approaches have been observed…
External link:
http://arxiv.org/abs/2312.01082