Showing 1 - 10 of 1,613 for search: '"A. Ranaldi"'
The cognitive essence of humans is deeply intertwined with the concept of animacy, which plays an essential role in shaping their memory, vision, and multi-layered language understanding. Although animacy appears in language via nuanced constraints…
External link:
http://arxiv.org/abs/2408.06332
Author:
Ranaldi, Leonardo, Freitas, Andrè
The alignment of reasoning abilities between smaller and larger Language Models is largely conducted via Supervised Fine-Tuning (SFT) using demonstrations generated from robust Large Language Models (LLMs). Although these approaches deliver more…
External link:
http://arxiv.org/abs/2405.00402
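
A minimal sketch of the recipe this abstract describes: supervised fine-tuning of a smaller student model on reasoning demonstrations written by a stronger teacher. The model name, demonstration data, and hyperparameters below are illustrative assumptions, not the paper's actual setup.

# Sketch: SFT of a small causal LM on teacher-generated demonstrations.
# All names, data, and hyperparameters are hypothetical stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # stand-in student model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical demonstrations: (question, teacher-written rationale + answer).
demos = [
    ("If I have 3 apples and eat 1, how many remain?",
     "Start with 3 apples. Eating 1 leaves 3 - 1 = 2. The answer is 2."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for question, rationale in demos:
    text = f"Question: {question}\nAnswer: {rationale}{tokenizer.eos_token}"
    batch = tokenizer(text, return_tensors="pt")
    # Causal-LM cross-entropy over the full sequence, kept simple here;
    # a real setup would usually mask the loss on the prompt tokens.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()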
Author:
Ranaldi, Federico, Ruzzetti, Elena Sofia, Onorati, Dario, Ranaldi, Leonardo, Giannone, Cristina, Favalli, Andrea, Romagnoli, Raniero, Zanzotto, Fabio Massimo
Understanding a textual description to generate code seems to be an achieved capability of instruction-following Large Language Models (LLMs) in a zero-shot scenario. However, this translation ability may be influenced…
External link:
http://arxiv.org/abs/2402.08100
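
The concern raised here is data contamination: benchmark items leaking into pre-training data. A crude way to screen for it is n-gram overlap between a test item and a corpus chunk; the sketch below is an illustrative heuristic with an assumed threshold, not the paper's methodology.

# Sketch: crude n-gram overlap test for data contamination.
def ngrams(text, n=8):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def looks_contaminated(test_item, corpus_chunk, n=8, threshold=0.5):
    # Flag the item if a large share of its n-grams appear verbatim
    # in the corpus chunk. Threshold is an arbitrary assumption.
    test_set = ngrams(test_item, n)
    if not test_set:
        return False
    overlap = len(test_set & ngrams(corpus_chunk, n)) / len(test_set)
    return overlap >= threshold

corpus = "SELECT name FROM users WHERE age > 30 ORDER BY name"  # hypothetical
item = "SELECT name FROM users WHERE age > 30"
print(looks_contaminated(item, corpus, n=4))  # True: verbatim overlap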
Author:
Ranaldi, Leonardo, Pucci, Giulia, Ranaldi, Federico, Ruzzetti, Elena Sofia, Zanzotto, Fabio Massimo
Published in:
2024.findings-naacl.78
Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner. Although they are achieving significant…
External link:
http://arxiv.org/abs/2311.08097
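
For readers unfamiliar with the CoT elicitation this record refers to, a minimal zero-shot sketch follows; the trigger phrase is the standard one from the CoT literature, and the example question is illustrative.

# Sketch: zero-shot Chain-of-Thought prompting. The prompt would be sent
# to whatever LLM API is available; no specific client is assumed here.
def cot_prompt(question: str) -> str:
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

print(cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?"))
# The model is expected to produce intermediate steps before the answer,
# e.g. "Speed = distance / time = 60 / 1.5 = 40 km/h."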
Author:
Ranaldi, Leonardo, Pucci, Giulia
Large Language Models have been demonstrating the ability to solve complex tasks by delivering answers that are positively evaluated by humans, due in part to the intensive use of human feedback that refines responses. However, the suggestibility…
External link:
http://arxiv.org/abs/2311.09410
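
The suggestibility this record points at is often probed by checking whether a model flips its answer under unfounded user pushback. A minimal sketch of such a probe, assuming a hypothetical ask_model callable for any chat-completion backend:

# Sketch: a minimal sycophancy probe. `ask_model` is a hypothetical
# stand-in for any text-in/text-out LLM call; not the paper's protocol.
def build_followup(question, first_answer):
    return (
        f"Q: {question}\n"
        f"A: {first_answer}\n"
        "User: I don't think that's right. Are you sure?\n"
        "A:"
    )

def is_sycophantic(question, ask_model):
    first = ask_model(f"Q: {question}\nA:")
    second = ask_model(build_followup(question, first))
    # A changed answer under content-free pressure suggests suggestibility.
    return first.strip() != second.strip()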
Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages
Author:
Ruzzetti, Elena Sofia, Ranaldi, Federico, Logozzo, Felicia, Mastromattei, Michele, Ranaldi, Leonardo, Zanzotto, Fabio Massimo
Published in:
Findings of the Association for Computational Linguistics: EMNLP 2023, Association for Computational Linguistics, 2023, pages 14447 - 14461
The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language. In this paper, we propose a novel standpoint to investigate the above issue: using typological…
External link:
http://arxiv.org/abs/2305.02215
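
A building block for this kind of cross-model analysis is extracting layer-wise representations from monolingual BERTs; the sketch below shows only that extraction step, with illustrative checkpoints and sentences. The paper's actual typological comparison would require parallel data or a shared probe on top of these features.

# Sketch: layer-wise [CLS] features from two monolingual BERTs.
# Model names and sentences are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

def layer_features(model_name, sentence):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    with torch.no_grad():
        out = model(**tok(sentence, return_tensors="pt"))
    # hidden_states = embedding layer + one tensor per encoder layer;
    # keep the [CLS] vector of each.
    return torch.stack([h[0, 0] for h in out.hidden_states])

italian = layer_features("dbmdz/bert-base-italian-cased", "Il gatto dorme.")
english = layer_features("bert-base-cased", "The cat sleeps.")
print(italian.shape, english.shape)  # (13, 768) each: embeddings + 12 layers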
Instruction-tuned Large Language Models (It-LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively…
External link:
http://arxiv.org/abs/2309.12481
The language ability of Large Language Models (LLMs) is often unbalanced towards English because of the imbalance in the distribution of the pre-training data. This disparity carries over into further fine-tuning and affects the cross-lingual abilities…
External link:
http://arxiv.org/abs/2308.14186
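
One crude proxy for the pre-training imbalance mentioned here is comparing a model's perplexity on parallel English and non-English text; a skewed model scores the English side markedly lower. The checkpoint and sentence pair below are illustrative assumptions.

# Sketch: per-language perplexity as a proxy for English skew.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # illustrative model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text):
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("The weather is nice today."))  # English: expected lower
print(perplexity("Il tempo è bello oggi."))      # Italian: expected higher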
Author:
Ranaldi, Leonardo, Ruzzetti, Elena Sofia, Venditti, Davide, Onorati, Dario, Zanzotto, Fabio Massimo
Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs)…
External link:
http://arxiv.org/abs/2305.13862
Published in:
2023.ranlp-1.103
Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance…
External link:
http://arxiv.org/abs/2305.04673
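
A common way to surface the memorization this record studies is a masked-LM probe: mask one token of a candidate training sentence and check whether BERT restores it exactly. The sentence and checkpoint below are illustrative, not the paper's evaluation set.

# Sketch: a masked-LM memorization probe with an illustrative sentence.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

sentence = "The Eiffel Tower is located in Paris."
masked = sentence.replace("Paris", tok.mask_token)

enc = tok(masked, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
predicted = tok.decode([logits[0, pos].argmax().item()]).strip()
print(predicted == "Paris")  # True suggests the example/fact is stored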