Showing 1 - 10 of 15
for search: '"Ruzzetti, Elena Sofia"'
Author:
Miranda, Michele, Ruzzetti, Elena Sofia, Santilli, Andrea, Zanzotto, Fabio Massimo, Bratières, Sébastien, Rodolà, Emanuele
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains. However, their reliance on massive internet-sourced datasets for training brings notable privacy issues…
External link:
http://arxiv.org/abs/2408.05212
Author:
Venditti, Davide, Ruzzetti, Elena Sofia, Xompero, Giancarlo A., Giannone, Cristina, Favalli, Andrea, Romagnoli, Raniero, Zanzotto, Fabio Massimo
Large language models (LLMs) require a significant redesign in solutions to preserve privacy in data-intensive applications due to their text-generation capabilities. Indeed, LLMs tend to memorize and emit private information when maliciously prompted…
External link:
http://arxiv.org/abs/2406.18221
Author:
Ranaldi, Federico, Ruzzetti, Elena Sofia, Onorati, Dario, Ranaldi, Leonardo, Giannone, Cristina, Favalli, Andrea, Romagnoli, Raniero, Zanzotto, Fabio Massimo
Understanding textual descriptions to generate code seems to be an achieved capability of instruction-following Large Language Models (LLMs) in a zero-shot scenario. However, there is a severe possibility that this translation ability may be influenced…
External link:
http://arxiv.org/abs/2402.08100
Author:
Ranaldi, Leonardo, Pucci, Giulia, Ranaldi, Federico, Ruzzetti, Elena Sofia, Zanzotto, Fabio Massimo
Published in:
2024.findings-naacl.78
Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner. Although they are achieving significant…
External link:
http://arxiv.org/abs/2311.08097
Author:
Ranaldi, Leonardo, Ruzzetti, Elena Sofia, Venditti, Davide, Onorati, Dario, Zanzotto, Fabio Massimo
Published in:
2024.starsem-1.30
Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs)…
External link:
http://arxiv.org/abs/2305.13862
Published in:
2023.ranlp-1.103
Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, possibly generalized, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance…
External link:
http://arxiv.org/abs/2305.04673
Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages
Author:
Ruzzetti, Elena Sofia, Ranaldi, Federico, Logozzo, Felicia, Mastromattei, Michele, Ranaldi, Leonardo, Zanzotto, Fabio Massimo
Published in:
Findings of the Association for Computational Linguistics: EMNLP 2023, Association for Computational Linguistics, 2023, pages 14447 - 14461
The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language. In this paper, we propose a novel standpoint to investigate the above issue: using typological…
External link:
http://arxiv.org/abs/2305.02215
Author:
Ranaldi, Leonardo, Nourbakhsh, Aria, Patrizi, Arianna, Ruzzetti, Elena Sofia, Onorati, Dario, Fallucchi, Francesca, Zanzotto, Fabio Massimo
Published in:
2023.ranlp-1.102
Pre-trained Transformers are challenging human performances in many NLP tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained Natural Language Understanding…
External link:
http://arxiv.org/abs/2201.05613
Author:
Ruzzetti, Elena Sofia, Ranaldi, Leonardo, Mastromattei, Michele, Fallucchi, Francesca, Zanzotto, Fabio Massimo
Published in:
Findings of the Association for Computational Linguistics: ACL 2022
Word embeddings are powerful dictionaries, which may easily capture language variations. However, these dictionaries fail to give sense to rare words, which are surprisingly often covered by traditional dictionaries. In this paper, we propose to use…
External link:
http://arxiv.org/abs/2109.11763
Academic article
This result cannot be displayed to users who are not logged in. Please log in to view this result.