Showing 1 - 10
of 1,118
for search: '"Nozza P"'
Author:
Yu, Zehui, Sen, Indira, Assenmacher, Dennis, Samory, Mattia, Fröhling, Leon, Dahn, Christina, Nozza, Debora, Wagner, Claudia
Machine learning (ML)-based content moderation tools are essential to keep online spaces free from hateful communication. Yet, ML tools can only be as capable as the quality of their training data allows them to be. While there is increasing evidence …
External link:
http://arxiv.org/abs/2405.08562
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, a …
External link:
http://arxiv.org/abs/2402.17389
Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness …
External link:
http://arxiv.org/abs/2310.12127
Recent computational approaches for combating online hate speech involve the automatic generation of counter narratives by adapting Pretrained Transformer-based Language Models (PLMs) with human-curated data. This process, however, can produce in-domain …
External link:
http://arxiv.org/abs/2309.02311
Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, since they are trained on different datasets, performance varies widely across tasks between …
External link:
http://arxiv.org/abs/2307.12973
As 3rd-person pronoun usage shifts to include novel forms, e.g., neopronouns, we need more research on identity-inclusive NLP. Exclusion is particularly harmful in one of the most popular NLP applications, machine translation (MT). Wrong pronoun translation …
External link:
http://arxiv.org/abs/2305.16051
Author:
Touileb, Samia, Nozza, Debora
Scandinavian countries are perceived as role models when it comes to gender equality. With the advent of pre-trained language models and their widespread usage, we investigate to what extent gender-based harmful and toxic content exists in selected Scandinavian …
External link:
http://arxiv.org/abs/2211.11678
Author:
Bianchi, Federico, Kalluri, Pratyusha, Durmus, Esin, Ladhak, Faisal, Cheng, Myra, Nozza, Debora, Hashimoto, Tatsunori, Jurafsky, Dan, Zou, James, Caliskan, Aylin
Machine learning models that convert user-written text descriptions into images are now widely available online and used by millions of users to generate millions of images a day. We investigate the potential for these models to amplify dangerous and …
External link:
http://arxiv.org/abs/2211.03759
Author:
Valentina Mazzotta, Silvia Nozza, Simone Lanini, Davide Moschese, Alessandro Tavelli, Roberto Rossotti, Francesco Maria Fusco, Lorenzo Biasioli, Giulia Matusali, Angelo Roberto Raccagni, Davide Mileto, Chiara Maci, Giuseppe Lapadula, Antonio Di Biagio, Luca Pipitò, Enrica Tamburrini, Antonella d’Arminio Monforte, Antonella Castagna, Andrea Antinori, Spinello Antinori, Chiara Baiguera, Gianmaria Baldin, Matteo Bassetti, Paolo Bonfanti, Giorgia Brucci, Elena Bruzzesi, Caterina Candela, Antonio Cascio, Antonella d'Arminio Monforte, Andrea Delama, Gabriella D'Ettorre, Damiano Farinacci, Maria Rita Gismondo, Andrea Gori, Massimiliano Lanzafame, Miriam Lichtner, Giulia Mancarella, Alessandro Mancon, Giulia Marchetti, Emanuele Nicastri, Alessandro Pandolfo, Francesca Panzo, Stefania Piconi, Carmela Pinnetti, Alessandro Raimondi, Marco Ridolfi, Giuliano Rizzardini, Alessandra Rodanò, Margherita Sambo, Vincenzo Sangiovanni, Nadia Sangiovanni, Daniele Tesoro, Serena Vita
Published in:
EBioMedicine, Vol 107, Pp 105289 (2024)
Summary: Background: Severe and prolonged mpox courses have been described during the 2022–2023 outbreak. Identifying predictors of severe evolution is crucial for improving management and therapeutic strategies. We explored the predictors of mpox …
External link:
https://doaj.org/article/ee0528c861034ad8aa54758b34334546
Hate speech is a global phenomenon, but most hate speech datasets so far focus on English-language content. This hinders the development of more effective hate speech detection models in hundreds of languages spoken by billions across the world. More …
External link:
http://arxiv.org/abs/2210.11359