Showing 1 - 3 of 3
for search: '"Hauzenberger, Lukas"'
Author:
Paischer, Fabian, Hauzenberger, Lukas, Schmied, Thomas, Alkin, Benedikt, Deisenroth, Marc Peter, Hochreiter, Sepp
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned on a downstream task for a specific application. The most successful and most commonly used fine-tuning method is to update the pre-trained weights via a low-rank ada…
External link:
http://arxiv.org/abs/2410.07170
Societal biases are reflected in large pre-trained language models and their fine-tuned versions on downstream tasks. Common in-processing bias mitigation approaches, such as adversarial training and mutual information removal, introduce additional o…
External link:
http://arxiv.org/abs/2205.15171
Author:
Hauzenberger, Lukas
In recent years, large language models have achieved state-of-the-art performance on a wide variety of Natural Language Processing tasks. These capabilities, however, come with some negative consequences, namely the existence of various societal biases…
External link:
https://explore.openaire.eu/search/publication?articleId=od______3361::3244132a4ab2b4c23989132f376329d0