Showing 1 - 10 of 2,353 results for the search '"Assenmacher AT"'
Author:
Ma, Bolei, Yoztyurk, Berk, Haensch, Anna-Carolina, Wang, Xinpeng, Herklotz, Markus, Kreuter, Frauke, Plank, Barbara, Assenmacher, Matthias
In recent research, large language models (LLMs) have been increasingly used to investigate public opinions. This study examines the algorithmic fidelity of LLMs, i.e., the ability to replicate the socio-cultural context and nuanced opinions of human …
External link:
http://arxiv.org/abs/2412.13169
Author:
Arias, Esteban Garces, Blocher, Hannah, Rodemann, Julian, Li, Meimingwei, Heumann, Christian, Aßenmacher, Matthias
Open-ended text generation has become a prominent task in natural language processing due to the rise of powerful (large) language models. However, evaluating the quality of these models and the employed decoding strategies remains challenging because …
External link:
http://arxiv.org/abs/2410.18653
We present a novel approach for enhancing diversity and control in data annotation tasks by personalizing large language models (LLMs). We investigate the impact of injecting diverse persona descriptions into LLM prompts across two studies, exploring … (see the illustrative persona-prompt sketch after this entry).
External link:
http://arxiv.org/abs/2410.11745
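The entry above mentions injecting persona descriptions into LLM prompts for annotation. The sketch below is a minimal illustration of that general idea, assuming a generic chat-completion callable `query_llm`; the personas, the prompt wording, and `query_llm` are hypothetical placeholders, not code or data from the paper.

```python
# Illustrative sketch only: prepend a persona description to a fixed annotation
# instruction so the same item is labeled from several simulated perspectives.
PERSONAS = [
    "You are a 67-year-old retired teacher from a rural area.",
    "You are a 24-year-old software engineer living in a large city.",
]

def build_annotation_prompt(persona: str, text: str) -> str:
    """Combine a persona description with the shared annotation instruction."""
    return (
        f"{persona}\n"
        "Label the following statement as HATEFUL or NOT_HATEFUL. "
        "Answer with a single label.\n\n"
        f"Statement: {text}"
    )

def annotate_with_personas(text: str, query_llm) -> dict[str, str]:
    """Collect one label per persona; disagreement across personas is one way
    to surface annotation diversity."""
    return {p: query_llm(build_annotation_prompt(p, text)) for p in PERSONAS}
```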
Decoding strategies for generative large language models (LLMs) are a critical but often underexplored aspect of text generation tasks. Guided by specific hyperparameters, these strategies aim to transform the raw probability distributions produced by … (see the illustrative sampling sketch after this entry).
External link:
http://arxiv.org/abs/2410.06097
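The entry above concerns hyperparameter-guided decoding strategies that reshape a model's raw next-token distribution. Below is a minimal, generic sketch of two such knobs, temperature scaling and nucleus (top-p) truncation, using NumPy; it illustrates the standard technique and is not code from the linked paper.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0,
                      top_p: float = 0.9,
                      rng: np.random.Generator | None = None) -> int:
    """Temperature-scale the logits, truncate to the smallest token set covering
    top_p probability mass (nucleus sampling), then sample one token id."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]              # most likely tokens first
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]
    kept = probs[keep] / probs[keep].sum()       # renormalize over the nucleus
    return int(rng.choice(keep, p=kept))
```

Lower temperatures and smaller top_p values concentrate probability on fewer tokens; comparing such settings is exactly the kind of trade-off studies like this one examine.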
Author:
Wuttke, Alexander, Aßenmacher, Matthias, Klamm, Christopher, Lang, Max M., Würschinger, Quirin, Kreuter, Frauke
Traditional methods for eliciting people's opinions face a trade-off between depth and scale: structured surveys enable large-scale data collection but limit respondents' ability to express unanticipated thoughts in their own words, while conversational …
External link:
http://arxiv.org/abs/2410.01824
To reduce the need for human annotations, large language models (LLMs) have been proposed as judges of the quality of other candidate models. LLM judges are typically evaluated by measuring the correlation with human judgments on generation tasks such as … (see the illustrative correlation check after this entry).
External link:
http://arxiv.org/abs/2409.04168
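The entry above notes that LLM judges are usually evaluated by how well their scores correlate with human judgments. A minimal, hedged example of such a check using Spearman rank correlation follows; the scores are invented numbers, not data from the paper.

```python
# Illustrative only: correlate an LLM judge's quality scores with human ratings
# collected on the same set of generated texts.
from scipy.stats import spearmanr

human_scores = [4, 2, 5, 3, 1, 4]   # human quality ratings (1-5) per generation
judge_scores = [5, 2, 4, 3, 2, 4]   # scores assigned by the candidate LLM judge

rho, p_value = spearmanr(human_scores, judge_scores)
print(f"Spearman correlation between judge and humans: {rho:.2f} (p = {p_value:.3f})")
```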
In recent years, large language models (LLMs) have emerged as powerful tools with potential applications in various fields, including software engineering. Within the scope of this research, we evaluate five different state-of-the-art LLMs - Bard, Bing …
External link:
http://arxiv.org/abs/2409.04164
Author:
Arias, Esteban Garces, Rodemann, Julian, Li, Meimingwei, Heumann, Christian, Aßenmacher, Matthias
Decoding from the output distributions of large language models to produce high-quality text is a complex challenge in language modeling. Various approaches, such as beam search, sampling with temperature, $k$-sampling, nucleus $p$-sampling, typical … (see the illustrative top-k sampling sketch after this entry).
External link:
http://arxiv.org/abs/2407.18698
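The entry above lists several sampling-based decoding strategies. As one concrete example, here is a generic top-k sampling sketch in NumPy; it illustrates the textbook technique and is not the method proposed in the linked paper.

```python
import numpy as np

def top_k_sample(logits: np.ndarray, k: int = 50,
                 rng: np.random.Generator | None = None) -> int:
    """Keep only the k highest-scoring tokens, renormalize, and sample one id."""
    rng = rng or np.random.default_rng()
    top = np.argsort(logits)[-k:]                    # ids of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())  # softmax over the kept logits
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```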
The proliferation of online hate has increased alongside the rise in social media usage. In response, there has also been significant advancement in automated tools aimed at identifying harmful text content using …
External link:
http://arxiv.org/abs/2406.04892
Author:
Yu, Zehui, Sen, Indira, Assenmacher, Dennis, Samory, Mattia, Fröhling, Leon, Dahn, Christina, Nozza, Debora, Wagner, Claudia
Machine learning (ML)-based content moderation tools are essential to keep online spaces free from hateful communication. Yet, ML tools can only be as capable as the quality of the data they are trained on allows them to be. While there is increasing evidence …
External link:
http://arxiv.org/abs/2405.08562