Showing 1 - 5 of 5 for search: '"Shaib, Chantal"'
Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models, beyond frequent n-grams. Specifically, we define syntactic templates…
External link:
http://arxiv.org/abs/2407.00211
The diversity across outputs generated by large language models shapes the perception of their quality and utility. Prompt leaks, templated answer structure, and canned responses across different interactions are readily noticed by people, but there…
External link:
http://arxiv.org/abs/2403.00553
Modern instruction-tuned models have become highly capable in text generation tasks such as summarization, and are expected to be released at a steady pace. In practice, one may now wish to choose confidently, but with minimal effort, the best performing…
External link:
http://arxiv.org/abs/2402.18756
Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs…
External link:
http://arxiv.org/abs/2306.11270
Author:
Shaib, Chantal; Li, Millicent L.; Joseph, Sebastian; Marshall, Iain J.; Li, Junyi Jessy; Wallace, Byron C.
Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings. However, it is unclear if such models are similarly capable in more specialized, high-stakes domains…
External link:
http://arxiv.org/abs/2305.06299