Showing 1 - 6 of 6
for search: '"Bean, Andrew M."'
Training Large Language Models (LLMs) incurs substantial data-related costs, motivating the development of data-efficient training methods through optimised data ordering and selection. Human-inspired learning strategies, such as curriculum learning, …
External link:
http://arxiv.org/abs/2408.07888
Author:
Bean, Andrew M., Hellsten, Simi, Mayne, Harry, Magomere, Jabez, Chi, Ethan A., Chi, Ryan, Hale, Scott A., Kirk, Hannah Rose
In this paper, we present the LingOly benchmark, a novel benchmark for advanced reasoning abilities in large language models. Using challenging Linguistic Olympiad puzzles, we evaluate (i) capabilities for in-context identification and generalisation …
External link:
http://arxiv.org/abs/2406.06196
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especially for highly subjective human pre…
External link:
http://arxiv.org/abs/2310.07629
With the rapid development of new large language models (LLMs), each claiming to surpass previous models, an overall picture of medical LLM research can be elusive. To address this challenge, we benchmark a range of top LLMs and identify consistent p…
External link:
http://arxiv.org/abs/2310.07225
Large Language Models (LLMs), now used daily by millions, can encode societal biases, exposing their users to representational harms. A large body of scholarship on LLM bias exists but it predominantly adopts a Western-centric frame and attends compa…
External link:
http://arxiv.org/abs/2309.08573
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view it.