Showing 1 - 10 of 268 for search: '"Roberts, Jesse"'
This study explores the potential of large language models (LLMs) for identifying and examining intertextual relationships within biblical, Koine Greek texts. By evaluating the performance of LLMs on various intertextuality scenarios, the study demonstrates…
External link:
http://arxiv.org/abs/2409.01882
Language models are known to absorb biases from their training data, leading to predictions driven by statistical regularities rather than semantic relevance. We investigate the impact of these biases on answer choice preferences in the Massive Multitask Language Understanding (MMLU) benchmark…
External link:
http://arxiv.org/abs/2408.08651
The internet offers tremendous access to services, social connections, and needed products. However, to those without sufficient experience, engaging with businesses and friends across the internet can be daunting due to the ever-present danger of scams…
External link:
http://arxiv.org/abs/2407.15695
This paper evaluates whether large language models (LLMs) exhibit cognitive fan effects, similar to those discovered by Anderson in humans, after being pre-trained on human textual data. We conduct two sets of in-context recall experiments designed to…
External link:
http://arxiv.org/abs/2407.06349
Cloze testing is a common method for measuring the behavior of large language models on a number of benchmark tasks. Using the MMLU dataset, we show that the base-rate probability (BRP) differences across answer tokens are significant and affect task performance…
External link:
http://arxiv.org/abs/2406.11634
In this paper, we evaluate whether LLMs learn to make human-like preference judgements in strategic scenarios, as compared with known empirical results. Solar and Mistral are shown to exhibit stable value-based preference consistent with humans and…
External link:
http://arxiv.org/abs/2404.08710
Author:
Roberts, Jesse
In this paper, we bridge work in rock climbing route generation and grading into the computational creativity community. We provide the necessary background to situate that literature and demonstrate the domain's intellectual merit in the computational…
External link:
http://arxiv.org/abs/2311.02211
The recent proliferation of research into transformer-based natural language processing has led to a number of studies which attempt to detect the presence of human-like cognitive behavior in the models. We contend that, as is true of human psychology…
External link:
http://arxiv.org/abs/2308.08032
Author:
Roberts, Jesse
In this article we prove that the general transformer neural model undergirding modern large language models (LLMs) is Turing complete under reasonable assumptions. This is the first work to directly address the Turing completeness of the underlying…
External link:
http://arxiv.org/abs/2305.17026
Author:
Roberts, Jesse D., Jr.
Published in:
In Nitric Oxide, 1 June 2024, 147:13-25