Showing 1 - 10 of 124 for search: '"JOHANSSON, MOA"'
Previous interpretations of language models (LMs) miss important distinctions in how these models process factual information. For example, given the query "Astrid Lindgren was born in" with the corresponding completion "Sweden", no difference is made …
External link:
http://arxiv.org/abs/2410.14405
A central but unresolved aspect of problem-solving in AI is the capability to introduce and use abstractions, something humans excel at. Work in cognitive science has demonstrated that humans tend towards higher levels of abstraction when engaged in …
External link:
http://arxiv.org/abs/2409.20120
Author:
Bruinsma, Bastiaan, Fredén, Annika, Hansson, Kajsa, Johansson, Moa, Kisić-Merino, Pasko, Saynova, Denitsa
This paper examines the development of the Artificial Intelligence (AI) meta-debate in Sweden before and after the release of ChatGPT. From the perspective of agenda-setting theory, we propose that it is an elite outside of party politics that is leading …
External link:
http://arxiv.org/abs/2409.16946
The emergence of mathematical concepts, such as number systems, is an understudied area in AI for mathematics and reasoning. It has previously been shown (Carlsson et al. 2021) that by using reinforcement learning (RL), agents can derive simple approximate …
External link:
http://arxiv.org/abs/2409.07170
We investigate how combinations of Large Language Models (LLMs) and symbolic analyses can be used to synthesise specifications of C programs. The LLM prompts are augmented with outputs from two formal methods tools in the Frama-C ecosystem, Pathcrawler …
External link:
http://arxiv.org/abs/2406.15540
Author:
de Pieuchon, Nicolas Audinet, Daoud, Adel, Jerzak, Connor Thomas, Johansson, Moa, Johansson, Richard
We investigate the potential of large language models (LLMs) to disentangle text variables--to remove the textual traces of an undesired forbidden variable in a task sometimes known as text distillation and closely related to the fairness in AI and c…
External link:
http://arxiv.org/abs/2403.16584
Transformer language models are neural networks used for a wide variety of tasks concerning natural language, including some that also require logical reasoning. However, a transformer model may easily learn spurious patterns in the data, short-circuiting …
External link:
http://arxiv.org/abs/2403.11314
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
Large Language Models (LLMs) make natural interfaces to factual knowledge, but their usefulness is limited by their tendency to deliver inconsistent answers to semantically equivalent questions. For example, a model might predict both "Anne Redpath p…
External link:
http://arxiv.org/abs/2311.01307
Author:
Johansson, Moa
The 2022 FIFA World Cup for men's football has received massive criticism since it was announced in December 2010 that Qatar would host it. Given Qatar's significant history of human rights violations, many were shocked that it was elected. Non-governmental …
External link:
http://urn.kb.se/resolve?urn=urn:nbn:se:ths:diva-1969
Author:
Adolfsson, Lisa, Johansson, Moa
External link:
http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-50811