Showing 1 - 10 of 182 for search: '"Varma, Sashank"'
Recent studies show evidence for emergent cognitive abilities in Large Pre-trained Language Models (PLMs). The increasing cognitive alignment of these models has made them candidates for cognitive science theories. Prior research into the emergent co…
External link:
http://arxiv.org/abs/2407.01047
How well do representations learned by ML models align with those of humans? Here, we consider concept representations learned by deep learning models and evaluate whether they show a fundamental behavioral signature of human concepts, the typicality…
External link:
http://arxiv.org/abs/2405.16128
Author:
Li, Andrew, Feng, Xianle, Narang, Siddhant, Peng, Austin, Cai, Tianle, Shah, Raj Sanjay, Varma, Sashank
When reading temporarily ambiguous garden-path sentences, misinterpretations sometimes linger past the point of disambiguation. This phenomenon has traditionally been studied in psycholinguistic experiments using online measures such as reading times…
External link:
http://arxiv.org/abs/2405.16042
Category fluency is a widely studied cognitive phenomenon, yet two conflicting accounts have been proposed as the underlying retrieval mechanism -- an optimal foraging process deliberately searching through memory (Hills et al., 2012) and a random wa…
External link:
http://arxiv.org/abs/2405.06714
Cobweb, a human-like category learning system, differs from most cognitive science models in incrementally constructing hierarchically organized tree-like structures guided by the category utility measure. Prior studies have shown that Cobweb can cap…
External link:
http://arxiv.org/abs/2403.03835
Neural networks often suffer from catastrophic interference (CI): performance on previously learned tasks drops off significantly when learning a new task. This contrasts strongly with humans, who can continually learn new tasks without appreciably f…
External link:
http://arxiv.org/abs/2401.10393
Author:
Gupta, Vima, Varma, Sashank
As children enter elementary school, their understanding of the ordinal structure of numbers transitions from a memorized count list of the first 50-100 numbers to knowing the successor function and understanding the countably infinite. We investigat…
External link:
http://arxiv.org/abs/2311.15194
Pre-trained Large Language Models (LLMs) have shown success in a diverse set of language inference and understanding tasks. The pre-training stage of LLMs looks at a large corpus of raw textual data. The BabyLM shared task compares LLM pre-training t…
External link:
http://arxiv.org/abs/2311.04666
Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how well popular…
External link:
http://arxiv.org/abs/2305.10782
Published in:
In Cognitive Psychology, September 2024, 153