Showing 1 - 10 of 110 for the search: '"Misra, Kanishka"'
Property inheritance -- a phenomenon where novel properties are projected from higher-level categories (e.g., birds) to lower-level ones (e.g., sparrows) -- provides a unique window into how humans organize and deploy conceptual knowledge. It is debated…
External link:
http://arxiv.org/abs/2410.22590
Author:
Misra, Kanishka, Kim, Najoung
Neural network language models (LMs) have been shown to successfully capture complex linguistic knowledge. However, their utility for understanding language acquisition is still debated. We contribute to this debate by presenting a case study where we…
External link:
http://arxiv.org/abs/2408.05086
Author:
Misra, Kanishka, Mahowald, Kyle
Language models learn rare syntactic phenomena, but the extent to which this is attributable to generalization vs. memorization is a major open question. To that end, we iteratively trained transformer language models on systematically manipulated corpora…
External link:
http://arxiv.org/abs/2403.19827
Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements in the presence of experimental contexts…
External link:
http://arxiv.org/abs/2401.06640
Author:
Misra, Kanishka, Kim, Najoung
Exemplar-based accounts are often considered to be in direct opposition to pure linguistic abstraction in explaining language learners' ability to generalize to novel expressions. However, the recent success of neural network language models on linguistic…
External link:
http://arxiv.org/abs/2312.03708
Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve…
External link:
http://arxiv.org/abs/2306.04009
Author:
Shi, Freda, Chen, Xinyun, Misra, Kanishka, Scales, Nathan, Dohan, David, Chi, Ed, Schärli, Nathanael, Zhou, Denny
Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In…
External link:
http://arxiv.org/abs/2302.00093
Author:
Sinha, Koustuv, Gauthier, Jon, Mueller, Aaron, Misra, Kanishka, Fuentes, Keren, Levy, Roger, Williams, Adina
Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements…
External link:
http://arxiv.org/abs/2212.08979
A characteristic feature of human semantic cognition is its ability to not only store and retrieve the properties of concepts observed through experience, but to also facilitate the inheritance of properties (can breathe) from superordinate concepts…
External link:
http://arxiv.org/abs/2210.01963
To what extent can experience from language contribute to our conceptual knowledge? Computational explorations of this question have shed light on the ability of powerful neural language models (LMs) -- informed solely through text input -- to encode…
External link:
http://arxiv.org/abs/2205.06910