Showing 1 - 10 of 172 for search: '"Huth, Alexander"'
Author:
Benara, Vinamra, Singh, Chandan, Morris, John X., Antonello, Richard, Stoica, Ion, Huth, Alexander G., Gao, Jianfeng
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks. However, their opaqueness and proliferation into scientific domains such as neuroscience have created a growing need for interpretability…
External link:
http://arxiv.org/abs/2405.16714
Brain-computer interfaces have promising medical and scientific applications for aiding speech and studying the brain. In this work, we propose an information-based evaluation metric for brain-to-text decoders. Using this metric, we examine two methods…
External link:
http://arxiv.org/abs/2405.14055
Language models that are trained on the next-word prediction task have been shown to accurately model human behavior in word prediction and reading speed. In contrast with these findings, we present a scenario in which the performance of humans and L…
External link:
http://arxiv.org/abs/2310.06408
Encoding models have been used to assess how the human brain represents concepts in language and vision. While language and vision rely on similar concept representations, current encoding models are typically trained and tested on brain responses to…
External link:
http://arxiv.org/abs/2305.12248
Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models…
External link:
http://arxiv.org/abs/2305.11863
Author:
Singh, Chandan, Hsu, Aliyah R., Antonello, Richard, Jain, Shailee, Huth, Alexander G., Yu, Bin, Gao, Jianfeng
Large language models (LLMs) have demonstrated remarkable prediction performance for a growing array of tasks. However, their rapid proliferation and increasing opaqueness have created a growing need for interpretability. Here, we ask whether we can…
External link:
http://arxiv.org/abs/2305.09863
Self-supervised language models are very effective at predicting high-level cortical responses during language comprehension. However, the best current models of lower-level auditory processing in the human brain rely on either hand-constructed acoustic…
External link:
http://arxiv.org/abs/2205.14252
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure a…
External link:
http://arxiv.org/abs/2106.05426
All hand-object interaction is controlled by forces that the two bodies exert on each other, but little work has been done in modeling these underlying forces when doing pose and contact estimation from RGB/RGB-D data. Given the pose of the hand and…
External link:
http://arxiv.org/abs/2105.08196
Published in:
International Conference on Learning Representations 2021
Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law…
External link:
http://arxiv.org/abs/2009.12727