Showing 1 - 10 of 973 for search: '"A Toneva"'
With the recent advances in AI programming assistants such as GitHub Copilot, programming is no longer limited to classical programming languages; programming tasks can also be expressed and solved by end-users in natural text. Despite the availabi…
External link: http://arxiv.org/abs/2412.12471
We introduce SLayR, Scene Layout Generation with Rectified flow. State-of-the-art text-to-image models achieve impressive results; however, they generate images end-to-end, exposing no fine-grained control over the process. SLayR presents a novel tra…
External link: http://arxiv.org/abs/2412.05003
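The SLayR entry above names rectified flow as its generative backbone. As a rough orientation for what that training objective looks like, here is a minimal, hedged sketch of a generic rectified-flow training step in PyTorch; the network, dimensions, and loss setup are illustrative assumptions, not SLayR's actual layout model.

```python
# Minimal sketch of a generic rectified-flow objective (assumed setup;
# SLayR's actual layout representation and architecture are not shown here).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity field v_theta(x_t, t); a placeholder for the real model."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim)
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on time by concatenating t to the state.
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(model: VelocityNet, x1: torch.Tensor) -> torch.Tensor:
    """Regress the velocity field onto the straight noise-to-data path."""
    x0 = torch.randn_like(x1)            # noise endpoint
    t = torch.rand(x1.shape[0], 1)       # uniform time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # linear interpolation between endpoints
    target = x1 - x0                     # constant velocity of the straight line
    return ((model(x_t, t) - target) ** 2).mean()
```

Sampling then amounts to numerically integrating dx/dt = v_theta(x, t) from t = 0 (noise) to t = 1 (data), e.g. with a few Euler steps.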
Speech language models align with human brain responses to natural language to an impressive degree. However, current models rely heavily on low-level speech features, indicating that they lack brain-relevant semantics, which limits their utility as model…
External link: http://arxiv.org/abs/2410.09230
Authors: Pink, Mathis, Vo, Vy A., Wu, Qinyuan, Mu, Jianing, Turek, Javier S., Hasson, Uri, Norman, Kenneth A., Michelmann, Sebastian, Huth, Alexander, Toneva, Mariya
Current LLM benchmarks focus on evaluating models' memory of facts and semantic relations, primarily assessing semantic aspects of long-term memory. However, in humans, long-term memory also includes episodic memory, which links memories to their con…
External link: http://arxiv.org/abs/2410.08133
Authors: Dong, Dota Tianai, Toneva, Mariya
Integrating information from multiple modalities is arguably one of the essential prerequisites for grounding artificial intelligence systems with an understanding of the real world. Recent advances in video transformers that jointly learn from visio…
External link: http://arxiv.org/abs/2311.07766
Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of in…
External link: http://arxiv.org/abs/2311.04664
Authors: Rawal, Ruchit, Toneva, Mariya
The rapid growth in natural language processing (NLP) research has led to numerous new models, outpacing our understanding of how they compare to established ones. One major reason for this difficulty is saturating benchmarks, which may not well refl…
External link: http://arxiv.org/abs/2311.04166
Authors: Sucholutsky, Ilia, Muttenthaler, Lukas, Weller, Adrian, Peng, Andi, Bobu, Andreea, Kim, Been, Love, Bradley C., Cueva, Christopher J., Grant, Erin, Groen, Iris, Achterberg, Jascha, Tenenbaum, Joshua B., Collins, Katherine M., Hermann, Katherine L., Oktar, Kerem, Greff, Klaus, Hebart, Martin N., Cloos, Nathan, Kriegeskorte, Nikolaus, Jacoby, Nori, Zhang, Qiuyi, Marjieh, Raja, Geirhos, Robert, Chen, Sherol, Kornblith, Simon, Rane, Sunayana, Konkle, Talia, O'Connell, Thomas P., Unterthiner, Thomas, Lampinen, Andrew K., Müller, Klaus-Robert, Toneva, Mariya, Griffiths, Thomas L.
Biological and artificial information processing systems form representations of the world that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the similarity between the representations formed by these dive…
External link: http://arxiv.org/abs/2310.13018
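The entry above asks how to measure similarity between representations formed by diverse systems. One standard family of answers is representational similarity analysis (RSA); the sketch below is a textbook-style illustration of that technique, not the specific framework proposed in this paper.

```python
# Textbook-style RSA sketch (illustrative assumption, not the paper's method):
# compare two systems by correlating their representational dissimilarity
# matrices computed over the same set of stimuli.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """Spearman correlation between the two systems' RDMs.

    Both inputs are (n_stimuli, n_features) arrays whose rows correspond
    to the same stimuli; feature dimensions may differ between systems.
    """
    rdm_a = pdist(reps_a, metric="correlation")  # condensed RDM of system A
    rdm_b = pdist(reps_b, metric="correlation")  # condensed RDM of system B
    return spearmanr(rdm_a, rdm_b).correlation
```

Because RSA only compares stimulus-by-stimulus dissimilarity structure, it can relate, say, fMRI responses to network activations even though the two live in very different feature spaces.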
The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task and has become commonplace across many areas of machine learning. While pretraining is empirically observed to be beneficial for a r…
External link: http://arxiv.org/abs/2307.06006
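For readers unfamiliar with the pretrain-finetune paradigm this entry discusses, the snippet below shows the usual recipe in PyTorch: load pretrained weights, swap the task head, and continue training on the downstream task. The ResNet backbone and 10-class task are placeholder assumptions, not the paper's actual setup.

```python
# Hedged illustration of the pretrain-finetune recipe (assumed setup,
# not the models or tasks studied in the paper).
import torch
import torch.nn as nn
from torchvision import models

# Start from weights learned during large-scale pretraining (ImageNet here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Swap the classification head for the downstream task's label space.
num_downstream_classes = 10  # hypothetical downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, num_downstream_classes)

# Finetune all parameters (alternatively, freeze the backbone and
# train only the new head).
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One gradient step on a downstream batch; returns the loss value."""
    optimizer.zero_grad()
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```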
With the increasing reliance on deep neural networks, it is important to develop ways to better understand their learned representations. Representation similarity measures have emerged as a popular tool for examining learned representations. However,…
External link: http://arxiv.org/abs/2305.19294
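As a concrete example of the representation similarity measures this entry refers to, here is linear centered kernel alignment (CKA), one widely used measure. Whether this paper studies CKA specifically is not stated in the snippet above, so treat this purely as an illustration.

```python
# Linear CKA, a common representation similarity measure
# (illustrative example; not necessarily the measure analyzed in the paper).
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two (n_inputs, n_features) representations,
    where rows of x and y correspond to the same inputs."""
    x = x - x.mean(axis=0)  # center each feature
    y = y - y.mean(axis=0)
    # Squared Frobenius norm of the cross-covariance, normalized by
    # the self-covariance norms; result lies in [0, 1].
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return hsic / (norm_x * norm_y)
```

A value near 1 indicates the two layers (or models) encode inputs with very similar structure; near 0, largely unrelated structure.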