Showing 1 - 10 of 1,556 for search: '"Vulic, A."'
Current language models (LMs) use a fixed, static subword tokenizer. This default choice typically results in degraded efficiency and language capabilities, especially in languages other than English. To address this issue, we challenge the static de…
External link:
http://arxiv.org/abs/2411.18553
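The snippet above concerns the efficiency cost of a fixed, English-centric subword tokenizer on other languages. A minimal sketch of how that cost shows up, assuming the Hugging Face transformers library and the gpt2 checkpoint (neither is named in the abstract; the sentences are illustrative only):

```python
# Minimal sketch: count subword tokens produced by a fixed English-centric
# tokenizer on parallel sentences. Model choice (gpt2) is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

sentences = {
    "English": "The weather is nice today and we are going for a walk.",
    "Czech": "Dnes je hezké počasí a jdeme na procházku.",
}

for language, text in sentences.items():
    tokens = tokenizer.tokenize(text)
    # Non-English text typically fragments into many more (often byte-level) pieces,
    # which is the efficiency gap the abstract refers to.
    print(f"{language}: {len(tokens)} tokens -> {tokens}")
```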
Recent research in Large Language Models (LLMs) has shown promising progress related to LLM alignment with human preferences. LLM-empowered decision-making systems are expected to be predictable, reliable and trustworthy, which implies being free fro…
External link:
http://arxiv.org/abs/2410.02205
Vision Language Models (VLMs) extend the remarkable capabilities of text-only large language models and vision-only models, and are able to learn from and process multi-modal vision-text input. While modern VLMs perform well on a number of standard image…
External link:
http://arxiv.org/abs/2409.18023
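As a concrete illustration of the multi-modal vision-text input the snippet mentions, here is a minimal sketch using a publicly available BLIP captioning checkpoint from transformers as a stand-in; the paper itself does not name this model, and the image path is hypothetical:

```python
# Minimal sketch of feeding joint vision-text input to a VLM.
# The BLIP checkpoint is a placeholder, not the model studied in the paper.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("street_scene.jpg")  # hypothetical local image file
inputs = processor(images=image, text="a photo of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```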
Author:
Moon, Hannah, Wik, Daniel R., Antoniou, V., Eracleous, M., Hornschemeier, Ann E., Lazzarini, Margaret, Lehmer, Bret D., Vulic, Neven, Williams, Benjamin F., Maccarone, T. J., Pottschmidt, K., Ptak, Andrew, Yukita, Mihoko, Zezas, Andreas
Published in:
ApJ 970, 167, 2024
Using hard (E>10 keV) X-ray observations with NuSTAR, we are able to differentiate between accretion states, and thus compact object types, of neutron stars and black holes in X-ray binaries (XRBs) in M31, our nearest Milky Way-type neighbor. Using t…
External link:
http://arxiv.org/abs/2408.02828
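The record above classifies compact objects in M31 X-ray binaries by their hard X-ray spectra. A standard diagnostic in this kind of work is a hardness ratio built from counts in a soft and a hard band; the band split and the example counts below are hypothetical, not values taken from the paper:

```python
# Illustrative hardness-ratio diagnostic, HR = (H - S) / (H + S), where S and H
# are background-subtracted counts in a soft and a hard band. The example counts
# and the soft/hard interpretation threshold are assumptions for illustration.
def hardness_ratio(soft_counts: float, hard_counts: float) -> float:
    total = soft_counts + hard_counts
    if total <= 0:
        raise ValueError("need positive total counts")
    return (hard_counts - soft_counts) / total

# Two hypothetical XRB detections: one soft/thermal-like, one hard/power-law-like.
sources = {"src_A": (420.0, 95.0), "src_B": (130.0, 260.0)}
for name, (soft, hard) in sources.items():
    hr = hardness_ratio(soft, hard)
    state = "soft/thermal-like" if hr < 0 else "hard/power-law-like"
    print(f"{name}: HR = {hr:+.2f} ({state})")
```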
Segmenting text into sentences plays an early and crucial role in many NLP systems. This is commonly achieved by using rule-based or statistical methods relying on lexical features such as punctuation. Although some recent works no longer exclusively…
External link:
http://arxiv.org/abs/2406.16678
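The snippet describes the classical punctuation-based approach to sentence segmentation. A minimal rule-based splitter of that kind, written from scratch here rather than taken from the linked paper, shows both the idea and its fragility around abbreviations:

```python
# Minimal rule-based sentence splitter relying on punctuation cues.
# A generic illustration of the classical approach, not the paper's method.
import re

# Split after ., ! or ? when followed by whitespace and an uppercase letter.
_SPLIT_RE = re.compile(r"(?<=[.!?])\s+(?=[A-Z])")

def split_sentences(text: str) -> list[str]:
    return [s.strip() for s in _SPLIT_RE.split(text) if s.strip()]

text = "Dr. Smith arrived at 9 a.m. He gave a talk. Was it recorded? Yes!"
for sentence in split_sentences(text):
    print(sentence)
# Failure mode: "Dr." and "a.m." trigger spurious splits whenever the next word
# is capitalized, which is exactly why purely lexical rules are brittle.
```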
LLMs have become a go-to solution not just for text generation, but also for natural language understanding (NLU) tasks. Acquiring extensive knowledge through language modeling on web-scale corpora, they excel on English NLU, yet struggle to extend t…
External link:
http://arxiv.org/abs/2406.12739
Large language models (LLMs) have shown promising abilities as cost-effective and reference-free evaluators for assessing language generation quality. In particular, pairwise LLM evaluators, which compare two generated texts and determine the preferr…
External link:
http://arxiv.org/abs/2406.11370
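A pairwise LLM evaluator, as described in the snippet, compares two generated texts and picks the preferred one. A minimal sketch of how such a judge is typically prompted, with the model call stubbed out because the abstract does not specify any API; the prompt template and the call_llm function are hypothetical:

```python
# Minimal sketch of a pairwise LLM-as-judge comparison. The prompt wording and
# the stubbed `call_llm` function are assumptions; the paper's setup may differ.
PAIRWISE_PROMPT = """You are evaluating two responses to the same instruction.

Instruction:
{instruction}

Response A:
{response_a}

Response B:
{response_b}

Which response is better? Answer with exactly 'A' or 'B'."""

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a dummy verdict here.
    return "A"

def pairwise_judge(instruction: str, response_a: str, response_b: str) -> str:
    verdict = call_llm(PAIRWISE_PROMPT.format(
        instruction=instruction, response_a=response_a, response_b=response_b
    )).strip()
    return verdict if verdict in {"A", "B"} else "invalid"

print(pairwise_judge("Summarise the plot of Hamlet.", "Summary one ...", "Summary two ..."))
```

In practice such judges are usually run twice with the response order swapped, since pairwise evaluators are known to exhibit position bias.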
Top-view perspective denotes a typical way in which humans read and reason over different types of maps, and it is vital for localization and navigation of humans as well as of 'non-human' agents, such as the ones backed by large Vision-Language Mode…
External link:
http://arxiv.org/abs/2406.02537
Author:
Pang, Jiayun, Vulić, Ivan
Transformer-based encoder-decoder models have demonstrated impressive results in chemical reaction prediction tasks. However, these models typically rely on pretraining using tens of millions of unlabelled molecules, which can be time-consuming and G…
External link:
http://arxiv.org/abs/2405.10625
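The record describes Transformer encoder-decoder models applied to reaction prediction over SMILES strings. A minimal sketch of that sequence-to-sequence interface, using a generic t5-small checkpoint purely as a placeholder (it has never seen chemistry data, so the generated string is meaningless; a real system would be pretrained or fine-tuned on reaction corpora):

```python
# Sketch of the seq2seq interface for reaction prediction: reactant SMILES in,
# product SMILES out. The t5-small checkpoint is only a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

reactants = "CC(=O)O.OCC"  # acetic acid + ethanol, an illustrative esterification input
inputs = tokenizer(reactants, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```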
Language models (LMs) are bound to their tokenizer, which maps raw text to a sequence of vocabulary items (tokens). This restricts their flexibility: for example, LMs trained primarily on English may still perform well in other natural and programmin…
External link:
http://arxiv.org/abs/2405.07883
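The last snippet is about the tight coupling between an LM and its tokenizer's vocabulary. A small sketch of that coupling, again using gpt2 as an arbitrary example (not a model named in the abstract): the tokenizer fixes the mapping from raw text to vocabulary indices, and the model's embedding table is sized to exactly that vocabulary.

```python
# Sketch of the LM-tokenizer coupling: token IDs index a fixed embedding table,
# so swapping the tokenizer without further adaptation breaks the mapping.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "def add(a, b): return a + b"
token_ids = tokenizer(text)["input_ids"]
print("token ids:", token_ids)
print("tokens   :", tokenizer.convert_ids_to_tokens(token_ids))

# One embedding row per vocabulary item of *this* tokenizer.
print("vocab size    :", tokenizer.vocab_size)
print("embedding rows:", model.get_input_embeddings().weight.shape[0])
```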