Showing 1 - 3 of 3 results for search: '"Baeumel, Tanja"'
Multilingual Large Language Models (LLMs) have gained considerable popularity among Natural Language Processing (NLP) researchers and practitioners. These models, trained on huge datasets, show proficiency across various languages and demonstrate effectiveness …
External link:
http://arxiv.org/abs/2406.10602
Pre-trained Language Models (PLMs) have been shown to be consistently successful in a plethora of NLP tasks due to their ability to learn contextualized representations of words (Ethayarajh, 2019). BERT (Devlin et al., 2018), ELMo (Peters et al., 2018) and …
External link:
http://arxiv.org/abs/2312.06514
Pretrained language models (PLMs) form the basis of most state-of-the-art NLP technologies. Nevertheless, they are essentially black boxes: humans do not have a clear understanding of what knowledge is encoded in different parts of the models, especially …
External link:
http://arxiv.org/abs/2311.08240