Showing 1 - 4 of 4 for search: '"Gurgurov, Daniil"'
Contextualized embeddings based on large language models (LLMs) are available for various languages, but their coverage is often limited for lower resourced languages. Training LLMs for such languages is often difficult due to insufficient data and …
External link:
http://arxiv.org/abs/2409.18193
Author:
Gurgurov, Daniil, Morshnev, Aleksey
In this project, we train a vision encoder-decoder model to generate LaTeX code from images of mathematical formulas and text. Utilizing a diverse collection of image-to-LaTeX data, we build two models: a base model with a Swin Transformer encoder and …
External link:
http://arxiv.org/abs/2408.04015
Published in:
2024.kallm-1.7
This paper explores the integration of graph knowledge from linguistic ontologies into multilingual Large Language Models (LLMs) using adapters to improve performance for low-resource languages (LRLs) in sentiment analysis (SA) and named entity recognition …
External link:
http://arxiv.org/abs/2407.01406
Multilingual Large Language Models (LLMs) have gained large popularity among Natural Language Processing (NLP) researchers and practitioners. These models, trained on huge datasets, show proficiency across various languages and demonstrate effectiveness …
External link:
http://arxiv.org/abs/2406.10602