Showing 1 - 10 of 9,083 results for search: '"A. Ohi"'
Vision-language models (VLMs) have shown impressive abilities in text and image understanding. However, existing metrics for evaluating the text generated by VLMs focus exclusively on overall quality, leading to two limitations: 1) it is challenging …
External link:
http://arxiv.org/abs/2412.14613
Author:
Saito, Koshiro, Mizuki, Sakae, Ohi, Masanari, Nakamura, Taishi, Shiotani, Taihei, Maeda, Koki, Ma, Youmi, Hattori, Kakeru, Fujii, Kazuki, Okamoto, Takumi, Ishida, Shigeki, Takamura, Hiroya, Yokota, Rio, Okazaki, Naoaki
Why do we build local large language models (LLMs)? What should a local LLM learn from the target language? Which abilities can be transferred from other languages? Do language-specific scaling laws exist? To explore these research questions, we evaluate …
External link:
http://arxiv.org/abs/2412.14471
Recently, text-to-speech (TTS) models based on large language models (LLMs) that translate natural language text into sequences of discrete audio tokens have gained great research attention, with advances in neural audio codec (NAC) models using residual …
External link:
http://arxiv.org/abs/2410.04380
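Behind this abstract's "discrete audio tokens" is residual vector quantization (RVQ), the mechanism most NAC models use. Below is a minimal numpy sketch of RVQ encoding and decoding; the random codebooks are illustrative stand-ins for the learned codebooks of a real codec.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Quantize x with a cascade of codebooks: each stage quantizes
    the residual left by the previous stage."""
    residual = x.copy()
    indices = []
    for cb in codebooks:                          # cb: (codebook_size, dim)
        dists = np.linalg.norm(residual[None, :] - cb, axis=1)
        idx = int(np.argmin(dists))               # nearest code at this stage
        indices.append(idx)
        residual = residual - cb[idx]             # pass the residual onward
    return indices

def rvq_decode(indices, codebooks):
    """Sum the selected code vectors from every stage to reconstruct x."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(0)
dim, n_stages, cb_size = 8, 4, 16
codebooks = [rng.normal(size=(cb_size, dim)) for _ in range(n_stages)]
x = rng.normal(size=dim)
tokens = rvq_encode(x, codebooks)                 # the "discrete audio tokens"
x_hat = rvq_decode(tokens, codebooks)
print(tokens, np.linalg.norm(x - x_hat))
```

Each extra stage refines the reconstruction, which is why NAC tokenizers can trade token count against fidelity.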
Self-supervised learning has emerged as a key approach for learning generic representations from speech data. Despite promising results in downstream tasks such as speech recognition, speaker verification, and emotion recognition, a significant number …
External link:
http://arxiv.org/abs/2407.21066
Author:
Fujii, Kazuki, Nakamura, Taishi, Loem, Mengsay, Iida, Hiroki, Ohi, Masanari, Hattori, Kakeru, Shota, Hirai, Mizuki, Sakae, Yokota, Rio, Okazaki, Naoaki
Cross-lingual continual pre-training of large language models (LLMs) initially trained on English corpora allows us to leverage the vast amount of English language resources and reduce the pre-training cost. In this study, we constructed Swallow, an LLM …
External link:
http://arxiv.org/abs/2404.17790
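For readers unfamiliar with the setup, here is a hedged sketch of cross-lingual continual pre-training with Hugging Face transformers: load an English-pretrained causal LM and keep optimizing the same next-token objective on target-language text. The checkpoint and file names are placeholders, not the Swallow configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "your-english-base-model"          # placeholder checkpoint name
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token         # GPT-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Plain-text target-language corpus, one document per line (placeholder file).
ds = load_dataset("text", data_files={"train": "japanese_corpus.txt"})
train = ds["train"].map(
    lambda batch: tok(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-checkpoints",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train,
    # mlm=False makes the collator produce causal-LM (next-token) labels
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()                           # same objective, new language
```

The cost saving comes from reusing the English weights as initialization rather than training the target-language model from scratch.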
Author:
Okazaki, Naoaki, Hattori, Kakeru, Shota, Hirai, Iida, Hiroki, Ohi, Masanari, Fujii, Kazuki, Nakamura, Taishi, Loem, Mengsay, Yokota, Rio, Mizuki, Sakae
Open Japanese large language models (LLMs) have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not curated with the quality of the Japanese text in mind. This study builds a large Japanese web corpus …
External link:
http://arxiv.org/abs/2404.17733
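The quality concern this abstract raises is typically addressed with rule-based filters. The sketch below shows the general shape of such a filter for Japanese web text; the rules and thresholds are illustrative guesses, not the paper's actual pipeline.

```python
def keep_document(text: str) -> bool:
    """Heuristic quality gate for a candidate Japanese web document."""
    if len(text) < 400:                              # too short to be useful
        return False
    # Japanese prose is hiragana-heavy; a low ratio suggests non-prose content.
    hiragana = sum('\u3040' <= c <= '\u309f' for c in text)
    if hiragana / len(text) < 0.1:
        return False
    # Few sentence-final periods suggests menus, tag lists, or boilerplate.
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines or sum(ln.endswith('。') for ln in lines) / len(lines) < 0.2:
        return False
    return True

docs = ["短すぎる。", "これは十分に長い日本語の文章です。" * 40]
print([keep_document(d) for d in docs])              # [False, True]
```

Real pipelines chain many such rules with deduplication, but each rule is this simple in isolation.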
The increasing use of autonomous robot systems in hazardous environments underscores the need for efficient search and rescue operations. Despite significant advancements, existing literature on object search often falls short in overcoming the difficulties …
External link:
http://arxiv.org/abs/2404.04186
Published in:
ACL 2024 (Findings)
Large Language Models (LLMs) are widely used to evaluate natural language generation tasks as automated metrics. However, the likelihood, a measure of an LLM's plausibility for a sentence, can vary due to superficial differences in sentences, such as word order …
External link:
http://arxiv.org/abs/2402.15987
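To make the "likelihood" concrete: a sentence is scored by the total log-probability a causal LM assigns to its tokens. Below is a minimal sketch with GPT-2 as a stand-in for the LLMs studied in the paper; the paraphrase pair illustrates how superficial differences can shift the score.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    # labels=ids makes the model return the mean cross-entropy over the
    # (n - 1) next-token predictions; scale back up to a total log-prob.
    loss = model(input_ids=ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

print(log_likelihood("The cat sat on the mat."))
print(log_likelihood("On the mat, the cat sat."))  # same meaning, different score
```

The gap between two scores for semantically equivalent sentences is exactly the kind of superficial sensitivity the abstract describes.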
Author:
Marcel, Miracle Chibuzor, Sani, Idris Abubakar, Gerald, Jorbedom Leelabari, Pius, Privatus, Ekwu, Ohi Mary, Bvumbwe, Bauleni, Sudum, Esaenwi, Olayiwola, Joy Ugonma
We report new measurements of the position angle and separation of the double star WDS 03245+5938 STI 450, based on our observations, Gaia EDR3, and historical data. We find that the position angle and separation are 209.7° and 7.68", respectively …
External link:
http://arxiv.org/abs/2312.13566
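For context, the position angle and separation of a wide pair follow from the two components' coordinates by a small-angle calculation like the sketch below. The coordinates are placeholders chosen only so the output lands near the quoted 209.7° and 7.68"; they are not the paper's Gaia EDR3 values.

```python
import math

def pa_and_sep(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Position angle (deg, east of north) and separation (arcsec)
    of star 2 relative to star 1, small-angle approximation."""
    d_ra = (ra2_deg - ra1_deg) * math.cos(math.radians(dec1_deg))  # shrink RA by cos(dec)
    d_dec = dec2_deg - dec1_deg
    sep_arcsec = math.hypot(d_ra, d_dec) * 3600.0
    pa_deg = math.degrees(math.atan2(d_ra, d_dec)) % 360.0         # from north through east
    return pa_deg, sep_arcsec

# placeholder coordinates for a pair roughly 8" apart
print(pa_and_sep(51.125, 59.6330, 51.123, 59.6311))
```

The cos(dec) factor corrects for right-ascension circles shrinking toward the pole; omitting it is a classic source of error in double-star reductions.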
Author:
Marcel, Miracle Chibuzor, Gerald, Jorbedom Leelabari, Bvumbwe, Bauleni, Sani, Idris Abubakar, Pius, Privatus, Ekwu, Ohi Mary, Sudum, Esaenwi, Olayiwola, Joy Ugonma
WDS 03286+2523 BRT 133 is a double-star system that has been under observation since 1896. In this study, we present new measurements of the position angle and separation of the system, utilizing data obtained from a web telescope with a Charge-Coupled Device …
External link:
http://arxiv.org/abs/2312.12707