Showing 1 - 10 of 105 results for search: '"Kim, Taeuk"'
Author:
Kim, Youna, Kim, Hyuhng Joon, Park, Cheonbok, Park, Choonghyun, Cho, Hyunsoo, Kim, Junyeob, Yoo, Kang Min, Lee, Sang-goo, Kim, Taeuk
When using large language models (LLMs) in knowledge-intensive tasks, such as open-domain question answering, external context can bridge the gap between external knowledge and the LLMs' parametric knowledge. Recent research has been developed to …
External link:
http://arxiv.org/abs/2408.01084
Fine-tuning pre-trained language models (PLMs) has recently shown a potential to improve knowledge graph completion (KGC). However, most PLM-based methods focus solely on encoding textual information, neglecting the long-tailed nature of knowledge graphs …
External link:
http://arxiv.org/abs/2407.12703
Modular programming, which aims to construct the final program by integrating smaller, independent building blocks, has been regarded as a desirable practice in software development. However, with the rise of recent code generation agents built upon …
External link:
http://arxiv.org/abs/2407.11406
Author:
Park, Choonghyun, Kim, Hyuhng Joon, Kim, Junyeob, Kim, Youna, Kim, Taeuk, Cho, Hyunsoo, Jo, Hwiyeol, Lee, Sang-goo, Yoo, Kang Min
AI Generated Text (AIGT) detectors are developed with texts from humans and LLMs of common tasks. Despite the diversity of plausible prompt choices, these datasets are generally constructed with a limited number of prompts. The lack of prompt variation …
External link:
http://arxiv.org/abs/2406.16275
Author:
Kim, Hyuhng Joon, Kim, Youna, Park, Cheonbok, Kim, Junyeob, Park, Choonghyun, Yoo, Kang Min, Lee, Sang-goo, Kim, Taeuk
In interactions between users and language model agents, user utterances frequently exhibit ellipsis (omission of words or phrases) or imprecision (lack of exactness) to prioritize efficiency. This can lead to varying interpretations of the same input …
External link:
http://arxiv.org/abs/2404.11972
Task-oriented dialogue (TOD) systems are commonly designed with the presumption that each utterance represents a single intent. However, this assumption may not accurately reflect real-world situations, where users frequently express multiple intents …
External link:
http://arxiv.org/abs/2403.18277
While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained …
External link:
http://arxiv.org/abs/2403.09490
The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this …
External link:
http://arxiv.org/abs/2402.13562
Cross-lingual transfer (XLT) is an emergent ability of multilingual language models that preserves their performance on a task to a significant extent when evaluated in languages that were not included in the fine-tuning process. While English, due to …
External link:
http://arxiv.org/abs/2310.17166
Author:
Kim, Hyuhng Joon, Cho, Hyunsoo, Lee, Sang-Woo, Kim, Junyeob, Park, Choonghyun, Lee, Sang-goo, Yoo, Kang Min, Kim, Taeuk
When deploying machine learning systems in the wild, it is highly desirable for them to effectively leverage prior knowledge in unfamiliar domains while also firing alarms on anomalous inputs. In order to address these requirements, Universal Domain …
External link:
http://arxiv.org/abs/2310.14849