Showing 1 - 10 of 23 for search: '"Kim, Junyeob"'
Author:
Kim, Youna, Kim, Hyuhng Joon, Park, Cheonbok, Park, Choonghyun, Cho, Hyunsoo, Kim, Junyeob, Yoo, Kang Min, Lee, Sang-goo, Kim, Taeuk
When using large language models (LLMs) in knowledge-intensive tasks, such as open-domain question answering, external context can bridge the gap between external knowledge and the LLMs' parametric knowledge. Recent research has been developed to amplify …
External link:
http://arxiv.org/abs/2408.01084
Author:
Park, Choonghyun, Kim, Hyuhng Joon, Kim, Junyeob, Kim, Youna, Kim, Taeuk, Cho, Hyunsoo, Jo, Hwiyeol, Lee, Sang-goo, Yoo, Kang Min
AI Generated Text (AIGT) detectors are developed with texts from humans and LLMs of common tasks. Despite the diversity of plausible prompt choices, these datasets are generally constructed with a limited number of prompts. The lack of prompt variation …
External link:
http://arxiv.org/abs/2406.16275
Author:
Kim, Hyuhng Joon, Kim, Youna, Park, Cheonbok, Kim, Junyeob, Park, Choonghyun, Yoo, Kang Min, Lee, Sang-goo, Kim, Taeuk
In interactions between users and language model agents, user utterances frequently exhibit ellipsis (omission of words or phrases) or imprecision (lack of exactness) to prioritize efficiency. This can lead to varying interpretations of the same input …
External link:
http://arxiv.org/abs/2404.11972
Author:
Kim, Hyuhng Joon, Cho, Hyunsoo, Lee, Sang-Woo, Kim, Junyeob, Park, Choonghyun, Lee, Sang-goo, Yoo, Kang Min, Kim, Taeuk
When deploying machine learning systems to the wild, it is highly desirable for them to effectively leverage prior knowledge to the unfamiliar domain while also firing alarms to anomalous inputs. In order to address these requirements, Universal Domain Adaptation …
External link:
http://arxiv.org/abs/2310.14849
Author:
Cho, Hyunsoo, Kim, Hyuhng Joon, Kim, Junyeob, Lee, Sang-Woo, Lee, Sang-goo, Yoo, Kang Min, Kim, Taeuk
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, the ICL performance does not scale well with the number of available training samples as it is limited by the …
External link:
http://arxiv.org/abs/2212.10873
Large-scale pre-trained language models (PLMs) are well-known for being capable of solving a task simply by conditioning a few input-label pairs dubbed demonstrations on a prompt without being explicitly tuned for the desired downstream task. Such a …
External link:
http://arxiv.org/abs/2206.08082
Author:
Yoo, Kang Min, Kim, Junyeob, Kim, Hyuhng Joon, Cho, Hyunsoo, Jo, Hwiyeol, Lee, Sang-Woo, Lee, Sang-goo, Kim, Taeuk
Despite the recent explosion of interest in in-context learning, the underlying mechanism and the precise impact of the quality of demonstrations remain elusive. Intuitively, ground-truth labels should have as much impact in in-context learning (ICL) as …
External link:
http://arxiv.org/abs/2205.12685
Author:
Kim, Junyeob, Lee, Eun-Jung, Lee, Kyung-Eun, Nho, Youn-Hwa, Ryu, Jeoungjin, Kim, Su Young, Yoo, Jeong Kyun, Kang, Seunghyun, Seo, Sang Woo
Published in:
Computational and Structural Biotechnology Journal, 2023, 21:2009-2017
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.