Showing 1 - 10 of 70 for the search: '"Kim, Joo Kyung"'
Knowledge graph-grounded dialog generation requires retrieving a dialog-relevant subgraph from the given knowledge base graph and integrating it with the dialog history. Previous works typically represent the graph using an external encoder, such as …
External link:
http://arxiv.org/abs/2410.09350
Author:
Hayati, Shirley Anugrah, Jung, Taehee, Bodding-Long, Tristan, Kar, Sudipta, Sethy, Abhinav, Kim, Joo-Kyung, Kang, Dongyeop
Fine-tuning large language models (LLMs) with a collection of large and diverse instructions has improved the model's generalization to different tasks, even for unseen tasks. However, most existing instruction datasets include only single instructions …
External link:
http://arxiv.org/abs/2402.11532
Visual Question Answering (VQA) often involves diverse reasoning scenarios across Vision and Language (V&L). Most prior VQA studies, however, have merely focused on assessing the model's overall accuracy without evaluating it on different reasoning …
External link:
http://arxiv.org/abs/2402.11058
Chain-of-Thought (CoT) prompting along with sub-question generation and answering has enhanced multi-step reasoning capabilities of Large Language Models (LLMs). However, prompting the LLMs to directly generate sub-questions is suboptimal since they …
External link:
http://arxiv.org/abs/2311.09762
Published in:
EACL 2023
For extreme multi-label classification (XMC), existing classification-based models perform poorly on tail labels and often ignore the semantic relations among labels, such as treating "Wikipedia" and "Wiki" as independent and separate labels. In this paper, …
External link:
http://arxiv.org/abs/2302.09150
A large-scale conversational agent can suffer from understanding user utterances with various ambiguities such as ASR ambiguity, intent ambiguity, and hypothesis ambiguity. When ambiguities are detected, the agent should engage in a clarifying dialog …
External link:
http://arxiv.org/abs/2109.12451
Author:
Kim, Joo-Kyung, Kim, Young-Bum
In large-scale domain classification, an utterance can be handled by multiple domains with overlapped capabilities. However, only a limited number of ground-truth domains are provided for each training utterance in practice, while knowing as many as …
External link:
http://arxiv.org/abs/2003.03728
Academic article
This result cannot be displayed for unauthenticated users; sign in to view it.
Author:
Kim, Joo-Kyung, Kim, Young-Bum
In large-scale domain classification for natural language understanding, leveraging each user's domain enablement information, which refers to the domains preferred or authenticated by the user, with an attention mechanism has been shown to improve the …
External link:
http://arxiv.org/abs/1812.07546
Academic article
This result cannot be displayed for unauthenticated users; sign in to view it.