Showing 1 - 10 of 756 for search: '"Lee, HyunJi"'
Author:
Lee, Seongyun, Kim, Geewook, Kim, Jiyeon, Lee, Hyunji, Chang, Hoyeon, Park, Sue Hyun, Seo, Minjoon
Vision-Language adaptation (VL adaptation) transforms Large Language Models (LLMs) into Large Vision-Language Models (LVLMs) for multimodal tasks, but this process often compromises the inherent safety capabilities embedded in the original LLMs. …
External link:
http://arxiv.org/abs/2410.07571
Author:
Kim, Jiyeon, Lee, Hyunji, Cho, Hyowon, Jang, Joel, Hwang, Hyeonbin, Won, Seungpil, Ahn, Youbin, Lee, Dohaeng, Seo, Minjoon
In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. …
External link:
http://arxiv.org/abs/2410.01380
Information retrieval methods often rely on a single embedding model trained on large, general-domain datasets like MSMARCO. While this approach can produce a retriever with reasonable overall performance, models trained on domain-specific data often…
External link:
http://arxiv.org/abs/2409.02685
Author:
Lee, Hyunji, Kim, Doyoung, Jun, Jihoon, Joo, Sejune, Jang, Joel, On, Kyoung-Woon, Seo, Minjoon
In this work, we introduce a semiparametric token-sequence co-supervision training method. It trains a language model by simultaneously leveraging supervision from the traditional next token prediction loss, which is calculated over the parametric token… (a generic sketch of such a combined loss follows the link below)
External link:
http://arxiv.org/abs/2403.09024
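As a point of reference for the snippet above, here is a minimal, generic sketch of combining a standard next-token prediction loss with a second co-supervision term. The truncated snippet does not describe the nonparametric side of the method, so the weight `alpha` and the placeholder `co_supervision_loss` are assumptions for illustration only, not the paper's actual training objective.

```python
# A minimal, generic sketch of combining a standard next-token prediction (NTP)
# loss with a second co-supervision term. This is NOT the method of
# arXiv:2403.09024: the truncated snippet does not describe the nonparametric
# supervision, so `co_supervision_loss` and the weight `alpha` are placeholders.
import torch
import torch.nn.functional as F


def next_token_prediction_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size); input_ids: (batch, seq_len).
    # Shift by one position so position t is trained to predict token t+1,
    # the standard causal language-modeling objective.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )


def combined_loss(
    logits: torch.Tensor,
    input_ids: torch.Tensor,
    co_supervision_loss: torch.Tensor,
    alpha: float = 1.0,
) -> torch.Tensor:
    # `alpha` simply weights the hypothetical second supervision signal.
    return next_token_prediction_loss(logits, input_ids) + alpha * co_supervision_loss
```

The shift-by-one arrangement is the standard causal language-modeling setup; only the additive combination with a second term is specific to the co-supervision idea named in the snippet.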
Author:
Oh, Hanseok, Lee, Hyunji, Ye, Seonghyeon, Shin, Haebin, Jang, Hansol, Jun, Changwook, Seo, Minjoon
Despite the critical need to align search targets with users' intentions, retrievers often prioritize only query information without delving into the users' intended search context. Enhancing the capability of retrievers to understand intentions and…
External link:
http://arxiv.org/abs/2402.14334
Prevailing research practice today often relies on training dense retrievers on existing large datasets such as MSMARCO and then experimenting with ways to improve zero-shot generalization capabilities to unseen domains. While prior work has tackled…
External link:
http://arxiv.org/abs/2311.09765
Author:
Lee, Hyunji, Joo, Sejune, Kim, Chaeeun, Jang, Joel, Kim, Doyoung, On, Kyoung-Woon, Seo, Minjoon
To reduce issues like hallucinations and lack of control in Large Language Models (LLMs), a common method is to generate responses by grounding them on external contexts given as input, known as knowledge-augmented models. However, previous research often… (a generic illustration of this setup follows the link below)
External link:
http://arxiv.org/abs/2311.09069
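To illustrate the general knowledge-augmented setup mentioned in the snippet above, here is a hedged sketch of grounding generation on external context supplied in the input. The prompt template, the placeholder "gpt2" model, and the example context are assumptions for illustration, not details from the linked paper.

```python
# A hedged illustration of the general knowledge-augmented setup: the model
# generates its answer conditioned on an external context placed in the input.
# The prompt template, the placeholder "gpt2" model, and the example context
# are assumptions for illustration, not details from arXiv:2311.09069.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works here

context = "The Eiffel Tower was completed in 1889."
question = "When was the Eiffel Tower completed?"

# Ground the response on the external context by including it in the prompt.
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```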
We introduce KTRL+F, a new knowledge-augmented in-document search task that requires real-time identification of all semantic targets within a document, with awareness of external sources, through a single natural query. KTRL+F…
External link:
http://arxiv.org/abs/2311.08329
Dense video captioning, a task of localizing meaningful moments and generating relevant captions for videos, often requires a large, expensive corpus of annotated video segments paired with text. In an effort to minimize the annotation cost, we…
External link:
http://arxiv.org/abs/2307.02682
Published in:
CVPR 2023
3D content manipulation is an important computer vision task with many real-world applications (e.g., product design, cartoon generation, and 3D avatar editing). Recently proposed 3D GANs can generate diverse, photorealistic 3D-aware content using…
External link:
http://arxiv.org/abs/2306.12570