Showing 1 - 10 of 34 results for the search '"Lee, Saehyung"'
Multi-hop reasoning, which requires multi-step reasoning based on the supporting documents within a given context, remains challenging for large language models (LLMs). LLMs often struggle to filter out irrelevant documents within the context … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2410.07103
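For the entry above (arXiv:2410.07103), the following is a minimal sketch of the general problem its abstract describes: pruning irrelevant documents from the context before multi-hop prompting. It is not the paper's method; the embedding model name, the top-k cutoff, and the example documents are arbitrary illustrative choices, assuming the sentence-transformers library is available.

```python
# A generic baseline for pruning irrelevant context documents before multi-hop
# prompting. This is NOT the method of arXiv:2410.07103; it only illustrates the
# problem the abstract describes. Model name and top-k are arbitrary choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def filter_context(question: str, documents: list[str], keep_top_k: int = 4) -> list[str]:
    """Keep only the documents most similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    d_emb = model.encode(documents, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, d_emb)[0]                 # similarity of question to each doc
    top = scores.topk(min(keep_top_k, len(documents))).indices.tolist()
    return [documents[i] for i in sorted(top)]             # preserve original document order

docs = [
    "Marie Curie was born in Warsaw.",
    "The Eiffel Tower is in Paris.",
    "She later moved to Paris to study physics.",
]
print(filter_context("Where did Marie Curie study physics?", docs, keep_top_k=2))
```

Note that a single similarity pass against the question can drop documents that are only needed for a later reasoning hop, which is one reason filtering for multi-hop questions is harder than it looks.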
In our study, we explore methods for detecting unwanted content lurking in visual datasets. We provide a theoretical analysis demonstrating that a model capable of successfully partitioning visual data can be obtained using only textual data. … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2409.19840
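As a hedged illustration of "partitioning visual data using only textual data" from the entry above (arXiv:2409.19840), the sketch below runs zero-shot CLIP classification with hand-written text prompts. The prompts, checkpoint, and threshold are assumptions for illustration; this is not the detector proposed in the paper.

```python
# Zero-shot flagging of images with text prompts via CLIP: a sketch of the general
# idea of partitioning visual data from textual data alone. It is not the specific
# detector of arXiv:2409.19840; prompts and threshold are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of ordinary, harmless content",
           "a photo of explicit or violent content"]

def flag_image(path: str, threshold: float = 0.5) -> bool:
    image = Image.open(path).convert("RGB")
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return probs[1].item() > threshold   # True if the "unwanted" prompt wins
```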
In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2406.03411
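The abstract above (arXiv:2406.03411) mentions using the instruction-following ability of LLMs for dialogue-form queries. Below is a hypothetical sketch of one part of that general idea: rewriting a retrieval dialogue into a single standalone query before retrieval. The `complete` function is a placeholder for whatever LLM client is used, and none of this is the actual PlugIR implementation.

```python
# Illustrative sketch of reformulating a dialogue-form query into a standalone
# caption before image retrieval. `complete` is a hypothetical stand-in for any
# instruction-following LLM client; this is not the PlugIR code of arXiv:2406.03411.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def dialogue_to_query(dialogue: list[tuple[str, str]]) -> str:
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in dialogue)
    prompt = (
        "Rewrite the following retrieval dialogue as one self-contained image "
        "caption that captures everything the user is looking for.\n\n"
        f"{turns}\n\nCaption:"
    )
    return complete(prompt).strip()

dialogue = [
    ("user", "I want a picture of a dog on a beach."),
    ("system", "What breed of dog?"),
    ("user", "A golden retriever, and it should be sunset."),
]
# query = dialogue_to_query(dialogue)  # e.g. "a golden retriever on a beach at sunset"
```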
Author:
Lee, Jonghyun, Jung, Dahuin, Lee, Saehyung, Park, Junsung, Shin, Juhyeon, Hwang, Uiwon, Yoon, Sungroh
Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data. The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2403.07366
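To make the TTA setting in the entry above (arXiv:2403.07366) concrete, here is a minimal Tent-style online adaptation loop: minimize prediction entropy on each incoming test batch and update only the BatchNorm affine parameters. This is a well-known baseline, not the method proposed in the paper.

```python
# Minimal Tent-style test-time adaptation: entropy minimization with online updates
# restricted to BatchNorm affine parameters. A standard TTA baseline shown only to
# make the setting concrete; not the specific method of arXiv:2403.07366.
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                      # use batch statistics at test time
            params += [m.weight, m.bias]
    return params

def entropy(logits: torch.Tensor) -> torch.Tensor:
    probs = logits.softmax(dim=1)
    return -(probs * probs.log().clamp(min=-100)).sum(dim=1).mean()

def adapt_online(model: nn.Module, test_loader, lr: float = 1e-3):
    optimizer = torch.optim.SGD(collect_bn_params(model), lr=lr, momentum=0.9)
    for x, _ in test_loader:              # labels are never used
        logits = model(x)
        loss = entropy(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        yield logits.argmax(dim=1)        # predictions made before the next update
```

Because every online update builds on the previous one, a run of unreliable batches compounds over time, which is the error accumulation the abstract refers to.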
Author:
Shin, Juhyeon, Lee, Jonghyun, Lee, Saehyung, Park, Minjun, Lee, Dongjun, Hwang, Uiwon, Yoon, Sungroh
In the context of Test-time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance from the entropy minimization loss caused by misclassified pseudo labels. …
External link:
http://arxiv.org/abs/2402.09004
The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem. Existing methodologies aimed to enhance robust fairness by sacrificing the model's performance … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2401.12532
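To quantify the class-wise disparity described in the entry above (arXiv:2401.12532), one can measure per-class accuracy under a standard PGD attack. The sketch below assumes a PyTorch classifier with inputs in [0, 1] and uses the common CIFAR-style budget eps = 8/255; it only measures the gap and is not the mitigation proposed in the paper.

```python
# Per-class robust accuracy under a standard L-infinity PGD attack, to make the
# "robust fairness" disparity concrete. Evaluation only; not the method of
# arXiv:2401.12532. Assumes inputs in [0, 1] and a CIFAR-style attack budget.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD with a random start."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def per_class_robust_accuracy(model, loader, num_classes):
    correct, total = [0] * num_classes, [0] * num_classes
    model.eval()
    for x, y in loader:
        preds = model(pgd_attack(model, x, y)).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += int(mask.sum())
            correct[c] += int((preds[mask] == c).sum())
    return [c / max(t, 1) for c, t in zip(correct, total)]  # the worst class exposes the disparity
```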
Large-scale language-vision pre-training models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models. However, existing CLIP-guided image morphing methods encounter difficulties …
External link:
http://arxiv.org/abs/2401.10526
Successful detection of Out-of-Distribution (OoD) data is becoming increasingly important to ensure safe deployment of neural networks. One of the main challenges in OoD detection is that neural networks output overconfident predictions on OoD data … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2310.16492
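For the entry above (arXiv:2310.16492), the maximum-softmax-probability (MSP) baseline below shows why overconfidence is the core difficulty: the score collapses when the network is confidently wrong on OoD inputs. This is a textbook baseline, not the paper's method; the threshold is an illustrative assumption.

```python
# The MSP baseline for OoD scoring: flag inputs whose top softmax probability falls
# below a threshold. Overconfident predictions on OoD data are exactly what breaks
# this simple score. Textbook baseline, not the method of arXiv:2310.16492.
import torch

@torch.no_grad()
def msp_scores(model, x: torch.Tensor) -> torch.Tensor:
    """Higher score = more likely in-distribution."""
    logits = model(x)
    return logits.softmax(dim=1).max(dim=1).values

def is_ood(model, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    # The threshold is illustrative; in practice it is tuned on held-out data.
    return msp_scores(model, x) < threshold
```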
Author:
Lee, Saehyung, Lee, Hyungyu
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness. However, there is no guarantee that it will always be possible to obtain sufficient extra data for a selected dataset. …
External link:
http://arxiv.org/abs/2209.14053
Recent studies have demonstrated that gradient matching-based dataset synthesis, or dataset condensation (DC), methods can achieve state-of-the-art performance when applied to data-efficient learning tasks. However, in this study, we prove that … [a brief illustrative sketch follows this entry]
External link:
http://arxiv.org/abs/2202.02916
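The entry above (arXiv:2202.02916) concerns gradient matching-based dataset condensation. The sketch below shows a single gradient-matching update in simplified form, assuming a PyTorch classifier and learnable synthetic images; it is a compressed illustration of the general DC objective, not the authors' code or the paper's result.

```python
# One gradient-matching step for dataset condensation, in sketch form: update the
# learnable synthetic images so that the network's loss gradient on them matches
# the gradient on a real batch. A simplification of the usual DC recipe, shown only
# to illustrate the objective; not the code or findings of arXiv:2202.02916.
import torch
import torch.nn.functional as F

def grad_match_loss(net, real_x, real_y, syn_x, syn_y):
    params = [p for p in net.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(F.cross_entropy(net(real_x), real_y), params)
    g_syn = torch.autograd.grad(F.cross_entropy(net(syn_x), syn_y), params,
                                create_graph=True)
    # Sum of (1 - cosine similarity) between corresponding gradient tensors.
    return sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten().detach(), dim=0)
               for gs, gr in zip(g_syn, g_real))

def condense_step(net, real_batch, syn_x, syn_y, syn_opt):
    real_x, real_y = real_batch
    syn_opt.zero_grad()
    loss = grad_match_loss(net, real_x, real_y, syn_x, syn_y)
    loss.backward()                      # gradients flow into the synthetic images
    syn_opt.step()
    return loss.item()

# syn_x would be an nn.Parameter of shape (images_per_class * num_classes, C, H, W),
# optimized by syn_opt = torch.optim.SGD([syn_x], lr=0.1); syn_y are fixed labels.
```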