Showing 1 - 10 of 13 results for search: '"Yang, June Yong"'
Author:
Kim, Yoonjeon, Ryu, Soohyun, Jung, Yeonsung, Lee, Hyunkoo, Kim, Joowon, Yang, June Yong, Hwang, Jaeryong, Yang, Eunho
The development of vision-language and generative models has significantly advanced text-guided image editing, which seeks preservation of core elements in the source image while implementing modifications based on the target text.
External link:
http://arxiv.org/abs/2410.11374
Author:
Jang, Doohyuk, Park, Sihwan, Yang, June Yong, Jung, Yeonsung, Yun, Jihun, Kundu, Souvik, Kim, Sung-Yub, Yang, Eunho
Auto-Regressive (AR) models have recently gained prominence in image generation, often matching or even surpassing the performance of diffusion models. However, one major limitation of AR models is their sequential nature, which processes tokens one…
External link:
http://arxiv.org/abs/2410.03355
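The sequential bottleneck mentioned in the abstract can be seen in a toy sketch (an illustration, not the paper's method): each decoding step must condition on the full prefix, so tokens cannot be produced in parallel.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a toy auto-regressive
# decoder that produces tokens strictly one at a time, since each step
# conditions on all previously generated tokens.
def toy_ar_decode(logits_fn, prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        logits = logits_fn(tokens)             # depends on the full prefix
        tokens.append(int(np.argmax(logits)))  # greedy next-token choice
    return tokens

# Dummy "model": next token = (sum of prefix) mod 10
demo = toy_ar_decode(lambda t: np.eye(10)[sum(t) % 10], [1, 2], 3)
```

Because step *t* reads every token produced before it, the loop cannot be vectorized across steps, which is the latency cost the abstract refers to.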
Author:
Lee, Jung Hyun, Kim, Jeonghoon, Yang, June Yong, Kwon, Se Jung, Yang, Eunho, Yoo, Kang Min, Lee, Dongsoo
With the commercialization of large language models (LLMs), weight-activation quantization has emerged to compress and accelerate LLMs, achieving high throughput while reducing inference costs. However, existing post-training quantization (PTQ) techniques…
External link:
http://arxiv.org/abs/2407.11534
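As a point of reference for the quantization theme above, here is a generic symmetric per-tensor int8 scheme (a common textbook baseline, assumed here for illustration; it is not the specific PTQ method of the paper):

```python
import numpy as np

# Symmetric per-tensor int8 quantization: map floats to [-127, 127]
# with a single scale derived from the tensor's absolute maximum.
def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # approximate reconstruction of w
```

The rounding error per element is bounded by half the scale, which is why outlier values (a large `abs(x).max()`) degrade accuracy for the rest of the tensor.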
In real-world scenarios, tabular data often suffer from distribution shifts that threaten the performance of machine learning models. Despite its prevalence and importance, handling distribution shifts in the tabular domain remains underexplored due…
External link:
http://arxiv.org/abs/2407.10784
Large Language Models (LLMs) have demonstrated impressive problem-solving capabilities in mathematics through step-by-step reasoning chains. However, they are susceptible to reasoning errors that impact the quality of subsequent reasoning chains and…
External link:
http://arxiv.org/abs/2407.12863
Recent advancements in text-attributed graphs (TAGs) have significantly improved the quality of node features by using the textual modeling capabilities of language models. Despite this success, utilizing text attributes to enhance the predefined graph…
External link:
http://arxiv.org/abs/2405.18581
Author:
Yang, June Yong, Kim, Byeongwook, Bae, Jeongin, Kwon, Beomseok, Park, Gunho, Yang, Eunho, Kwon, Se Jung, Lee, Dongsoo
Key-Value (KV) Caching has become an essential technique for accelerating the inference speed and throughput of generative Large Language Models (LLMs). However, the memory footprint of the KV cache poses a critical bottleneck in LLM deployment as the…
External link:
http://arxiv.org/abs/2402.18096
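The memory bottleneck the abstract describes can be made concrete with a back-of-envelope estimate (the model dimensions below are illustrative 7B-class numbers assumed for this sketch, not figures from the paper):

```python
# KV-cache memory estimate for a hypothetical fp16 transformer.
layers, heads, head_dim = 32, 32, 128
seq_len, batch = 4096, 8
bytes_per_elem = 2  # fp16

# Both keys and values are cached at every layer, hence the factor of 2.
kv_bytes = 2 * layers * heads * head_dim * seq_len * batch * bytes_per_elem
kv_gib = kv_bytes / 2**30  # 16.0 GiB for this configuration
```

The footprint grows linearly with both sequence length and batch size, so long contexts at high throughput quickly dominate GPU memory even though the weights themselves are fixed.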
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample and has gained considerable attention recently for improving the generalizability of neural networks. A straightforward and widely used extension of Mixup is to…
External link:
http://arxiv.org/abs/2112.08796
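The mixing operation named in the abstract has a standard formulation that can be sketched in a few lines (this is vanilla Mixup for illustration; the paper's extension is not reproduced here):

```python
import numpy as np

# Vanilla Mixup: convex-combine two samples and their one-hot labels
# with a coefficient drawn from a Beta(alpha, alpha) distribution.
def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix

xa, ya = np.ones(4), np.array([1.0, 0.0])   # sample A, one-hot label
xb, yb = np.zeros(4), np.array([0.0, 1.0])  # sample B, one-hot label
xm, ym = mixup(xa, ya, xb, yb)
```

A small `alpha` pushes the Beta distribution toward 0 and 1, so most mixed samples stay close to one of the originals; the soft label `ym` always remains a valid probability vector.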
Deep neural networks (DNNs), despite their impressive ability to generalize over-capacity networks, often rely heavily on malignant bias as shortcuts instead of task-related information for discriminative tasks. To address this problem, recent studies…
External link:
http://arxiv.org/abs/2112.01021
Neural networks embedded in safety-sensitive applications such as self-driving cars and wearable health monitors rely on two important techniques: input attribution for hindsight analysis and network compression to reduce its size for edge-computing.
External link:
http://arxiv.org/abs/2010.15054