Showing 1 - 10 of 5,258
for search: '"On, Sungchul"'
Author:
Zhang, Zhehao, Rossi, Ryan, Yu, Tong, Dernoncourt, Franck, Zhang, Ruiyi, Gu, Jiuxiang, Kim, Sungchul, Chen, Xiang, Wang, Zichao, Lipka, Nedim
While vision-language models (VLMs) have demonstrated remarkable performance across various tasks combining textual and visual information, they continue to struggle with fine-grained visual perception tasks that require detailed pixel-level analysis…
External link:
http://arxiv.org/abs/2410.16400
Large language models (LLMs) have been used to generate query expansions augmenting original queries for improving information search. Recent studies also explore providing LLMs with initial retrieval results to generate query expansions more grounded…
External link:
http://arxiv.org/abs/2410.13765
In this paper, we present an effective data augmentation framework leveraging the Large Language Model (LLM) and Diffusion Model (DM) to tackle the challenges inherent in data-scarce scenarios. Recently, DMs have opened up the possibility of generating…
External link:
http://arxiv.org/abs/2409.16949
Author:
Yao, Yuhang, Zhang, Jianyi, Wu, Junda, Huang, Chengkai, Xia, Yu, Yu, Tong, Zhang, Ruiyi, Kim, Sungchul, Rossi, Ryan, Li, Ang, Yao, Lina, McAuley, Julian, Chen, Yiran, Joe-Wong, Carlee
Large language models are rapidly gaining popularity and have been widely adopted in real-world applications. While the quality of training data is essential, privacy concerns arise during data collection. Federated learning offers a solution by allowing…
External link:
http://arxiv.org/abs/2409.15723
Author:
Owens, Deonna M., Rossi, Ryan A., Kim, Sungchul, Yu, Tong, Dernoncourt, Franck, Chen, Xiang, Zhang, Ruiyi, Gu, Jiuxiang, Deilamsalehy, Hanieh, Lipka, Nedim
Large Language Models (LLMs) are powerful tools with the potential to benefit society immensely, yet they have demonstrated biases that perpetuate societal inequalities. Despite significant advancements in bias mitigation techniques using data augmentation…
External link:
http://arxiv.org/abs/2409.13884
Author:
Wu, Junda, Zhang, Zhehao, Xia, Yu, Li, Xintong, Xia, Zhaoyang, Chang, Aaron, Yu, Tong, Kim, Sungchul, Rossi, Ryan A., Zhang, Ruiyi, Mitra, Subrata, Metaxas, Dimitris N., Yao, Lina, Shang, Jingbo, McAuley, Julian
Multimodal large language models (MLLMs) equip pre-trained large language models (LLMs) with visual capabilities. While textual prompting in LLMs has been widely studied, visual prompting has emerged for more fine-grained and free-form visual instructions…
External link:
http://arxiv.org/abs/2409.15310
We study the Fokker-Planck equation for an active particle with both radial and tangential forces as well as a perturbative force. We find the solution of the joint probability density. In the limit of the long-time domain and for the characteristic t…
External link:
http://arxiv.org/abs/2409.02475
Author:
In, Yeonjun, Kim, Sungchul, Rossi, Ryan A., Tanjim, Md Mehrab, Yu, Tong, Sinha, Ritwik, Park, Chanyoung
The retrieval augmented generation (RAG) framework addresses ambiguity in user queries in QA systems by retrieving passages that cover all plausible interpretations and generating comprehensive responses based on the passages. However, our preliminary…
External link:
http://arxiv.org/abs/2409.02361
Large Language Models (LLMs) have been achieving competent performance on a wide range of downstream tasks, yet existing work shows that inference on structured data is challenging for LLMs. This is because LLMs need to either understand long structured…
External link:
http://arxiv.org/abs/2407.02750
Author:
An, Seunghwan, Woo, Gyeongdong, Lim, Jaesung, Kim, ChangHyun, Hong, Sungchul, Jeon, Jong-June
In this paper, our goal is to generate synthetic data for heterogeneous (mixed-type) tabular datasets with high machine learning utility (MLu). Since the MLu performance depends on accurately approximating the conditional distributions, we focus on d…
External link:
http://arxiv.org/abs/2405.20602