Showing 1 - 10 of 299 for search: '"Zhang Ruiyi"'
Large-scale general domain pretraining followed by downstream-specific finetuning has become a predominant paradigm in machine learning. However, discrepancies between the pretraining and target domains can still lead to performance degradation in…
External link:
http://arxiv.org/abs/2410.10006
Parameter-efficient fine-tuning (PEFT) of large language models (LLMs) has gained considerable attention as a flexible and efficient way of adapting LLMs to downstream tasks. Among these methods, weighted decomposed low-rank adaptation (DoRA) has…
External link:
http://arxiv.org/abs/2410.09758
Author:
Chen, Jian, Zhang, Ruiyi, Zhou, Yufan, Healey, Jennifer, Gu, Jiuxiang, Xu, Zhiqiang, Chen, Changyou
Automatic generation of graphical layouts is crucial for many real-world applications, including designing posters, flyers, advertisements, and graphical user interfaces. Given the incredible ability of large language models (LLMs) in both natural…
External link:
http://arxiv.org/abs/2410.12844
Author:
Yao, Yuhang, Zhang, Jianyi, Wu, Junda, Huang, Chengkai, Xia, Yu, Yu, Tong, Zhang, Ruiyi, Kim, Sungchul, Rossi, Ryan, Li, Ang, Yao, Lina, McAuley, Julian, Chen, Yiran, Joe-Wong, Carlee
Large language models are rapidly gaining popularity and have been widely adopted in real-world applications. While the quality of training data is essential, privacy concerns arise during data collection. Federated learning offers a solution by…
External link:
http://arxiv.org/abs/2409.15723
Author:
Owens, Deonna M., Rossi, Ryan A., Kim, Sungchul, Yu, Tong, Dernoncourt, Franck, Chen, Xiang, Zhang, Ruiyi, Gu, Jiuxiang, Deilamsalehy, Hanieh, Lipka, Nedim
Large Language Models (LLMs) are powerful tools with the potential to benefit society immensely; yet they have demonstrated biases that perpetuate societal inequalities. Despite significant advancements in bias mitigation techniques using data…
External link:
http://arxiv.org/abs/2409.13884
Author:
Wu, Junda, Zhang, Zhehao, Xia, Yu, Li, Xintong, Xia, Zhaoyang, Chang, Aaron, Yu, Tong, Kim, Sungchul, Rossi, Ryan A., Zhang, Ruiyi, Mitra, Subrata, Metaxas, Dimitris N., Yao, Lina, Shang, Jingbo, McAuley, Julian
Multimodal large language models (MLLMs) equip pre-trained large language models (LLMs) with visual capabilities. While textual prompting in LLMs has been widely studied, visual prompting has emerged for more fine-grained and free-form visual…
External link:
http://arxiv.org/abs/2409.15310
Author:
An, Bang, Zhu, Sicheng, Zhang, Ruiyi, Panaitescu-Liess, Michael-Andrei, Xu, Yuancheng, Huang, Furong
Safety-aligned large language models (LLMs) sometimes falsely refuse pseudo-harmful prompts, like "how to kill a mosquito," which are actually harmless. Frequent false refusals not only frustrate users but also provoke a public backlash against the…
External link:
http://arxiv.org/abs/2409.00598
Large multimodal models (LMMs) have demonstrated impressive capabilities in understanding various types of images, including text-rich images. Most existing text-rich image benchmarks are simple extraction-based question answering, and many LMMs now…
External link:
http://arxiv.org/abs/2408.14594
Published in:
Frontiers in Psychology, Vol 13 (2022)
Previous studies have focused on the relationship between imaginary companions (ICs) and children’s social development. As far as we know, few studies have focused on the relationship between ICs and children’s agency attributions. This study…
External link:
https://doaj.org/article/1bd2e6b47fd548d5a84df35b806311e3
Large multimodal language models have demonstrated impressive capabilities in understanding and manipulating images. However, many of these models struggle with comprehending intensive textual contents embedded within the images, primarily due to the…
External link:
http://arxiv.org/abs/2407.19185