Showing 1 - 10
of 3,278
for search: '"Jiang, Meng"'
Author:
Yang, Tianyu, Dai, Lisen, Liu, Zheyuan, Wang, Xiangqi, Jiang, Meng, Tian, Yapeng, Zhang, Xiangliang
Machine unlearning (MU) has gained significant attention as a means to remove specific data from trained models without requiring a full retraining process. While progress has been made in unimodal domains like text and image classification, unlearni
External link:
http://arxiv.org/abs/2410.23330
Author:
Liu, Zheyuan, Dou, Guangyao, Jia, Mengzhao, Tan, Zhaoxuan, Zeng, Qingkai, Yuan, Yongle, Jiang, Meng
Generative models such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) trained on massive web corpora can memorize and disclose individuals' confidential and private data, raising legal and ethical concerns. While many pre
External link:
http://arxiv.org/abs/2410.22108
Author:
Jin, Yilun, Li, Zheng, Zhang, Chenwei, Cao, Tianyu, Gao, Yifan, Jayarao, Pratik, Li, Mao, Liu, Xin, Sarkhel, Ritesh, Tang, Xianfeng, Wang, Haodong, Wang, Zhengyang, Xu, Wenju, Yang, Jingfeng, Yin, Qingyu, Li, Xian, Nigam, Priyanka, Xu, Yi, Chen, Kai, Yang, Qiang, Jiang, Meng, Yin, Bing
Online shopping is a complex multi-task, few-shot learning problem with a wide and evolving range of entities, relations, and tasks. However, existing models and benchmarks are commonly tailored to specific tasks, falling short of capturing the full
External link:
http://arxiv.org/abs/2410.20745
Author:
Szymanski, Annalisa, Ziems, Noah, Eicher-Miller, Heather A., Li, Toby Jia-Jun, Jiang, Meng, Metoyer, Ronald A.
The potential of using Large Language Models (LLMs) themselves to evaluate LLM outputs offers a promising method for assessing model performance across various contexts. Previous research indicates that LLM-as-a-judge exhibits a strong correlation wi
External link:
http://arxiv.org/abs/2410.20266
Multimodal Large Language Models (MLLMs) have demonstrated impressive abilities across various tasks, including visual question answering and chart comprehension, yet existing benchmarks for chart-related tasks fall short in capturing the complexity
External link:
http://arxiv.org/abs/2410.14179
Best-of-N decoding methods instruct large language models (LLMs) to generate multiple solutions, score each using a scoring function, and select the highest scored as the final answer to mathematical reasoning problems. However, this repeated indepen
External link:
http://arxiv.org/abs/2410.12934
Author:
Zhai, Wei, Bai, Nan, Zhao, Qing, Li, Jianqiang, Wang, Fan, Qi, Hongzhi, Jiang, Meng, Wang, Xiaoqin, Yang, Bing Xiang, Fu, Guanghui
As mental health challenges become more prevalent, social media has emerged as a key platform for individuals to express their emotions. Deep learning tends to be a promising solution for analyzing mental health on social media. However, black box models
External link:
http://arxiv.org/abs/2410.10323
Evaluating the ability of large language models (LLMs) to follow complex human-written instructions is essential for their deployment in real-world applications. While benchmarks like Chatbot Arena use human judges to assess model performance, they a
External link:
http://arxiv.org/abs/2410.06089
While large language models (LLMs) have integrated images, adapting them to graphs remains challenging, limiting their applications in materials and drug design. This difficulty stems from the need for coherent autoregressive generation across texts
External link:
http://arxiv.org/abs/2410.04223
Author:
Jia, Mengzhao, Yu, Wenhao, Ma, Kaixin, Fang, Tianqing, Zhang, Zhihan, Ouyang, Siru, Zhang, Hongming, Jiang, Meng, Yu, Dong
Text-rich images, where text serves as the central visual element guiding the overall understanding, are prevalent in real-world applications, such as presentation slides, scanned documents, and webpage snapshots. Tasks involving multiple text-rich i
External link:
http://arxiv.org/abs/2410.01744