Showing 1 - 10 of 1,650 for the search: '"Wang, YuJing"'
Author:
Yang, Yaming, Muhtar, Dilxat, Shen, Yelong, Zhan, Yuefeng, Liu, Jianfeng, Wang, Yujing, Sun, Hao, Deng, Denvy, Sun, Feng, Zhang, Qi, Chen, Weizhu, Tong, Yunhai
Parameter-efficient fine-tuning (PEFT) has been widely employed for domain adaptation, with LoRA being one of the most prominent methods due to its simplicity and effectiveness. However, in multi-task learning (MTL) scenarios, LoRA tends to obscure…
External link:
http://arxiv.org/abs/2410.09437
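The LoRA idea this abstract builds on can be sketched as a frozen weight plus a trainable low-rank delta; the shapes and rank below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (illustrative shape) and a small LoRA rank.
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# LoRA trains only two small factors: A (rank x d_in) and B (d_out x rank);
# B starts at zero so the adapted model initially equals the base model.
A = rng.standard_normal((rank, d_in))
B = np.zeros((d_out, rank))

def adapted_forward(x, scale=1.0):
    # W stays frozen; only the low-rank delta B @ A would receive gradients.
    return (W + scale * (B @ A)) @ x

x = rng.standard_normal(d_in)
# With B = 0 the adapter is a no-op on the base model's output.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameters shrink from d_out*d_in to rank*(d_out + d_in).
print(d_out * d_in, rank * (d_out + d_in))
```

The parameter count of the delta, rank*(d_out + d_in), is what makes the method "parameter-efficient" relative to full fine-tuning.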
In a real-world RAG system, the current query often involves spoken ellipses and ambiguous references from dialogue contexts, necessitating query rewriting to better describe the user's information needs. However, traditional context-based rewriting has…
External link:
http://arxiv.org/abs/2408.17072
Author:
Yang, Tianmeng, Meng, Jiahao, Zhou, Min, Yang, Yaming, Wang, Yujing, Li, Xiangtai, Tong, Yunhai
Recent research on the robustness of Graph Neural Networks (GNNs) under noise or attacks has attracted great attention due to its importance in real-world applications. Most previous methods explore a single noise source, recovering corrupt node embeddings…
External link:
http://arxiv.org/abs/2408.00700
Federated learning is highly susceptible to model poisoning attacks, especially those meticulously crafted for servers. Traditional defense methods mainly focus on update assessments or robust aggregation against manually crafted myopic attacks…
External link:
http://arxiv.org/abs/2406.14217
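The robust-aggregation baseline this abstract contrasts with can be sketched as a coordinate-wise median over client updates, which a single extreme update cannot drag far; the client values here are made up for illustration.

```python
import numpy as np

# Benign client updates cluster near the true gradient; one poisoned
# client submits an extreme update (all values are illustrative).
updates = np.array([
    [0.9, 1.1],
    [1.0, 0.9],
    [1.1, 1.0],
    [50.0, -50.0],  # poisoned update
])

mean_agg = updates.mean(axis=0)          # skewed badly by the attacker
median_agg = np.median(updates, axis=0)  # coordinate-wise median resists it

print(mean_agg, median_agg)
```

With one attacker among four clients, the mean is pulled to roughly 13 in the first coordinate while the median stays near the benign cluster around 1.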
Author:
Chen, Qi, Geng, Xiubo, Rosset, Corby, Buractaon, Carolyn, Lu, Jingwen, Shen, Tao, Zhou, Kun, Xiong, Chenyan, Gong, Yeyun, Bennett, Paul, Craswell, Nick, Xie, Xing, Yang, Fan, Tower, Bryan, Rao, Nikhil, Dong, Anlei, Jiang, Wenqi, Liu, Zheng, Li, Mingqin, Liu, Chuanjie, Li, Zengzhong, Majumder, Rangan, Neville, Jennifer, Oakley, Andy, Risvik, Knut Magne, Simhadri, Harsha Vardhan, Varma, Manik, Wang, Yujing, Yang, Linjun, Yang, Mao, Zhang, Ce
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels and modalities. In this paper, we introduce MS MARCO Web Search, the first large-scale information-rich web dataset, featuring millions of real clicked…
External link:
http://arxiv.org/abs/2405.07526
Cross-modal sponsored search displays multi-modal advertisements (ads) when consumers look for desired products via natural language queries in search engines. Since multi-modal ads bring complementary details for query-ad matching, the ability to…
External link:
http://arxiv.org/abs/2309.16141
Author:
Zhang, Hailin, Wang, Yujing, Chen, Qi, Chang, Ruiheng, Zhang, Ting, Miao, Ziming, Hou, Yingyan, Ding, Yang, Miao, Xupeng, Wang, Haonan, Pang, Bochen, Zhan, Yuefeng, Sun, Hao, Deng, Weiwei, Zhang, Qi, Yang, Fan, Xie, Xing, Yang, Mao, Cui, Bin
Embedding-based retrieval methods construct vector indices to search for the document representations most similar to the query representation. They are widely used in document retrieval due to low latency and decent recall performance. Recent…
External link:
http://arxiv.org/abs/2309.13335
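In its simplest exact form, the embedding-based retrieval described above reduces to nearest-neighbour search over document vectors; production systems use approximate vector indices for latency, but a brute-force cosine search over made-up vectors shows the core operation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy document embeddings (one row per document) and a query embedding
# constructed to lie very close to document 42.
docs = rng.standard_normal((1000, 32))
query = docs[42] + 0.01 * rng.standard_normal(32)

# Normalise rows so a dot product equals cosine similarity.
docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
q_n = query / np.linalg.norm(query)

scores = docs_n @ q_n
top_k = np.argsort(-scores)[:5]  # indices of the 5 most similar documents
print(top_k)
```

An approximate index (e.g. a graph- or quantization-based one) replaces the full `docs_n @ q_n` scan with a sublinear search while aiming to return nearly the same top-k set.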
Graph Active Learning (GAL), which aims to find the most informative nodes in graphs to annotate so as to maximize Graph Neural Network (GNN) performance, has attracted many research efforts but still poses non-trivial challenges. One major challenge…
External link:
http://arxiv.org/abs/2308.08823
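A common GAL baseline, of the kind hinted at above, scores unlabeled nodes by the entropy of the GNN's predicted class distribution and spends the annotation budget on the most uncertain ones; the probabilities below are invented for illustration.

```python
import numpy as np

# Predicted class distributions for 4 unlabeled nodes (illustrative).
probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])

# Shannon entropy per node; higher means the model is less certain.
entropy = -(probs * np.log(probs)).sum(axis=1)

budget = 2
picked = np.argsort(-entropy)[:budget]  # most informative nodes to label
print(picked)
```

Here the near-uniform node and the 0.50/0.45 node are selected, while the confidently classified node is skipped, which is the intended behaviour of uncertainty sampling.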
Author:
Li, Junyan, Zhang, Li Lyna, Xu, Jiahang, Wang, Yujing, Yan, Shaoguang, Xia, Yunqing, Yang, Yuqing, Cao, Ting, Sun, Hao, Deng, Weiwei, Zhang, Qi, Yang, Mao
Deploying pre-trained transformer models like BERT on downstream tasks in resource-constrained scenarios is challenging due to their high inference cost, which grows rapidly with input sequence length. In this work, we propose a constraint-aware and…
External link:
http://arxiv.org/abs/2306.14393
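The cost growth with sequence length mentioned above is dominated by self-attention's quadratic term; a back-of-envelope sketch (hidden size 768 is an assumed BERT-base value, not taken from the paper):

```python
# Self-attention cost grows quadratically with sequence length n:
# the Q @ K^T score matrix alone takes roughly n * n * d multiplications.
def attn_score_flops(n, d=768):
    return n * n * d

# Doubling n quadruples this term, which is why length- and
# constraint-aware pruning pays off at long inputs.
print(attn_score_flops(128), attn_score_flops(512))
```

Going from 128 to 512 tokens multiplies this term by 16, even though the input is only 4x longer.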
Author:
Li, Rui, Chen, Xu, Li, Chaozhuo, Shen, Yanming, Zhao, Jianan, Wang, Yujing, Han, Weihao, Sun, Hao, Deng, Weiwei, Zhang, Qi, Xie, Xing
Embedding models have shown great power in the knowledge graph completion (KGC) task. By learning structural constraints for each training triple, these methods implicitly memorize intrinsic relation rules to infer missing links. However, this paper points out…
External link:
http://arxiv.org/abs/2305.14126
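Embedding models of the kind this abstract refers to score a triple (h, r, t) with a learned function; TransE, one widely used instance (chosen here as an illustrative assumption, not necessarily the paper's model), treats a relation as a translation and scores by ||h + r - t||, lower being better.

```python
import numpy as np

# Toy entity/relation embeddings; the values are hand-picked for
# illustration, not learned from data.
emb = {
    "Paris": np.array([1.0, 0.0]),
    "Berlin": np.array([0.0, 0.0]),
    "France": np.array([1.0, 1.0]),
    "capital_of": np.array([0.0, 1.0]),
}

def transe_score(h, r, t):
    # A true triple should satisfy h + r ≈ t, giving a score near zero.
    return float(np.linalg.norm(emb[h] + emb[r] - emb[t]))

true_s = transe_score("Paris", "capital_of", "France")    # plausible triple
false_s = transe_score("Berlin", "capital_of", "France")  # implausible one
print(true_s, false_s)
```

Training pushes scores of observed triples below those of corrupted ones, which is how the structural constraints per triple mentioned above get memorized into the embeddings.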