Showing 1 - 9 of 9 for search: '"Deng, Denvy"'
Author:
Yang, Yaming, Muhtar, Dilxat, Shen, Yelong, Zhan, Yuefeng, Liu, Jianfeng, Wang, Yujing, Sun, Hao, Deng, Denvy, Sun, Feng, Zhang, Qi, Chen, Weizhu, Tong, Yunhai
Parameter-efficient fine-tuning (PEFT) has been widely employed for domain adaptation, with LoRA being one of the most prominent methods due to its simplicity and effectiveness. However, in multi-task learning (MTL) scenarios, LoRA tends to obscure the…
External link:
http://arxiv.org/abs/2410.09437
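The LoRA method mentioned in this abstract can be illustrated with a minimal numpy sketch. This is a generic illustration of low-rank adaptation, not the paper's own multi-task variant; all dimensions and names here are made up for the example.

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative): instead of updating a full
# weight matrix W (d_out x d_in), train a low-rank update B @ A with
# rank r << min(d_in, d_out), keeping W frozen.

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def forward(x):
    # base path plus low-rank adaptation path
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model starts identical to the base.
assert np.allclose(forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) vs d_in*d_out for full fine-tuning.
print(r * (d_in + d_out), d_in * d_out)  # 384 vs 2048
```

The zero initialization of `B` is the standard trick that makes the adapted model start out exactly equal to the frozen base model.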
Author:
Yin, Jun, Zeng, Zhengxin, Li, Mingzheng, Yan, Hao, Li, Chaozhuo, Han, Weihao, Zhang, Jianjin, Liu, Ruochen, Sun, Allen, Deng, Denvy, Sun, Feng, Zhang, Qi, Pan, Shirui, Wang, Senzhang
Owing to their unprecedented capability in semantic understanding and logical reasoning, pre-trained large language models (LLMs) have shown fantastic potential in developing the next-generation recommender systems (RSs). However, the static index…
External link:
http://arxiv.org/abs/2409.09253
Author:
Liu, Yi, Tian, Yuan, Lian, Jianxun, Wang, Xinlong, Cao, Yanan, Fang, Fang, Zhang, Wen, Huang, Haizhen, Deng, Denvy, Zhang, Qi
Dense retrieval is widely used for entity linking to retrieve entities from large-scale knowledge bases. Mainstream techniques are based on a dual-encoder framework, which encodes mentions and entities independently and calculates their relevances via…
External link:
http://arxiv.org/abs/2305.17371
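The dual-encoder framework this abstract refers to can be sketched in a few lines. The encoder below is a random stand-in for a real text encoder such as a BERT tower; the point is only that mentions and entities are embedded independently and scored by a dot product, so entity embeddings can be precomputed and indexed.

```python
import numpy as np

dim, n_entities = 16, 5

def encode(texts, seed):
    # stand-in for a real text encoder (e.g. a BERT tower); deterministic per seed
    g = np.random.default_rng(seed)
    v = g.normal(size=(len(texts), dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)  # L2-normalize

# Entity embeddings are computed once, offline, independent of any mention.
entity_emb = encode([f"entity {i}" for i in range(n_entities)], seed=1)
mention_emb = encode(["some mention"], seed=2)

scores = mention_emb @ entity_emb.T   # (1, n_entities) relevance scores
best = int(scores.argmax())           # top-1 linked entity
print(best, scores.shape)
```

Because relevance is a plain inner product, the top-1 lookup above can be replaced by an ANN index over `entity_emb` at scale.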
Author:
Cheng, Daixuan, Huang, Shaohan, Bi, Junyu, Zhan, Yuefeng, Liu, Jianfeng, Wang, Yujing, Sun, Hao, Wei, Furu, Deng, Denvy, Zhang, Qi
Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for Improving Zero-Shot Evaluation)…
External link:
http://arxiv.org/abs/2303.08518
Author:
Tian, Zhoujin, Li, Chaozhuo, Ren, Shuo, Zuo, Zhiqiang, Wen, Zengxuan, Hu, Xinyue, Han, Xiao, Huang, Haizhen, Deng, Denvy, Zhang, Qi, Xie, Xing
Bilingual lexicon induction induces word translations by aligning independently trained word embeddings in two languages. Existing approaches generally focus on minimizing the distances between words in the aligned pairs, while suffering from low…
External link:
http://arxiv.org/abs/2210.09926
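The embedding-alignment setup behind bilingual lexicon induction can be sketched with the standard orthogonal-Procrustes baseline. This is a textbook alignment method, not this paper's contribution; the data here is synthetic, with a hidden rotation standing in for the relationship between two monolingual embedding spaces.

```python
import numpy as np

# Orthogonal Procrustes: given embeddings X, Y for a seed dictionary of
# word pairs, find the rotation W minimizing ||X W - Y||_F.

rng = np.random.default_rng(0)
n, dim = 100, 16
X = rng.normal(size=(n, dim))                # "source language" embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # hidden true rotation
Y = X @ R_true                               # "target language" embeddings

# closed-form solution via SVD of X^T Y
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # recovered alignment maps X onto Y
```

On this noise-free toy data the closed-form solution recovers the hidden rotation exactly; with real embeddings the residual distance is what approaches like the one in this paper try to improve on.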
Author:
Xiao, Shitao, Liu, Zheng, Han, Weihao, Zhang, Jianjin, Lian, Defu, Gong, Yeyun, Chen, Qi, Yang, Fan, Sun, Hao, Shao, Yingxia, Deng, Denvy, Zhang, Qi, Xie, Xing
Vector quantization (VQ) based ANN indexes, such as Inverted File System (IVF) and Product Quantization (PQ), have been widely applied to embedding based document retrieval thanks to their competitive time and memory efficiency. Originally, VQ is learned…
External link:
http://arxiv.org/abs/2204.00185
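The product quantization (PQ) index named in this abstract can be sketched as follows. This is a toy illustration of the generic PQ idea, not the paper's retrieval-aligned training; for brevity it uses randomly sampled centroids instead of a real k-means fit.

```python
import numpy as np

# Product quantization sketch: split each vector into M sub-vectors,
# build a K-centroid codebook per subspace, and store a vector as M
# small integer codes instead of dim floats.

rng = np.random.default_rng(0)
n, dim, M, K = 200, 16, 4, 8   # M subspaces, K centroids per subspace
sub = dim // M
X = rng.normal(size=(n, dim))

# toy codebooks: K randomly sampled points per subspace (real PQ runs k-means)
codebooks = [X[rng.choice(n, K, replace=False), m*sub:(m+1)*sub]
             for m in range(M)]

def pq_encode(x):
    # nearest centroid id in each subspace
    return [int(np.linalg.norm(codebooks[m] - x[m*sub:(m+1)*sub], axis=1).argmin())
            for m in range(M)]

def pq_decode(codes):
    # concatenate the chosen centroids to reconstruct an approximation
    return np.concatenate([codebooks[m][c] for m, c in enumerate(codes)])

codes = pq_encode(X[0])
approx = pq_decode(codes)
# 16 floats compressed to 4 small integer codes, at some reconstruction error
print(len(codes), float(np.linalg.norm(X[0] - approx)))
```

The reconstruction error introduced by this compression is exactly the quantity whose interaction with retrieval quality work like this paper studies.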
Author:
Zhang, Jianjin, Liu, Zheng, Han, Weihao, Xiao, Shitao, Zheng, Ruicheng, Shao, Yingxia, Sun, Hao, Zhu, Hanqing, Srinivasan, Premkumar, Deng, Denvy, Zhang, Qi, Xie, Xing
Embedding based retrieval (EBR) is a fundamental building block in many web applications. However, EBR in sponsored search is distinguished from other generic scenarios and technically challenging due to the need to serve multiple retrieval purposes…
External link:
http://arxiv.org/abs/2202.06212
Author:
Xiao, Shitao, Liu, Zheng, Han, Weihao, Zhang, Jianjin, Shao, Yingxia, Lian, Defu, Li, Chaozhuo, Sun, Hao, Deng, Denvy, Zhang, Liangjie, Zhang, Qi, Xie, Xing
Ad-hoc search calls for the selection of appropriate answers from a massive-scale corpus. Nowadays, embedding-based retrieval (EBR) has become a promising solution, where deep learning based document representation and ANN search techniques are allied…
External link:
http://arxiv.org/abs/2201.05409
Author:
Jiang, Ting, Jiao, Jian, Huang, Shaohan, Zhang, Zihan, Wang, Deqing, Zhuang, Fuzhen, Wei, Furu, Huang, Haizhen, Deng, Denvy, Zhang, Qi
We propose PromptBERT, a novel contrastive learning method for learning better sentence representations. We first analyze the drawback of current sentence embeddings from the original BERT and find that it is mainly due to the static token embedding bias…
External link:
http://arxiv.org/abs/2201.04337
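The contrastive learning objective underlying methods like PromptBERT can be sketched with an in-batch InfoNCE-style loss. This is a generic illustration, not PromptBERT's prompt-based representation itself: the two "views" below stand in for two embeddings of the same sentence (e.g. from different prompt templates or dropout passes), and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim, tau = 8, 16, 0.05   # batch size, embedding dim, temperature

def normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

z1 = normalize(rng.normal(size=(batch, dim)))              # view 1 of each sentence
z2 = normalize(z1 + 0.1 * rng.normal(size=(batch, dim)))   # noisy view 2

# InfoNCE: each sentence's second view is the positive, the rest of the
# batch are in-batch negatives; positives sit on the diagonal.
sim = (z1 @ z2.T) / tau                       # cosine similarity / temperature
sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(round(float(loss), 4))
```

Minimizing this loss pulls the two views of each sentence together while pushing apart embeddings of different sentences in the batch.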