Showing 1 - 10 of 618 for search: '"YANG Linjun"'
Published in:
Zhejiang dianli, Vol 41, Iss 6, Pp 92-96 (2022)
SO3 emitted into the atmosphere from coal-fired flue gas imperils the ecological environment and human health, and SO3 emission control is now a central issue drawing common attention. Therefore, the paper analyzes the research progress and shor…
External link:
https://doaj.org/article/0e743150d6e348eb99c4e3a796d48d4e
Author:
Yang, Sheng, Wu, Yurong, Gao, Yan, Zhou, Zineng, Zhu, Bin Benjamin, Sun, Xiaodi, Lou, Jian-Guang, Ding, Zhiming, Hu, Anbang, Fang, Yuan, Li, Yunsong, Chen, Junyan, Yang, Linjun
Prompt engineering is very important for enhancing the performance of large language models (LLMs). When dealing with complex issues, prompt engineers tend to distill multiple patterns from examples and inject the relevant solutions to optimize the prompts, …
External link:
http://arxiv.org/abs/2410.08696
Author:
Wu, Yurong, Gao, Yan, Zhu, Bin Benjamin, Zhou, Zineng, Sun, Xiaodi, Yang, Sheng, Lou, Jian-Guang, Ding, Zhiming, Yang, Linjun
Prompt engineering is pivotal for harnessing the capabilities of large language models (LLMs) across diverse applications. While existing prompt optimization methods improve prompt effectiveness, they often lead to prompt drifting, where newly generated…
External link:
http://arxiv.org/abs/2410.08601
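The snippet above concerns iterative prompt optimization and the risk of prompt drifting. Below is a minimal, generic sketch of such a loop with a simple no-regression guard; it is an illustration only, not the method of this paper, and call_llm and propose_edit are hypothetical stand-ins for an LLM API call and a prompt-rewriting step.

from typing import Callable, List, Tuple

def optimize_prompt(
    seed_prompt: str,
    eval_set: List[Tuple[str, str]],                 # (input, expected output) pairs
    call_llm: Callable[[str], str],                  # hypothetical LLM call
    propose_edit: Callable[[str, List[Tuple[str, str]]], str],  # hypothetical prompt rewriter
    rounds: int = 5,
) -> str:
    """Iteratively rewrite a prompt, accepting a candidate only if it scores
    better than the current best on the evaluation set."""
    def score(prompt: str) -> float:
        hits = sum(
            1 for x, y in eval_set
            if y.strip().lower() in call_llm(prompt + "\n\nInput: " + x).lower()
        )
        return hits / max(len(eval_set), 1)

    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        failures = [
            (x, y) for x, y in eval_set
            if y.strip().lower() not in call_llm(best_prompt + "\n\nInput: " + x).lower()
        ]
        candidate = propose_edit(best_prompt, failures)   # rewrite around the failure cases
        candidate_score = score(candidate)
        if candidate_score > best_score:                  # no-regression guard
            best_prompt, best_score = candidate, candidate_score
    return best_prompt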
Author:
Chen, Qi, Geng, Xiubo, Rosset, Corby, Buractaon, Carolyn, Lu, Jingwen, Shen, Tao, Zhou, Kun, Xiong, Chenyan, Gong, Yeyun, Bennett, Paul, Craswell, Nick, Xie, Xing, Yang, Fan, Tower, Bryan, Rao, Nikhil, Dong, Anlei, Jiang, Wenqi, Liu, Zheng, Li, Mingqin, Liu, Chuanjie, Li, Zengzhong, Majumder, Rangan, Neville, Jennifer, Oakley, Andy, Risvik, Knut Magne, Simhadri, Harsha Vardhan, Varma, Manik, Wang, Yujing, Yang, Linjun, Yang, Mao, Zhang, Ce
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels, and modalities. In this paper, we introduce MS MARCO Web Search, the first large-scale information-rich web dataset, featuring millions of real clicked…
External link:
http://arxiv.org/abs/2405.07526
This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance…
External link:
http://arxiv.org/abs/2402.05672
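As a usage note for the embedding models described above: assuming the released checkpoints are the intfloat/multilingual-e5-small / -base / -large models on the Hugging Face Hub and that the sentence-transformers package is installed, a minimal encoding example might look like this (E5 models expect "query: " / "passage: " prefixes on the input text).

from sentence_transformers import SentenceTransformer

# Assumed checkpoint name; -base and -large follow the same pattern.
model = SentenceTransformer("intfloat/multilingual-e5-small")

queries = ["query: how much protein should a female eat"]
passages = ["passage: As a general guideline, an average adult woman needs about 46 grams of protein per day."]

q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
print(q_emb @ p_emb.T)  # cosine similarities, since the embeddings are normalized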
In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions…
External link:
http://arxiv.org/abs/2401.00368
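The snippet above outlines training text embeddings from synthetic data. Below is a minimal sketch of the kind of contrastive (InfoNCE) objective with in-batch negatives commonly used for such fine-tuning; it is a generic illustration, not this paper's exact recipe, and the random tensors stand in for encoder outputs.

import torch
import torch.nn.functional as F

def info_nce_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.02) -> torch.Tensor:
    """q_emb, p_emb: [batch, dim]; the i-th passage is the positive for the
    i-th query, and all other passages in the batch act as negatives."""
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.t() / temperature                   # [batch, batch] similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Random tensors stand in for query and passage encoder outputs.
print(float(info_nce_loss(torch.randn(8, 768), torch.randn(8, 768))))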
Modern search engines are built on a stack of different components, including query understanding, retrieval, multi-stage ranking, and question answering, among others. These components are often optimized and deployed independently. In this paper, we…
External link:
http://arxiv.org/abs/2310.14587
With the advance of large language models (LLMs), research on LLM applications has become increasingly popular, and the idea of constructing pipelines that accomplish complex tasks by stacking LLM API calls has become reality. However, this kind of method…
External link:
http://arxiv.org/abs/2305.14766
Author:
Yang, Nan, Ge, Tao, Wang, Liang, Jiao, Binxing, Jiang, Daxin, Yang, Linjun, Majumder, Rangan, Wei, Furu
We propose LLMA, an LLM accelerator that losslessly speeds up Large Language Model (LLM) inference with references. LLMA is motivated by the observation that there are abundant identical text spans between the decoding result of an LLM and the reference…
External link:
http://arxiv.org/abs/2304.04487
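The snippet above describes LLMA's observation that LLM outputs often repeat spans of a reference document. Below is a minimal sketch of the general copy-then-verify idea, not LLMA's exact algorithm; verify is a hypothetical stand-in for a batched model forward pass that reports how many drafted tokens the model itself would have produced, and the token values are toy integers.

from typing import Callable, List

def find_copy_span(output: List[int], reference: List[int],
                   match_len: int = 4, copy_len: int = 8) -> List[int]:
    """If the last `match_len` output tokens occur in the reference, return the
    next `copy_len` reference tokens as a draft; otherwise return []."""
    if len(output) < match_len:
        return []
    tail = output[-match_len:]
    for i in range(len(reference) - match_len + 1):
        if reference[i:i + match_len] == tail:
            return reference[i + match_len:i + match_len + copy_len]
    return []

def accelerated_step(output: List[int], reference: List[int],
                     verify: Callable[[List[int], List[int]], int]) -> List[int]:
    """Return the verified copied tokens to append, or [] to signal that the
    caller should fall back to ordinary one-token decoding."""
    draft = find_copy_span(output, reference)
    if not draft:
        return []
    accepted = verify(output, draft)   # hypothetical parallel check of the drafted tokens
    return draft[:accepted]

# Toy usage with a dummy verifier that accepts every drafted token:
print(accelerated_step([1, 2, 3, 4, 5], [9, 2, 3, 4, 5, 6, 7, 8], verify=lambda o, d: len(d)))  # [6, 7, 8]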
Author:
Sun, Hao, Liu, Xiao, Gong, Yeyun, Dong, Anlei, Lu, Jingwen, Zhang, Yan, Yang, Linjun, Majumder, Rangan, Duan, Nan
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional methods include response-based methods and feature-based methods. Response-based methods are widely used but suffer…
External link:
http://arxiv.org/abs/2212.05225
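The snippet above contrasts response-based and feature-based distillation. Below is a minimal sketch of a standard response-based distillation loss, blending softened teacher logits with the usual hard-label term; it is a generic illustration, not this paper's method.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    """Match the teacher's softened output distribution plus a hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale so gradients stay comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Random logits for a toy 10-class problem.
print(float(distillation_loss(torch.randn(4, 10), torch.randn(4, 10), torch.tensor([1, 0, 3, 7]))))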