Showing 1 - 10 of 348 results for the search: '"zhao, Xinran"'
Recent advances in measuring hardness-wise properties of data guide language models in sample selection within low-resource scenarios. However, class-specific properties are overlooked for task setup and learning. How will these properties influence…
External link:
http://arxiv.org/abs/2407.12512
Author:
Cai, Fengyu, Zhao, Xinran, Chen, Tong, Chen, Sihao, Zhang, Hongming, Gurevych, Iryna, Koeppl, Heinz
Recent studies show the growing significance of document retrieval in the generation of LLMs, i.e., retrieval-augmented generation (RAG), within the scientific domain by bridging their knowledge gap. However, dense retrievers often struggle with domain-specific retrieval and complex…
External link:
http://arxiv.org/abs/2407.10691
Author:
Zhao, Xinran, Dai, Lin
With the ever-growing demand for low-latency services in machine-to-machine (M2M) communications, the delay performance of random access networks has become a primary concern, which critically depends on the sensing capability of nodes. To understand…
External link:
http://arxiv.org/abs/2406.02999
The task of Information Retrieval (IR) requires a system to identify relevant documents based on users' information needs. In real-world scenarios, retrievers are expected to not only rely on the semantic relevance between the documents and the queries…
External link:
http://arxiv.org/abs/2405.02714
Author:
Zhao, Xinran, Zhang, Hongming, Pan, Xiaoman, Yao, Wenlin, Yu, Dong, Wu, Tongshuang, Chen, Jianshu
Published in:
Findings of the Association for Computational Linguistics ACL 2024
For an LLM to be trustworthy, its confidence level should be well-calibrated with its actual performance. While it is now common sense that LLM performance is greatly impacted by prompts, the confidence calibration in prompting LLMs has yet to be…
External link:
http://arxiv.org/abs/2402.17124
Author:
Chen, Tong, Wang, Hongwei, Chen, Sihao, Yu, Wenhao, Ma, Kaixin, Zhao, Xinran, Zhang, Hongming, Yu, Dong
Dense retrieval has become a prominent method to obtain relevant context or world knowledge in open-domain NLP tasks. When we use a learned dense retriever on a retrieval corpus at inference time, an often-overlooked design choice is the retrieval unit…
External link:
http://arxiv.org/abs/2312.06648
Large language models (LLMs) acquire extensive knowledge during pre-training, known as their parametric knowledge. However, in order to remain up-to-date and align with human instructions, LLMs inevitably require external knowledge during their inter…
External link:
http://arxiv.org/abs/2309.08594
Although large-scale pre-trained language models (PTLMs) are shown to encode rich knowledge in their model parameters, the inherent knowledge in PTLMs can be opaque or static, making external knowledge necessary. However, the existing information retrieval…
External link:
http://arxiv.org/abs/2307.10442
While advances in pre-training have led to dramatic improvements in few-shot learning of NLP tasks, there is limited understanding of what drives successful few-shot adaptation in datasets. In particular, given a new dataset and a pre-trained model,…
External link:
http://arxiv.org/abs/2211.09113
Published in:
Soldering & Surface Mount Technology, 2023, Vol. 36, Issue 2, pp. 93-100.
External link:
http://www.emeraldinsight.com/doi/10.1108/SSMT-08-2023-0051