Showing 1 - 10 of 189 for search: '"ALIANNEJADI, MOHAMMAD"'
Generating diverse and effective clarifying questions is crucial for improving query understanding and retrieval performance in open-domain conversational search (CS) systems. We propose AGENT-CQ (Automatic GENeration and evaluaTion of Clarifying Questions)…
External link:
http://arxiv.org/abs/2410.19692
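As a rough illustration of the kind of pipeline the abstract above describes, here is a minimal sketch of LLM-based clarifying question generation. It is not AGENT-CQ itself; `call_llm`, the prompt wording, and the output parsing are hypothetical stand-ins for a real text-generation API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client; returns a canned
    # response here so the sketch runs as-is.
    return ("1. Are you looking for reviews or technical specifications?\n"
            "2. Which product category do you mean?\n"
            "3. Do you want recent results only?")

def generate_clarifying_questions(query: str, n: int = 3) -> list[str]:
    # Ask the model for several diverse questions in one call, then
    # split the numbered list into individual questions.
    prompt = (
        f"The user issued the ambiguous search query: {query!r}.\n"
        f"Write {n} diverse clarifying questions, one per line, that would "
        "help narrow down the user's intent."
    )
    return [line.split(". ", 1)[-1].strip()
            for line in call_llm(prompt).splitlines() if line.strip()]

print(generate_clarifying_questions("apple"))
```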
Conversational Search (CS) is the task of retrieving relevant documents from a corpus within a conversational context, combining retrieval with conversational context modeling. With the explosion of Large Language Models (LLMs), the CS field has seen…
External link:
http://arxiv.org/abs/2410.14609
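One standard component of the conversational context modeling the abstract mentions is query rewriting: resolving the current, context-dependent turn into a self-contained query before retrieval. The sketch below illustrates that generic idea, not this paper's method; `call_llm` is a hypothetical stand-in for any text-generation API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical LLM client; canned response so the sketch runs as-is.
    return "Who directed the film Interstellar?"

def rewrite_query(history: list[str], turn: str) -> str:
    # Give the model the conversation so far and ask for a decontextualized
    # version of the latest turn, suitable for a standard retriever.
    prompt = (
        "Conversation so far:\n" + "\n".join(history) + "\n"
        f"Current question: {turn}\n"
        "Rewrite the current question so it can be understood without the "
        "conversation. Output only the rewritten question."
    )
    return call_llm(prompt)

print(rewrite_query(["Tell me about the film Interstellar."],
                    "Who directed it?"))
```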
Users post numerous product-related questions on e-commerce platforms, affecting their purchase decisions. Product-related question answering (PQA) entails utilizing product-related resources to provide precise responses to users. We propose a novel…
External link:
http://arxiv.org/abs/2409.16025
Authors:
Zhang, Weijia, Aliannejadi, Mohammad, Pei, Jiahuan, Yuan, Yifei, Huang, Jia-Hong, Kanoulas, Evangelos
Large language models (LLMs) often generate unsupported or unverifiable content, known as "hallucinations." To address this, retrieval-augmented LLMs are employed to include citations in their content, grounding the content in verifiable sources…
External link:
http://arxiv.org/abs/2408.12398
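To make the citation-grounding idea concrete: retrieved passages are numbered in the prompt and the model is asked to cite them inline. This is a generic retrieval-augmented generation sketch, not the paper's own method; `call_llm` and the prompt format are hypothetical.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical LLM client; canned response so the sketch runs as-is.
    return "Hallucinations are unsupported claims [1]; citations ground them [2]."

def answer_with_citations(question: str, passages: list[str]) -> str:
    # Number the retrieved passages so the model can cite them as [1], [2], ...
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
        "Answer using only the sources, citing them inline as [1], [2], ..."
    )
    return call_llm(prompt)

print(answer_with_citations("What causes hallucinations?",
                            ["LLMs can generate unsupported claims.",
                             "Citations ground output in verifiable sources."]))
```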
Authors:
Rahmani, Hossein A., Yilmaz, Emine, Craswell, Nick, Mitra, Bhaskar, Thomas, Paul, Clarke, Charles L. A., Aliannejadi, Mohammad, Siro, Clemencia, Faggioli, Guglielmo
The LLMJudge challenge is organized as part of the LLM4Eval workshop at SIGIR 2024. Test collections are essential for evaluating information retrieval (IR) systems. The evaluation and tuning of a search system is largely based on relevance labels…
External link:
http://arxiv.org/abs/2408.08896
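The core operation the challenge evaluates, producing relevance labels with an LLM, can be sketched as follows. The 0-3 graded scale, the prompt wording, and `call_llm` are assumptions for illustration; participants' actual prompts and label schemes vary.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical LLM client; canned response so the sketch runs as-is.
    return "2"

def judge_relevance(query: str, passage: str) -> int:
    # Ask for a single-digit graded label and parse it defensively.
    prompt = (
        "Rate how relevant the passage is to the query on a 0-3 scale "
        "(0 = irrelevant, 3 = perfectly relevant). Answer with one digit.\n"
        f"Query: {query}\nPassage: {passage}\nLabel:"
    )
    reply = call_llm(prompt).strip()
    return int(reply[0]) if reply[:1].isdigit() else 0  # fall back to 0

print(judge_relevance("holes in test collections",
                      "Incomplete judgments limit re-usability."))
```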
Authors:
Rahmani, Hossein A., Siro, Clemencia, Aliannejadi, Mohammad, Craswell, Nick, Clarke, Charles L. A., Faggioli, Guglielmo, Mitra, Bhaskar, Thomas, Paul, Yilmaz, Emine
The first edition of the workshop on Large Language Model for Evaluation in Information Retrieval (LLM4Eval 2024) took place in July 2024, co-located with the ACM SIGIR Conference 2024 in the USA (SIGIR 2024). The aim was to bring information retrieval…
External link:
http://arxiv.org/abs/2408.05388
Authors:
Askari, Arian, Meng, Chuan, Aliannejadi, Mohammad, Ren, Zhaochun, Kanoulas, Evangelos, Verberne, Suzan
Existing generative retrieval (GR) approaches rely on training-based indexing, i.e., fine-tuning a model to memorise the associations between a query and the document identifier (docid) of a relevant document. Training-based indexing has three limitations…
External link:
http://arxiv.org/abs/2408.02152
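For readers unfamiliar with GR, the mechanism the abstract refers to is decoding a docid token by token, constrained to valid identifiers via a prefix trie. Below is a minimal sketch of that constrained decoding loop; `score_tokens` is a hypothetical stand-in for a trained model's next-token scores, not anything from this paper.

```python
def build_trie(docids):
    # Prefix trie over docid strings; "<eos>" marks a complete docid.
    trie = {}
    for docid in docids:
        node = trie
        for ch in docid:
            node = node.setdefault(ch, {})
        node["<eos>"] = {}
    return trie

def score_tokens(query, prefix, candidates):
    # Hypothetical scorer: a real GR model would return learned logits
    # conditioned on the query and the decoded prefix.
    return {c: (1.0 if c in query else 0.0) for c in candidates}

def decode_docid(query, trie):
    # Greedy decoding restricted to trie branches, so the output is
    # guaranteed to be a valid docid.
    node, out = trie, []
    while True:
        candidates = list(node)
        if not candidates or candidates == ["<eos>"]:
            break
        best = max(candidates,
                   key=lambda c: score_tokens(query, "".join(out), candidates)[c])
        if best == "<eos>":
            break
        out.append(best)
        node = node[best]
    return "".join(out)

trie = build_trie(["d101", "d102", "d205"])
print(decode_docid("query mentioning 2 and 0 and 5", trie))  # -> d205
```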
At its core, information access and seeking is an interactive process. In existing search engines, interactions are limited to a few pre-defined actions, such as "requery", "click on a document", "scrolling up/down", "going to the next result page"…
External link:
http://arxiv.org/abs/2407.11605
Authors:
Zhang, Weijia, Aliannejadi, Mohammad, Yuan, Yifei, Pei, Jiahuan, Huang, Jia-Hong, Kanoulas, Evangelos
Large language models (LLMs) often produce unsupported or unverifiable content, known as "hallucinations." To mitigate this, retrieval-augmented LLMs incorporate citations, grounding the content in verifiable sources. Despite such developments, manually…
External link:
http://arxiv.org/abs/2406.15264
Incomplete relevance judgments limit the re-usability of test collections. When new systems are compared against previous systems used to build the pool of judged documents, they often do so at a disadvantage due to the "holes" in the test collection…
External link:
http://arxiv.org/abs/2405.05600
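To see why such holes matter, consider two common scoring conventions for unjudged documents: treat them as non-relevant, or drop them from the ranking before scoring (a "condensed list"). Neither is this paper's proposed method; the sketch below only illustrates the problem the abstract describes.

```python
def precision_at_k(ranking, qrels, k, condensed=False):
    # qrels maps judged doc ids to relevance grades; docs absent from
    # qrels are "holes" (never judged).
    if condensed:
        ranking = [d for d in ranking if d in qrels]  # drop unjudged docs
    top = ranking[:k]
    rel = sum(1 for d in top if qrels.get(d, 0) > 0)  # unjudged count as 0
    return rel / k

qrels = {"d1": 1, "d2": 0, "d3": 1}   # d4 and d5 were never judged
run = ["d4", "d1", "d5", "d3", "d2"]  # a new system retrieving unpooled docs

print(precision_at_k(run, qrels, 3))                  # 0.33: holes penalized
print(precision_at_k(run, qrels, 3, condensed=True))  # 0.67: holes removed
```

The gap between the two numbers for the same run is exactly the disadvantage the abstract attributes to new systems that retrieve documents outside the judged pool.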