Showing 1 - 10 of 34
for the search: '"Fröbe, Maik"'
Author:
Thakur, Nandan, Bonifacio, Luiz, Fröbe, Maik, Bondarenko, Alexander, Kamalloo, Ehsan, Potthast, Martin, Hagen, Matthias, Lin, Jimmy
The zero-shot effectiveness of neural retrieval models is often evaluated on the BEIR benchmark -- a combination of different IR evaluation datasets. Interestingly, previous studies found that particularly on the BEIR subset Touché 2020, an argument…
External link:
http://arxiv.org/abs/2407.07790
Author:
Schlatt, Ferdinand, Fröbe, Maik, Scells, Harrisen, Zhuang, Shengyao, Koopman, Bevan, Zuccon, Guido, Stein, Benno, Potthast, Martin, Hagen, Matthias
Cross-encoders distilled from large language models (LLMs) are often more effective re-rankers than cross-encoders fine-tuned on manually labeled data. However, the distilled models usually do not reach their teacher LLM's effectiveness. To investigate…
External link:
http://arxiv.org/abs/2405.07920
Author:
Schlatt, Ferdinand, Fröbe, Maik, Scells, Harrisen, Zhuang, Shengyao, Koopman, Bevan, Zuccon, Guido, Stein, Benno, Potthast, Martin, Hagen, Matthias
Existing cross-encoder re-rankers can be categorized as pointwise, pairwise, or listwise models. Pair- and listwise models allow passage interactions, which usually makes them more effective than pointwise models but also less efficient and less robust…
External link:
http://arxiv.org/abs/2404.06912
Modern sequence-to-sequence relevance models like monoT5 can effectively capture complex textual interactions between queries and documents through cross-encoding. However, the use of natural language tokens in prompts, such as Query, Document, and R…
External link:
http://arxiv.org/abs/2403.07654
Cross-encoders are effective passage and document re-rankers but less efficient than other neural or classic retrieval models. A few previous studies have applied windowed self-attention to make cross-encoders more efficient. However, these studies d…
External link:
http://arxiv.org/abs/2312.17649
Author:
Gienapp, Lukas, Scells, Harrisen, Deckers, Niklas, Bevendorff, Janek, Wang, Shuai, Kiesel, Johannes, Syed, Shahbaz, Fröbe, Maik, Zuccon, Guido, Stein, Benno, Hagen, Matthias, Potthast, Martin
Recent advances in large language models have enabled the development of viable generative retrieval systems. Instead of a traditional document ranking, generative retrieval systems often directly return a grounded generated text as a response to a query…
External link:
http://arxiv.org/abs/2311.04694
Author:
Fröbe, Maik, Reimer, Jan Heinrich, MacAvaney, Sean, Deckers, Niklas, Reich, Simon, Bevendorff, Janek, Stein, Benno, Hagen, Matthias, Potthast, Martin
We integrate ir_datasets, ir_measures, and PyTerrier with TIRA in the Information Retrieval Experiment Platform (TIREx) to promote more standardized, reproducible, scalable, and even blinded retrieval experiments. Standardization is achieved when a r…
External link:
http://arxiv.org/abs/2305.18932
Author:
Reimer, Jan Heinrich, Schmidt, Sebastian, Fröbe, Maik, Gienapp, Lukas, Scells, Harrisen, Stein, Benno, Hagen, Matthias, Potthast, Martin
The Archive Query Log (AQL) is a previously unused, comprehensive query log collected at the Internet Archive over the last 25 years. Its first version includes 356 million queries, 166 million search result pages, and 1.7 billion search results across…
External link:
http://arxiv.org/abs/2304.00413
Author:
Deckers, Niklas, Fröbe, Maik, Kiesel, Johannes, Pandolfo, Gianluca, Schröder, Christopher, Stein, Benno, Potthast, Martin
Conditional generative models such as DALL-E and Stable Diffusion generate images based on a user-defined text, the prompt. Finding and refining prompts that produce a desired image has become the art of prompt engineering. Generative models do not p…
External link:
http://arxiv.org/abs/2212.07476
Pairwise re-ranking models predict which of two documents is more relevant to a query and then aggregate a final ranking from such preferences. This is often more effective than pointwise re-ranking models that directly predict a relevance value for…
External link:
http://arxiv.org/abs/2207.04470