Showing 1 - 10 of 183 for the search: '"Nardini, Franco Maria"'
Author:
Busolin, Francesco, Lucchese, Claudio, Nardini, Franco Maria, Orlando, Salvatore, Perego, Raffaele, Trani, Salvatore
Learned dense representations are a popular family of techniques for encoding queries and documents using high-dimensional embeddings, which enable retrieval by performing approximate k nearest-neighbors search (A-kNN). A popular technique for making…
External link:
http://arxiv.org/abs/2408.04981
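The entry above describes retrieval by approximate k-nearest-neighbors search over dense embeddings. As a minimal illustrative sketch (pure Python, hypothetical names, not the paper's method), this is the exact brute-force top-k by inner product that A-kNN techniques approximate:

```python
# Exact top-k retrieval by inner product over dense embeddings.
# A-kNN indexes (e.g. graph- or cluster-based) trade exactness for
# speed; this brute-force scan is the baseline they approximate.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def knn(query, docs, k):
    """Return indices of the k docs with the highest inner product."""
    scores = [(dot(query, d), i) for i, d in enumerate(docs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(knn([1.0, 0.1], docs, 2))  # → [0, 2]
```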
Learned sparse representations form an effective and interpretable class of embeddings for text retrieval. While exact top-k retrieval over such embeddings faces efficiency challenges, a recent algorithm called Seismic has enabled remarkably fast, hi…
External link:
http://arxiv.org/abs/2408.04443
Clustering-based nearest neighbor search is a simple yet effective method in which data points are partitioned into geometric shards to form an index, and only a few shards are searched during query processing to find an approximate set of top-$k$ ve…
External link:
http://arxiv.org/abs/2405.12207
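The clustering-based scheme in the entry above can be sketched in a few lines (an illustrative toy, with hand-fixed centroids; real indexes learn them, e.g. with k-means): points are assigned to the nearest centroid's shard at build time, and only the `nprobe` closest shards are scanned per query.

```python
# IVF-style clustering-based approximate nearest neighbor search:
# partition points into shards by nearest centroid, then search
# only a few shards per query.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_index(points, centroids):
    shards = {c: [] for c in range(len(centroids))}
    for i, p in enumerate(points):
        c = min(range(len(centroids)), key=lambda j: dist2(p, centroids[j]))
        shards[c].append(i)
    return shards

def search(query, points, centroids, shards, k, nprobe=1):
    # Rank shards by centroid distance, scan only the closest nprobe.
    order = sorted(range(len(centroids)), key=lambda j: dist2(query, centroids[j]))
    cand = [i for c in order[:nprobe] for i in shards[c]]
    cand.sort(key=lambda i: dist2(query, points[i]))
    return cand[:k]

points = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
centroids = [[0.0, 0.0], [5.0, 5.0]]
shards = build_index(points, centroids)
print(search([0.02, 0.0], points, centroids, shards, k=2))  # → [0, 1]
```

With `nprobe=1` the search is approximate: a true neighbor sitting in an unprobed shard is missed, which is exactly the accuracy/speed trade-off the entry refers to.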
Learned sparse representations form an attractive class of contextual embeddings for text retrieval. That is so because they are effective models of relevance and are interpretable by design. Despite their apparent compatibility with inverted indexes…
External link:
http://arxiv.org/abs/2404.18812
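The inverted-index compatibility mentioned above rests on a simple fact: sparse embeddings score documents by a dot product over shared nonzero terms, so mapping each term to its posting list makes scoring efficient. A minimal sketch (illustrative names and weights):

```python
# Top-k retrieval over learned sparse vectors with an inverted index:
# each term maps to a posting list of (doc_id, weight) pairs, and a
# query accumulates dot-product contributions term by term.
from collections import defaultdict

def build_inverted_index(docs):
    index = defaultdict(list)  # term -> [(doc_id, weight), ...]
    for doc_id, vec in enumerate(docs):
        for term, w in vec.items():
            index[term].append((doc_id, w))
    return index

def score(query, index):
    scores = defaultdict(float)
    for term, qw in query.items():
        for doc_id, dw in index.get(term, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = [{"neural": 1.2, "search": 0.5}, {"search": 1.0, "index": 0.8}]
idx = build_inverted_index(docs)
print(score({"search": 1.0, "index": 0.5}, idx))  # highest-scoring doc first
```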
A critical piece of the modern information retrieval puzzle is approximate nearest neighbor search. Its objective is to return a set of $k$ data points that are closest to a query point, with its accuracy measured by the proportion of exact nearest n…
External link:
http://arxiv.org/abs/2404.11731
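The accuracy measure the entry describes, the proportion of exact nearest neighbors that the approximate result recovers, is commonly called recall@k and is a one-liner (illustrative sketch):

```python
# Recall@k: fraction of the exact top-k neighbors that the
# approximate search managed to return (order does not matter).

def recall_at_k(approx_ids, exact_ids):
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

print(recall_at_k([3, 7, 9, 4], [3, 7, 2, 4]))  # → 0.75
```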
Dense retrieval techniques employ pre-trained large language models to build a high-dimensional representation of queries and passages. The relevance of a passage w.r.t. a query is then computed using efficient similarity measures. In…
External link:
http://arxiv.org/abs/2404.02805
Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval. This duality exists because sparse and d…
External link:
http://arxiv.org/abs/2309.09013
Author:
Paparella, Vincenzo, Anelli, Vito Walter, Nardini, Franco Maria, Perego, Raffaele, Di Noia, Tommaso
Information Retrieval (IR) and Recommender Systems (RS) tasks are moving from computing a ranking of final results based on a single metric to multi-objective problems. Solving these problems leads to a set of Pareto-optimal solutions, known as Paret…
External link:
http://arxiv.org/abs/2306.12165
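The Pareto-optimal set mentioned above contains every solution not dominated by another, i.e. no other solution is at least as good on every objective and strictly better on one. A small sketch for two objectives to maximize (hypothetical values, e.g. effectiveness vs. efficiency):

```python
# Pareto front of a set of multi-objective solutions (maximization).

def dominates(a, b):
    """a dominates b: >= on every objective, > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

solutions = [(0.9, 10), (0.8, 50), (0.7, 40), (0.95, 5)]
print(pareto_front(solutions))  # → [(0.9, 10), (0.8, 50), (0.95, 5)]
```

(0.7, 40) is dropped because (0.8, 50) beats it on both objectives; the remaining three trade one objective off against the other.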
Quantization and pruning are two effective Deep Neural Networks model compression methods. In this paper, we propose Automatic Prune Binarization (APB), a novel compression technique combining quantization with pruning. APB enhances the representatio…
External link:
http://arxiv.org/abs/2306.08960
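To make the combination of pruning and binarization concrete, here is a toy sketch of the general idea only, not APB's actual algorithm (helper names and the shared-magnitude choice are assumptions): small weights are pruned to zero, and the survivors keep only their sign times one shared magnitude.

```python
# Illustrative prune-then-binarize compression: weights below a
# magnitude threshold become 0; the rest are replaced by a single
# shared magnitude (mean absolute value of kept weights) with sign.

def prune_and_binarize(weights, threshold):
    kept = [w for w in weights if abs(w) >= threshold]
    alpha = sum(abs(w) for w in kept) / len(kept)  # shared magnitude
    return [0.0 if abs(w) < threshold else (alpha if w > 0 else -alpha)
            for w in weights]

w = [0.05, -0.8, 0.6, -0.02, 0.9]
print(prune_and_binarize(w, 0.1))
```

Each surviving weight then needs only one bit (its sign) plus the shared scalar, which is the storage saving such schemes target.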
This monograph takes a step towards promoting the study of efficiency in the era of neural information retrieval by offering a comprehensive survey of the literature on efficiency and effectiveness in ranking and, to a limited extent, retrieval. This…
External link:
http://arxiv.org/abs/2305.08680