Showing 1 - 10 of 37 for search: '"Malkiel, Itzik"'
Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives are limited for various reasons. One critical factor hindering their widespread adoption…
External link:
http://arxiv.org/abs/2403.02889
Efficient Discovery and Effective Evaluation of Visual Perceptual Similarity: A Benchmark and Beyond
Author:
Barkan, Oren, Reiss, Tal, Weill, Jonathan, Katz, Ori, Hirsch, Roy, Malkiel, Itzik, Koenigstein, Noam
Visual similarities discovery (VSD) is an important task with broad e-commerce applications. Given an image of a certain object, the goal of VSD is to retrieve images of different objects with high perceptual visual similarity. Although being a highly…
External link:
http://arxiv.org/abs/2308.14753
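As a rough illustration of the embed-and-rank approach underlying visual similarity retrieval (a minimal sketch only, not the benchmark's actual pipeline; the file paths and the ResNet-50 backbone are assumptions):

```python
# Illustrative sketch: embed images with a pretrained CNN and rank
# candidates by cosine similarity to a query image. Not the paper's method.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Hypothetical file paths; replace with real catalog images.
query_path = "query.jpg"
candidate_paths = ["item_01.jpg", "item_02.jpg", "item_03.jpg"]

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Drop the classification head so the network outputs a feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(image), dim=-1)  # unit-norm embedding

query = embed(query_path)
scores = [(p, F.cosine_similarity(query, embed(p)).item()) for p in candidate_paths]
for path, score in sorted(scores, key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")
```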
Author:
Barkan, Oren, Caciularu, Avi, Rejwan, Idan, Katz, Ori, Weill, Jonathan, Malkiel, Itzik, Koenigstein, Noam
We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the "long-tail", where the data is scarce…
External link:
http://arxiv.org/abs/2306.16326
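To give a flavor of how hierarchical side information can help long-tail entities, here is an illustrative sketch (not the VBN model itself) of hierarchical shrinkage, where each item embedding gets a Gaussian-style prior centred on its parent-category embedding; all sizes and names are assumed:

```python
# Illustrative sketch of hierarchical shrinkage: rarely observed items
# borrow statistical strength from their category embedding. Not VBN.
import torch
import torch.nn as nn

NUM_ITEMS, NUM_CATEGORIES, DIM = 1000, 20, 32      # assumed sizes
item_to_cat = torch.randint(0, NUM_CATEGORIES, (NUM_ITEMS,))

item_emb = nn.Embedding(NUM_ITEMS, DIM)
cat_emb = nn.Embedding(NUM_CATEGORIES, DIM)

def prior_penalty(item_ids, tau=0.1):
    """Quadratic shrinkage of item embeddings toward their category mean."""
    diff = item_emb(item_ids) - cat_emb(item_to_cat[item_ids])
    return (diff.pow(2).sum(dim=-1) / (2 * tau)).mean()

# Toy usage: a placeholder likelihood term plus the hierarchical prior.
item_ids = torch.randint(0, NUM_ITEMS, (16,))
likelihood = item_emb(item_ids).sum()              # stand-in for a real data term
loss = -likelihood + prior_penalty(item_ids)
loss.backward()
```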
Author:
Malkiel, Itzik, Alon, Uri, Yehuda, Yakir, Keren, Shahar, Barkan, Oren, Ronen, Royi, Koenigstein, Noam
Transcriptions of phone calls are of significant value across diverse fields, such as sales, customer service, healthcare, and law enforcement. Nevertheless, the analysis of these recorded conversations can be an arduous and time-intensive process…
External link:
http://arxiv.org/abs/2306.07941
Author:
Malkiel, Itzik, Ginzburg, Dvir, Barkan, Oren, Caciularu, Avi, Weill, Jonathan, Koenigstein, Noam
Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity. Despite significant progress in the field, the explanations for similarity…
External link:
http://arxiv.org/abs/2208.06612
We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task. We focus on downstream tasks of learning similarities for recommendation…
External link:
http://arxiv.org/abs/2208.06610
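The abstract describes a dual objective: a similarity metric over embeddings trained jointly with masked-language modeling. Below is a minimal sketch of that pattern with a stock BERT checkpoint; the triplet texts, the masking choice, and the loss weighting are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch of a dual objective: triplet similarity loss over pooled
# BERT embeddings combined with the standard MLM loss. Not MetricBERT itself.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def pooled(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[-1]                    # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)       # mean pooling

# Toy anchor / positive / negative triplet (illustrative text).
anchor = pooled(["a sci-fi movie about space travel"])
positive = pooled(["an interstellar science fiction film"])
negative = pooled(["a cookbook of pasta recipes"])
metric_loss = torch.nn.functional.triplet_margin_loss(anchor, positive, negative)

# Standard MLM loss on a masked copy of the input (labels = original ids).
batch = tokenizer(["a sci-fi movie about space travel"], return_tensors="pt")
labels = batch["input_ids"].clone()
masked = batch["input_ids"].clone()
masked[0, 3] = tokenizer.mask_token_id                # mask one token
mlm_loss = model(input_ids=masked,
                 attention_mask=batch["attention_mask"],
                 labels=labels).loss

loss = metric_loss + 0.5 * mlm_loss   # the 0.5 weighting is an arbitrary choice
loss.backward()
```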
Author:
Barkan, Oren, Hauon, Edan, Caciularu, Avi, Katz, Ori, Malkiel, Itzik, Armstrong, Omri, Koenigstein, Noam
Transformer-based language models have significantly advanced the state of the art in many linguistic tasks. As this revolution continues, the ability to explain model predictions has become a major area of interest for the NLP community. In this work, we…
External link:
http://arxiv.org/abs/2204.11073
We present TFF, a Transformer framework for the analysis of functional Magnetic Resonance Imaging (fMRI) data. TFF employs a two-phase training approach. First, self-supervised training is applied to a collection of fMRI scans, where the model…
External link:
http://arxiv.org/abs/2112.05761
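The two-phase recipe described above (self-supervised pretraining on unlabeled scans, then supervised fine-tuning) can be sketched schematically as follows; the tiny Transformer, the dimensions, and the random tensors are placeholders, not TFF's architecture or data:

```python
# Schematic two-phase training: (1) self-supervised reconstruction
# pretraining, (2) supervised fine-tuning with a small head. Not TFF itself.
import torch
import torch.nn as nn

D_FEAT, D_MODEL, SEQ_LEN = 64, 128, 20             # assumed sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_FEAT, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                           # x: (B, SEQ_LEN, D_FEAT)
        return self.encoder(self.proj(x))           # (B, SEQ_LEN, D_MODEL)

encoder = Encoder()

# Phase 1: self-supervised pretraining (reconstruct the input frames).
decoder = nn.Linear(D_MODEL, D_FEAT)
pretrain_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
for _ in range(3):                                  # a few toy steps
    scans = torch.randn(8, SEQ_LEN, D_FEAT)         # stand-in for fMRI sequences
    recon = decoder(encoder(scans))
    loss = nn.functional.mse_loss(recon, scans)
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()

# Phase 2: fine-tune the pretrained encoder on a downstream prediction task.
head = nn.Linear(D_MODEL, 2)                        # e.g. a binary label
finetune_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-5)
scans = torch.randn(8, SEQ_LEN, D_FEAT)
labels = torch.randint(0, 2, (8,))
logits = head(encoder(scans).mean(dim=1))           # pool over time steps
loss = nn.functional.cross_entropy(logits, labels)
finetune_opt.zero_grad()
loss.backward()
finetune_opt.step()
```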
The recently introduced hateful meme challenge demonstrates the difficulty of determining whether a meme is hateful or not. Specifically, both unimodal language models and multimodal vision-language models cannot reach the human level of performance.
External link:
http://arxiv.org/abs/2109.10649
Author:
Barkan, Oren, Armstrong, Omri, Hertz, Amir, Caciularu, Avi, Katz, Ori, Malkiel, Itzik, Koenigstein, Noam
We present Gradient Activation Maps (GAM) - a machinery for explaining predictions made by visual similarity and classification models. By gleaning localized gradient and activation information from multiple network layers, GAM offers improved visual…
External link:
http://arxiv.org/abs/2109.00951
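The abstract's core idea is combining gradients with activations to explain a prediction. The sketch below shows a single-layer, Grad-CAM-style gradient-activation map as a hedged illustration of that idea; it is not the GAM method, and the hooked layer and random input are assumptions:

```python
# Hedged sketch: capture activations and gradients at one convolutional
# block and combine them into a coarse saliency map. Not GAM itself.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

captured = {}
def save_activation(module, inputs, output):
    output.retain_grad()                   # keep gradients for the explanation
    captured["act"] = output

# Hook the last convolutional block (an arbitrary choice for illustration).
model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)        # stand-in for a real input image
logits = model(image)
logits[0, logits.argmax()].backward()      # gradient of the top class score

act = captured["act"]                      # (1, C, H, W) activations
grad = act.grad                            # matching gradients
weights = grad.mean(dim=(2, 3), keepdim=True)       # per-channel weight
saliency = torch.relu((weights * act).sum(dim=1))   # (1, H, W) heatmap
saliency = saliency / (saliency.max() + 1e-8)
print(saliency.shape, saliency.min().item(), saliency.max().item())
```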