Showing 1 - 10 of 10 results for the search: '"Halfon, Alon"'
Author:
Ein-Dor, Liat, Toledo-Ronen, Orith, Spector, Artem, Gretz, Shai, Dankin, Lena, Halfon, Alon, Katz, Yoav, Slonim, Noam
Prompts are how humans communicate with LLMs. Informative prompts are essential for guiding LLMs to produce the desired output. However, prompt engineering is often tedious and time-consuming, requiring significant expertise, limiting its widespread …
External link:
http://arxiv.org/abs/2408.04560
Author:
Halfon, Alon, Gretz, Shai, Arviv, Ofir, Spector, Artem, Toledo-Ronen, Orith, Katz, Yoav, Ein-Dor, Liat, Shmueli-Scheuer, Michal, Slonim, Noam
Fine-tuning Large Language Models (LLMs) is an effective method to enhance their performance on downstream tasks. However, choosing the appropriate setting of tuning hyperparameters (HPs) is a labor-intensive and computationally expensive process. …
External link:
http://arxiv.org/abs/2407.18990
Recent advances in large pretrained language models have increased attention to zero-shot text classification. In particular, models finetuned on natural language inference datasets have been widely adopted as zero-shot classifiers due to their …
External link:
http://arxiv.org/abs/2210.17541
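The NLI-based zero-shot scheme this abstract refers to can be sketched roughly as follows. Everything here is illustrative: the hypothesis template is one common choice, and the scoring function is a toy stand-in for a real NLI-finetuned model.

```python
# Toy sketch of NLI-based zero-shot text classification (illustrative only):
# each candidate label becomes a hypothesis ("This text is about <label>."),
# and the label whose hypothesis is most "entailed" by the text wins.

def entailment_score(premise, hypothesis):
    # Stand-in scorer: counts shared lowercase tokens. A real system would
    # score entailment with a model finetuned on an NLI dataset instead.
    tokens = lambda s: set(s.lower().replace(".", "").split())
    return len(tokens(premise) & tokens(hypothesis))

def zero_shot_classify(text, labels):
    hypotheses = {label: f"This text is about {label}." for label in labels}
    return max(labels, key=lambda label: entailment_score(text, hypotheses[label]))

print(zero_shot_classify(
    "This text talks about sports and the championship game.",
    ["sports", "finance", "weather"],
))
```

The key idea is only the reduction: classification over N labels becomes N premise-hypothesis entailment judgments, which is why an NLI model can classify without any task-specific training data.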
Author:
Shnarch, Eyal, Halfon, Alon, Gera, Ariel, Danilevsky, Marina, Katsis, Yannis, Choshen, Leshem, Cooper, Martin Santillan, Epelboim, Dina, Zhang, Zheng, Wang, Dakuo, Yip, Lucy, Ein-Dor, Liat, Dankin, Lena, Shnayderman, Ilya, Aharonov, Ranit, Li, Yunyao, Liberman, Naftali, Slesarev, Philip Levin, Newton, Gwilym, Ofek-Koifman, Shila, Slonim, Noam, Katz, Yoav
Text classification can be useful in many real-world scenarios, saving a lot of time for end users. However, building a custom classifier typically requires coding skills and ML knowledge, which poses a significant barrier for many potential users. …
External link:
http://arxiv.org/abs/2208.01483
Author:
Shnarch, Eyal, Gera, Ariel, Halfon, Alon, Dankin, Lena, Choshen, Leshem, Aharonov, Ranit, Slonim, Noam
In real-world scenarios, a text classification task often begins with a cold start, when labeled data is scarce. In such cases, the common practice of fine-tuning pre-trained models, such as BERT, for a target classification task, is prone to produce …
External link:
http://arxiv.org/abs/2203.10581
Author:
Ein-Dor, Liat, Gera, Ariel, Toledo-Ronen, Orith, Halfon, Alon, Sznajder, Benjamin, Dankin, Lena, Bilu, Yonatan, Katz, Yoav, Slonim, Noam
Extraction of financial and economic events from text has previously been done mostly using rule-based methods, with more recent works employing machine learning techniques. This work is in line with this latter approach, leveraging relevant Wikipedia …
External link:
http://arxiv.org/abs/1911.10783
Author:
Ein-Dor, Liat, Shnarch, Eyal, Dankin, Lena, Halfon, Alon, Sznajder, Benjamin, Gera, Ariel, Alzate, Carlos, Gleize, Martin, Choshen, Leshem, Hou, Yufang, Bilu, Yonatan, Aharonov, Ranit, Slonim, Noam
Published in:
AAAI 2020
One of the main tasks in argument mining is the retrieval of argumentative content pertaining to a given topic. Most previous work addressed this task by retrieving a relatively small number of relevant documents as the initial source for such content …
External link:
http://arxiv.org/abs/1911.10763
Author:
Shnayderman, Ilya, Ein-Dor, Liat, Mass, Yosi, Halfon, Alon, Sznajder, Benjamin, Spector, Artem, Katz, Yoav, Sheinwald, Dafna, Aharonov, Ranit, Slonim, Noam
Wikification of large corpora is beneficial for various NLP applications. Existing methods focus on quality performance rather than run-time, and are therefore non-feasible for large data. Here, we introduce RedW, a run-time oriented Wikification solution …
External link:
http://arxiv.org/abs/1908.06785
Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected …
External link:
http://arxiv.org/abs/1904.00669
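The kind of nearest-neighbor probe this abstract describes can be sketched on a toy vocabulary: retrieve the closest words by cosine similarity, then check whether they share the query's part of speech. The vectors and POS tags below are made up for illustration; they are not real embeddings or the paper's actual setup.

```python
# Toy sketch: cosine nearest neighbors in an embedding space, plus a check
# for syntactic interchangeability (same part of speech as the query).
import numpy as np

vectors = {
    "run":    np.array([0.9, 0.1, 0.0]),
    "walk":   np.array([0.8, 0.2, 0.1]),
    "runner": np.array([0.6, 0.6, 0.1]),
    "table":  np.array([0.0, 0.1, 0.9]),
}
pos = {"run": "VERB", "walk": "VERB", "runner": "NOUN", "table": "NOUN"}

def nearest_neighbors(query, k=2):
    q = vectors[query]
    sims = {
        w: float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for w, v in vectors.items() if w != query
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]

neighbors = nearest_neighbors("run")
# Semantically close neighbors need not be syntactically interchangeable:
interchangeable = [w for w in neighbors if pos[w] == pos["run"]]
print(neighbors, interchangeable)
```

Here "runner" is a near neighbor of "run" but, as a noun, cannot replace it in a sentence, which is exactly the semantic-vs-syntactic gap the abstract points at.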
Academic article
This result cannot be displayed to users who are not signed in.
Sign in to view this result.