Showing 1 - 10 of 10
for the search: '"Menon, Rakesh R."'
Author:
Sadat, Mobashir, Zhou, Zhengyu, Lange, Lukas, Araki, Jun, Gundroo, Arsalan, Wang, Bingqing, Menon, Rakesh R, Parvez, Md Rizwan, Feng, Zhe
Hallucination is a well-known phenomenon in text generated by large language models (LLMs). Hallucinatory responses appear in almost all application scenarios, e.g., summarization, question answering (QA), etc. For applications requiring…
External link:
http://arxiv.org/abs/2312.05200
Recent approaches have explored language-guided classifiers capable of classifying examples from novel tasks when provided with task-specific natural language explanations, instructions, or prompts (Sanh et al., 2022; R. Menon et al., 2022). While the…
External link:
http://arxiv.org/abs/2311.07538
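The snippet above describes classifiers steered purely by natural-language instructions or prompts. As a rough illustration of that general idea (not the method of the linked paper), a prompt-driven zero-shot classifier can be built on an off-the-shelf NLI model; the model name, labels, and hypothesis template below are assumptions:

```python
# Illustrative sketch only: a generic prompt/NLI-based zero-shot classifier,
# not the linked paper's method. Model and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# A task described purely in natural language: no labeled examples needed.
example = "The touchscreen stopped responding after the latest update."
result = classifier(
    example,
    candidate_labels=["hardware issue", "software issue", "billing question"],
    hypothesis_template="This support ticket is about a {}.")
print(result["labels"][0], result["scores"][0])
```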
Generalized quantifiers (e.g., few, most) indicate the proportion in which a predicate is satisfied (for example, "some apples are red"). One way to interpret quantifier semantics is to explicitly bind these satisfaction proportions to percentage scopes (e.g.,…
External link:
http://arxiv.org/abs/2311.04659
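A minimal sketch of the "percentage scope" reading described above: each quantifier is bound to a proportion range, and a quantified statement holds if the observed proportion of satisfied predicates falls inside that range. The ranges here are illustrative assumptions, not the paper's calibration:

```python
# Toy "percentage scope" semantics for generalized quantifiers.
# The ranges are illustrative assumptions, not the paper's calibration.
QUANTIFIER_SCOPES = {
    "none": (0.0, 0.0),
    "few":  (0.0, 0.2),
    "some": (0.0, 1.0),   # existential: needs a strictly positive proportion
    "most": (0.5, 1.0),
    "all":  (1.0, 1.0),
}

def quantifier_holds(quantifier, items, predicate):
    """Check whether e.g. 'most apples are red' holds on a concrete set."""
    proportion = sum(predicate(x) for x in items) / len(items)
    if quantifier == "some":
        return proportion > 0.0
    lo, hi = QUANTIFIER_SCOPES[quantifier]
    return lo <= proportion <= hi

apples = ["red", "red", "green", "red"]
print(quantifier_holds("most", apples, lambda a: a == "red"))  # True: 0.75 in [0.5, 1.0]
```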
Understanding the internal reasoning behind the predictions of machine learning systems is increasingly vital, given their rising adoption and acceptance. While previous approaches, such as LIME, generate algorithmic explanations by attributing importance…
External link:
http://arxiv.org/abs/2305.12995
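For context on the LIME baseline the snippet contrasts against, here is a minimal sketch of the feature-attribution explanations it produces, using the lime package with a toy stand-in classifier (the classifier and text are assumptions, not the paper's setup):

```python
# Sketch of LIME-style feature attribution (the baseline the snippet names);
# the tiny classifier is a stand-in, not the linked paper's setup.
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Stand-in "model": positive if the word 'good' appears.
    scores = np.array([1.0 if "good" in t.lower() else 0.0 for t in texts])
    return np.column_stack([1 - scores, scores])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "The plot was good but the pacing dragged.",
    predict_proba, num_features=4)
print(explanation.as_list())  # [(word, importance weight), ...]
```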
A hallmark of human intelligence is the ability to learn new concepts purely from language. Several recent approaches have explored training machine learning models via natural language supervision. However, these approaches fall short in leveraging…
External link:
http://arxiv.org/abs/2212.09104
Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task. In contrast, humans can learn new concepts from language alone. Here, we explore training zero-shot classifiers for structured data…
External link:
http://arxiv.org/abs/2204.07142
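A hedged sketch of the general recipe the snippet points at: serialize a structured record into plain language, then let a natural-language explanation of the task drive a zero-shot decision. The serialization scheme, example task, and model below are assumptions, not the paper's exact pipeline:

```python
# Illustrative recipe only: serialize a tabular record to text, then let a
# natural-language explanation steer an off-the-shelf zero-shot model.
# Serialization, task, and model are assumptions, not the paper's pipeline.
from transformers import pipeline

def serialize(record):
    """Flatten a structured row into a plain-language description."""
    return ". ".join(f"{field} is {value}" for field, value in record.items())

record = {"odor": "foul", "cap color": "white", "habitat": "woods"}
explanation = "If the odor is foul, then the mushroom is poisonous."

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = nli(serialize(record) + ". " + explanation,
             candidate_labels=["poisonous", "edible"])
print(result["labels"][0])
```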
Recently, pre-trained language models (LMs) have achieved strong performance when fine-tuned on difficult benchmarks like SuperGLUE. However, performance can suffer when very few labeled examples are available for fine-tuning. Pattern Exploiting…
External link:
http://arxiv.org/abs/2103.11955
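The snippet cuts off at Pattern Exploiting Training (PET), which reformulates classification as a cloze task: a pattern turns the input into a masked sentence, and a verbalizer maps candidate mask fillers back to labels. A minimal inference-time sketch under assumed pattern, verbalizers, and model (the actual method also involves training on the patterns):

```python
# Sketch of the pattern-verbalizer idea behind PET. The pattern, verbalizers,
# and model are assumptions; this shows only the cloze reformulation.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

review = "The acting was wooden and the plot made no sense."
pattern = f"{review} All in all, it was <mask>."          # cloze-style pattern
verbalizers = {"great": "positive", "terrible": "negative"}

# Score only the verbalizer tokens and map the winner back to a label.
predictions = fill(pattern, targets=list(verbalizers))
best = max(predictions, key=lambda p: p["score"])
print(verbalizers[best["token_str"].strip()], best["score"])
```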
Author:
Menon, Rakesh R., Ravi, V.
Published in:
Journal of Modelling in Management, 2021, Vol. 17, Issue 4, pp. 1319-1350.
External link:
http://www.emeraldinsight.com/doi/10.1108/JM2-02-2021-0042
Author:
Menon, Rakesh R., Ravi, V.
Published in:
Cleaner Materials, September 2022, Vol. 5.
Author:
Menon, Rakesh R, Ravindran, Balaraman
Deep Reinforcement Learning has achieved remarkable successes in a variety of domains, from video games to continuous control, by maximizing the cumulative reward. However, most of these successes rely on algorithms that require a large…
External link:
http://arxiv.org/abs/1709.04909
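As a minimal illustration of the "maximize the cumulative reward" objective mentioned above (not the ensemble method of the linked paper), here is tabular Q-learning on a toy chain MDP; all environment details and hyperparameters are assumptions:

```python
# Minimal tabular Q-learning on a toy chain MDP: an illustration of
# maximizing cumulative reward, not the linked paper's ensemble method.
import random

N_STATES, ACTIONS = 6, (0, 1)            # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != N_STATES - 1:             # rightmost state is terminal
        if random.random() < eps:        # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])     # values grow toward the goal state
```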