Showing 1 - 10 of 10 for the search: '"Lal, Yash Kumar"'
Safety classifiers are critical in mitigating toxicity on online forums such as social media and in chatbots. Still, they continue to be vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation…
External link:
http://arxiv.org/abs/2406.17104
Author:
Lal, Yash Kumar, Cohen, Vanya, Chambers, Nathanael, Balasubramanian, Niranjan, Mooney, Raymond
Understanding the abilities of LLMs to reason about natural language plans, such as instructional text and recipes, is critical to reliably using them in decision-making systems. A fundamental aspect of plans is the temporal order in which their steps…
External link:
http://arxiv.org/abs/2406.15823
Author:
Dey, Gourab, Ganesan, Adithya V, Lal, Yash Kumar, Shah, Manal, Sinha, Shreyashee, Matero, Matthew, Giorgi, Salvatore, Kulkarni, Vivek, Schwartz, H. Andrew
Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data. Instruction tuning has been shown to improve the many capabilities…
External link:
http://arxiv.org/abs/2402.01980
Author:
Lal, Yash Kumar, Zhang, Li, Brahman, Faeze, Majumder, Bodhisattwa Prasad, Clark, Peter, Tandon, Niket
How-to procedures, such as how to plant a garden, are now used by millions of users, but sometimes need customizing to meet a user's specific needs, e.g., planting a garden without pesticides. Our goal is to measure and improve an LLM's ability to perform…
External link:
http://arxiv.org/abs/2311.09510
We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs…
External link:
http://arxiv.org/abs/2306.16722
Very large language models (LLMs) perform extremely well on a spectrum of NLP tasks in a zero-shot setting. However, little is known about their performance on human-level NLP problems which rely on understanding psychological concepts, such as assessing…
External link:
http://arxiv.org/abs/2306.01183
Answering questions about why characters perform certain actions is central to understanding and reasoning about narratives. Despite recent progress in QA, it is not clear if existing models have the ability to answer "why" questions that may require…
External link:
http://arxiv.org/abs/2106.06132
Author:
Cao, Qingqing, Lal, Yash Kumar, Trivedi, Harsh, Balasubramanian, Aruna, Balasubramanian, Niranjan
Existing software-based energy measurements of NLP models are not accurate because they do not consider the complex interactions between energy consumption and model execution. We present IrEne, an interpretable and extensible energy prediction system…
External link:
http://arxiv.org/abs/2106.01199
Author:
Kumar, Vaibhav, Dhar, Mrinal, Khattar, Dhruv, Lal, Yash Kumar, Mishra, Abhimanshu, Shrivastava, Manish, Varma, Vasudeva
Published in:
"SWDE : A Sub-Word And Document Embedding Based Engine for Clickbait Detection". In Proceedings of SIGIR 2018 Workshop on Computational Surprise in Information Retrieval, Ann Arbor, MI, USA, July 8-12 (CompS'18, SIGIR), 4 pages
In order to expand their reach and increase website ad revenue, media outlets have started using clickbait techniques to lure readers to click on articles on their digital platform. Having successfully enticed the user to open the article, the article…
External link:
http://arxiv.org/abs/1808.00957
Published in:
"Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks". In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval 2018. Pages: 1225-1228
Online media outlets, in a bid to expand their reach and subsequently increase revenue through ad monetisation, have begun adopting clickbait techniques to lure readers to click on articles. The article fails to fulfill the promise made by the headline…
External link:
http://arxiv.org/abs/1710.01507