Showing 1 - 10 of 58 for search: '"Hazarika, Devamanyu"'
Self-anthropomorphism in robots manifests itself through their display of human-like characteristics in dialogue, such as expressing preferences and emotions. Our study systematically analyzes self-anthropomorphic expression within various dialogue d…
External link:
http://arxiv.org/abs/2410.03870
Author:
Jin, Di, Mehri, Shikib, Hazarika, Devamanyu, Padmakumar, Aishwarya, Lee, Sungjin, Liu, Yang, Namazifar, Mahdi
Learning from human feedback is a prominent technique to align the output of large language models (LLMs) with human expectations. Reinforcement learning from human feedback (RLHF) leverages human preference signals that are in the form of ranking of…
External link:
http://arxiv.org/abs/2311.14543
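The abstract above mentions RLHF training on human preference signals given as rankings. A common way to turn pairwise rankings into a training objective is the Bradley-Terry loss used for reward models; the sketch below is a minimal, self-contained illustration of that general idea, not an implementation from this particular paper, and all function names and toy scores are invented for the example.

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss for a reward model trained on human
    preference rankings: -log sigmoid(r_chosen - r_rejected)."""
    diff = np.asarray(reward_chosen, dtype=float) - np.asarray(reward_rejected, dtype=float)
    # log1p(exp(-diff)) == -log sigmoid(diff); log1p keeps small values stable
    return float(np.mean(np.log1p(np.exp(-diff))))

# Hypothetical reward scores: ranking the preferred response higher
# yields a lower loss than ranking it lower.
low = preference_loss([2.0, 1.5], [0.0, -0.5])
high = preference_loss([0.0, -0.5], [2.0, 1.5])
print(low < high)  # True
```

Minimizing this loss pushes the reward model to score human-preferred responses above rejected ones, which is the signal the RL step then optimizes against.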
Author:
Zhao, Chao, Gella, Spandana, Kim, Seokhwan, Jin, Di, Hazarika, Devamanyu, Papangelis, Alexandros, Hedayatnia, Behnam, Namazifar, Mahdi, Liu, Yang, Hakkani-Tur, Dilek
Task-oriented Dialogue (TOD) Systems aim to build dialogue systems that assist users in accomplishing specific goals, such as booking a hotel or a restaurant. Traditional TODs rely on domain-specific APIs/DBs or external factual knowledge to generate…
External link:
http://arxiv.org/abs/2305.12091
Author:
Xu, Yan, Namazifar, Mahdi, Hazarika, Devamanyu, Padmakumar, Aishwarya, Liu, Yang, Hakkani-Tür, Dilek
Large pre-trained language models (PLMs) have been shown to retain implicit knowledge within their parameters. To enhance this implicit knowledge, we propose Knowledge Injection into Language Models (KILM), a novel approach that injects entity-relate…
External link:
http://arxiv.org/abs/2302.09170
Dot-product attention is a core module in the present generation of neural network models, particularly transformers, and is being leveraged across numerous areas such as natural language processing and computer vision. This attention module is compr…
External link:
http://arxiv.org/abs/2302.08626
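For readers unfamiliar with the module this abstract refers to, a minimal sketch of standard scaled dot-product attention, softmax(QKᵀ/√d)·V, is shown below. This illustrates the baseline mechanism only, not whatever modification the linked paper proposes; the shapes and random inputs are arbitrary toy values.

```python
import numpy as np

def dot_product_attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 3 queries attending over 3 keys/values of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value rows, with mixing weights determined by query-key similarity.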
Author:
Lin, Yen-Ting, Papangelis, Alexandros, Kim, Seokhwan, Lee, Sungjin, Hazarika, Devamanyu, Namazifar, Mahdi, Jin, Di, Liu, Yang, Hakkani-Tur, Dilek
This work focuses on in-context data augmentation for intent detection. Having found that augmentation via in-context prompting of large pre-trained language models (PLMs) alone does not improve performance, we introduce a novel approach based on PLM…
External link:
http://arxiv.org/abs/2302.05096
Author:
Meade, Nicholas, Gella, Spandana, Hazarika, Devamanyu, Gupta, Prakhar, Jin, Di, Reddy, Siva, Liu, Yang, Hakkani-Tür, Dilek
While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, which often perp…
External link:
http://arxiv.org/abs/2302.00871
Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning. Using a large pre-trained language model (PLM), prefix-tuning can obtain strong performance by training only a small…
External link:
http://arxiv.org/abs/2210.14469
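The prefix-tuning paradigm this abstract describes trains only a small set of continuous prefix vectors while the PLM's weights stay frozen. The sketch below illustrates the core mechanism for a single attention layer, with trainable prefix vectors prepended to the keys and values; it is a simplified conceptual illustration under assumed toy dimensions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, seq_len, prefix_len = 8, 5, 3

# Frozen "pretrained" projection matrices (never updated during tuning).
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
W_v = rng.normal(size=(d, d))

# The only trainable parameters: continuous prefix vectors for keys/values.
P_k = rng.normal(size=(prefix_len, d))
P_v = rng.normal(size=(prefix_len, d))

def attention_with_prefix(x):
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    # Prepend the trainable prefix so every query also attends to it.
    K = np.concatenate([P_k, K])
    V = np.concatenate([P_v, V])
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

x = rng.normal(size=(seq_len, d))
out = attention_with_prefix(x)
print(out.shape)  # (5, 8)
```

Because gradients flow only into `P_k` and `P_v`, the trainable parameter count is `2 * prefix_len * d` per layer, a tiny fraction of the frozen PLM.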
Author:
Hazarika, Devamanyu, Li, Yingting, Cheng, Bo, Zhao, Shuai, Zimmermann, Roger, Poria, Soujanya
Building robust multimodal models is crucial for achieving reliable deployment in the wild. Despite its importance, less attention has been paid to identifying and improving the robustness of Multimodal Sentiment Analysis (MSA) models. In this work,…
External link:
http://arxiv.org/abs/2205.15465
Author:
Kashyap, Abhinav Ramesh, Hazarika, Devamanyu, Kan, Min-Yen, Zimmermann, Roger, Poria, Soujanya
Automatic transfer of text between domains has become popular in recent times. One of its aims is to preserve the semantic content of text being translated from source to target domain. However, it does not explicitly maintain other attributes betwee…
External link:
http://arxiv.org/abs/2205.04093