Showing 1 - 10 of 23 272 results for the search: '"NLU"'
Real-time conversational AI agents face challenges in performing Natural Language Understanding (NLU) in dynamic, outdoor environments like automated drive-thru systems. These settings require NLU models to handle background noise, diverse accents, a…
External link:
http://arxiv.org/abs/2411.15372
BioMistral-NLU: Towards More Generalizable Medical Language Understanding through Instruction Tuning
Author:
Fu, Yujuan Velvin, Ramachandran, Giridhar Kaushik, Park, Namu, Lybarger, Kevin, Xia, Fei, Uzuner, Ozlem, Yetisgen, Meliha
Large language models (LLMs) such as ChatGPT are fine-tuned on large and diverse instruction-following corpora, and can generalize to new tasks. However, those instruction-tuned LLMs often perform poorly in specialized medical natural language understanding…
External link:
http://arxiv.org/abs/2410.18955
Text generation is the automated process of producing written or spoken language using computational methods. It involves generating coherent and contextually relevant text based on predefined rules or learned patterns. However, challenges in text generation…
External link:
http://arxiv.org/abs/2410.13498
This paper explores hate speech detection in Devanagari-scripted languages, focusing on Hindi and Nepali, for Subtask B of the CHIPSAL@COLING 2025 Shared Task. Using a range of transformer-based models such as XLM-RoBERTa, MURIL, and IndicBERT, we ex…
External link:
http://arxiv.org/abs/2412.08163
Author:
Purbey, Jebish, Pullakhandam, Siddartha, Mehreen, Kanwal, Arham, Muhammad, Sharma, Drishti, Srivastava, Ashay, Kadiyala, Ram Mohan Rao
This paper presents a detailed system description of our entry for the CHiPSAL 2025 shared task, focusing on language detection, hate speech identification, and target detection in Devanagari script languages. We experimented with a combination of la…
External link:
http://arxiv.org/abs/2411.06850
Author:
Liu, Chengyuan, Wang, Shihang, Zhao, Fubang, Kuang, Kun, Kang, Yangyang, Lu, Weiming, Sun, Changlong, Wu, Fei
Information Extraction (IE) and Text Classification (CLS) serve as the fundamental pillars of NLU, with both disciplines relying on analyzing input sequences to categorize outputs into pre-established schemas. However, there is no existing encoder-based…
External link:
http://arxiv.org/abs/2409.05275
Detecting biases in natural language understanding (NLU) for African American Vernacular English (AAVE) is crucial to developing inclusive natural language processing (NLP) systems. To address dialect-induced performance discrepancies, we introduce A…
External link:
http://arxiv.org/abs/2408.14845
Although Large Language Models (LLMs) can generate coherent and contextually relevant text, they often struggle to recognise the intent behind the human user's query. Natural Language Understanding (NLU) models, however, interpret the purpose and key…
External link:
http://arxiv.org/abs/2408.08144
Author:
SadraeiJavaheri, MohammadAli, Moghaddaszadeh, Ali, Molazadeh, Milad, Naeiji, Fariba, Aghababaloo, Farnaz, Rafiee, Hamideh, Amirmahani, Zahra, Abedini, Tohid, Sheikhi, Fatemeh Zahra, Salehoof, Amirmohammad
The field of natural language processing (NLP) has seen remarkable advancements, thanks to the power of deep learning and foundation models. Language models, and specifically BERT, have been key players in this progress. In this study, we trained and…
External link:
http://arxiv.org/abs/2407.16382
This paper explores the performance of encoder and decoder language models on multilingual Natural Language Understanding (NLU) tasks, with a broad focus on Germanic languages. Building upon the ScandEval benchmark, which was initially restricted to…
External link:
http://arxiv.org/abs/2406.13469