Showing 1 - 10 of 28 for search: '"Ramaneswaran S"'
Author:
Ghosh, Sreyan, Tyagi, Utkarsh, Kumar, Sonal, Evuru, C. K., Ramaneswaran, S, Sakshi, S, Manocha, Dinesh
We present ABEX, a novel and effective generative data augmentation methodology for low-resource Natural Language Understanding (NLU) tasks. ABEX is based on ABstract-and-EXpand, a novel paradigm for generating diverse forms of an input document -- w
External link:
http://arxiv.org/abs/2406.04286
Author:
Ghosh, Sreyan, Evuru, Chandra Kiran, Kumar, Sonal, Ramaneswaran, S, Sakshi, S, Tyagi, Utkarsh, Manocha, Dinesh
We present DALE, a novel and effective generative Data Augmentation framework for low-resource LEgal NLP. DALE addresses the challenges existing frameworks pose in generating effective data augmentations of legal documents - legal language, with its
External link:
http://arxiv.org/abs/2310.15799
Author:
Ghosh, Sreyan, Seth, Ashish, Kumar, Sonal, Tyagi, Utkarsh, Evuru, Chandra Kiran, Ramaneswaran, S., Sakshi, S., Nieto, Oriol, Duraiswami, Ramani, Manocha, Dinesh
A fundamental characteristic of audio is its compositional nature. Audio-language models (ALMs) trained using a contrastive approach (e.g., CLAP) that learns a shared representation between audio and language modalities have improved performance in m
External link:
http://arxiv.org/abs/2310.08753
ACLM: A Selective-Denoising based Generative Data Augmentation Approach for Low-Resource Complex NER
Complex Named Entity Recognition (NER) is the task of detecting linguistically complex named entities in low-context text. In this paper, we present ACLM (Attention-map aware keyword selection for Conditional Language Model fine-tuning), a novel data
External link:
http://arxiv.org/abs/2306.00928
Author:
Ghosh, Sreyan, Ramaneswaran, S, Tyagi, Utkarsh, Srivastava, Harshvardhan, Lepcha, Samden, Sakshi, S, Manocha, Dinesh
Expression of emotions is a crucial part of daily human communication. Emotion recognition in conversations (ERC) is an emerging field of study, where the primary task is to identify the emotion behind each utterance in a conversation. Though a lot o
External link:
http://arxiv.org/abs/2203.16799
In this paper, we propose MMER, a novel Multimodal Multi-task learning approach for Speech Emotion Recognition. MMER leverages a novel multimodal network based on early-fusion and cross-modal self-attention between text and acoustic modalities and so
External link:
http://arxiv.org/abs/2203.16794
Academic article
This result is not available to unauthenticated users. Sign in to view it.
Author:
Ramaneswaran, S.1 (AUTHOR), Srinivasan, Kathiravan2 (AUTHOR), Vincent, P. M. Durai Raj1 (AUTHOR), Chang, Chuan-Yu3 (AUTHOR)
Published in:
Computational & Mathematical Methods in Medicine, 24 July 2021, pp. 1-10.
Published in:
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages.
Conference
This result is not available to unauthenticated users. Sign in to view it.