Showing 1 - 9 of 9 for search: '"Yenigalla, Promod"'
Author:
Shrimal, Anubhav, Kanagaraj, Stanley, Biswas, Kriti, Raghuraman, Swarnalatha, Nediyanchath, Anish, Zhang, Yi, Yenigalla, Promod
Large language model advancements have enabled the development of multi-agent frameworks to tackle complex, real-world problems, such as automating tasks that require interactions with diverse tools, reasoning, and human collaboration. We present MAR…
External link:
http://arxiv.org/abs/2410.21784
Summarizing customer feedback to provide actionable insights for products/services at scale is an important problem for businesses across industries. Lately, review volumes have been increasing across regions and languages; therefore, the challenge of a…
External link:
http://arxiv.org/abs/2410.09991
Author:
Mukku, Sandeep Sricharan, Soni, Manan, Rana, Jitenkumar, Aggarwal, Chetan, Yenigalla, Promod, Patange, Rashmi, Mohan, Shyam
We propose InsightNet, a novel approach for the automated extraction of structured insights from customer reviews. Our end-to-end machine learning framework is designed to overcome the limitations of current solutions, including the absence of struct…
External link:
http://arxiv.org/abs/2405.07195
NER has traditionally been formulated as a sequence labeling task. However, there has been a recent trend of posing NER as a machine reading comprehension task (Wang et al., 2020; Mengge et al., 2020), where the entity name (or other information) is consid… (an illustrative sketch of this reading-comprehension framing follows the link below)
External link:
http://arxiv.org/abs/2205.05904
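The machine reading comprehension framing mentioned above can be illustrated with an off-the-shelf extractive question-answering model: each entity type becomes a natural-language question, and the predicted answer span is taken as the entity. This is a minimal sketch of the general idea, not the model from the paper; the example sentence, the questions, and the default QA checkpoint are assumptions.

    from transformers import pipeline

    # Off-the-shelf extractive QA model (defaults to a SQuAD-tuned checkpoint).
    qa = pipeline("question-answering")

    text = "Maria Jones joined Acme Robotics in Seattle last year."
    # One hand-written question per entity type (illustrative only).
    queries = {
        "PER": "Which person is mentioned in the text?",
        "ORG": "Which organization is mentioned in the text?",
    }

    for label, question in queries.items():
        pred = qa(question=question, context=text)
        print(label, pred["answer"], round(pred["score"], 3))

Note that a plain QA pipeline returns a single span per question, whereas MRC-style NER models extract all matching spans; the sketch only shows how the query/context interface maps onto entity extraction.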
The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research. Current architectures account only for semantic and contextual information for a given query and fail to…
External link:
http://arxiv.org/abs/1912.10160
This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features, trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition o… (a minimal sketch of this joint supervision follows the link below)
External link:
http://arxiv.org/abs/1906.08873
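A minimal PyTorch sketch of the joint supervision described above: a standard cross-entropy (softmax) loss on the logits plus a center loss term that pulls each embedding toward a learnable per-class center. The feature dimension, class count, and weighting factor lam are placeholder assumptions, not values from the paper.

    import torch
    import torch.nn as nn

    class CenterLoss(nn.Module):
        """Penalizes the distance between each embedding and its class center."""
        def __init__(self, num_classes, feat_dim):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, features, labels):
            diff = features - self.centers[labels]           # (batch, feat_dim)
            return 0.5 * (diff ** 2).sum(dim=1).mean()

    num_classes, feat_dim, lam = 4, 128, 0.01                # assumed values
    ce_loss = nn.CrossEntropyLoss()
    center_loss = CenterLoss(num_classes, feat_dim)

    features = torch.randn(8, feat_dim, requires_grad=True)  # CNN embeddings
    logits = torch.randn(8, num_classes, requires_grad=True) # classifier output
    labels = torch.randint(0, num_classes, (8,))

    # Joint supervision: softmax loss + lambda * center loss.
    loss = ce_loss(logits, labels) + lam * center_loss(features, labels)
    loss.backward()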
This paper proposes a Residual Convolutional Neural Network (ResNet) based on speech features and trained under Focal Loss to recognize emotion in speech. Speech features such as Spectrogram and Mel-frequency Cepstral Coefficients (MFCCs) have shown… (a minimal Focal Loss sketch follows the link below)
External link:
http://arxiv.org/abs/1906.05682
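Focal Loss down-weights well-classified examples so training focuses on harder ones. A minimal multi-class PyTorch sketch is below; the gamma and alpha values are the commonly used defaults from Lin et al. (2017), not necessarily the settings used in the paper.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
        # Per-sample cross-entropy, i.e. -log p_t for the true class.
        ce = F.cross_entropy(logits, targets, reduction="none")
        p_t = torch.exp(-ce)                      # probability of the true class
        return (alpha * (1.0 - p_t) ** gamma * ce).mean()

    logits = torch.randn(8, 4, requires_grad=True)   # e.g. 4 emotion classes
    targets = torch.randint(0, 4, (8,))
    focal_loss(logits, targets).backward()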
This paper proposes a speech emotion recognition method based on speech features and speech transcriptions (text). Speech features such as Spectrogram and Mel-frequency Cepstral Coefficients (MFCC) help retain emotion-related low-level characteristic… (a simple fusion sketch follows the link below)
External link:
http://arxiv.org/abs/1906.05681
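The combination of acoustic features and transcription text can be sketched as a simple late-fusion classifier: one branch consumes an MFCC-derived vector, another consumes a text embedding, and the concatenated representations feed a shared classifier. Layer sizes, input dimensions, and the fusion strategy here are assumptions for illustration, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class BimodalEmotionClassifier(nn.Module):
        def __init__(self, n_mfcc=40, text_dim=300, n_classes=4):
            super().__init__()
            self.audio_branch = nn.Sequential(nn.Linear(n_mfcc, 64), nn.ReLU())
            self.text_branch = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU())
            self.classifier = nn.Linear(128, n_classes)      # 64 + 64 fused

        def forward(self, mfcc_vec, text_vec):
            fused = torch.cat([self.audio_branch(mfcc_vec),
                               self.text_branch(text_vec)], dim=-1)
            return self.classifier(fused)

    model = BimodalEmotionClassifier()
    logits = model(torch.randn(2, 40), torch.randn(2, 300))  # dummy MFCC/text inputs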
Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target… (a generic fine-tuning sketch follows the link below)
External link:
http://arxiv.org/abs/1801.06480
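A generic transfer-learning recipe matching the description above: take a model pre-trained on a large source dataset, freeze its feature extractor, and retrain only a new head on the small target dataset. The ResNet-18 backbone, target class count, and learning rate are placeholders, not details from the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pre-trained source model (torchvision >= 0.13 weights API).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False                   # keep source knowledge frozen

    # Replace the classification head for the target task (5 classes assumed).
    backbone.fc = nn.Linear(backbone.fc.in_features, 5)
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)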