Showing 1 - 10 of 42 for search: '"Chaturvedi, Akshay"'
Author:
Chaturvedi, Akshay, Asher, Nicholas
In this paper, we study whether transformer-based language models can extract predicate-argument structure from simple sentences. We first show that language models sometimes confuse which predicates apply to which objects. To mitigate this, we exp…
External link:
http://arxiv.org/abs/2410.03203
This paper provides the first discourse parsing experiments with a large language model (LLM) fine-tuned on corpora annotated in the style of SDRT (Segmented Discourse Representation Theory; Asher, 1993; Asher and Lascarides, 2003). The result is a disc…
External link:
http://arxiv.org/abs/2406.18256
When engaging in collaborative tasks, humans efficiently exploit the semantic structure of a conversation to optimize verbal and nonverbal interactions. But in recent "language to code" or "language to action" models, this information is lacking. We…
External link:
http://arxiv.org/abs/2406.18164
With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks. The list of LLM successes is long and varied. Nevertheless, several recent…
External link:
http://arxiv.org/abs/2306.12213
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their represe…
External link:
http://arxiv.org/abs/2212.10696
Author:
Chaturvedi, Akshay (akshay91.isi@gmail.com); Bhar, Swarnadeep (swarnadeep.bhar@irit.fr); Saha, Soumadeep (soumadeep.saha97@gmail.com); Garain, Utpal (utpal@isical.ac.in); Asher, Nicholas (nicholas.asher@irit.fr)
Published in:
Computational Linguistics, March 2024, Vol. 50, Issue 1, pp. 119-155.
Many recent studies have shown that deep neural models are vulnerable to adversarial samples: images with imperceptible perturbations, for example, can fool image classifiers. In this paper, we present the first type-specific approach to generating a…
External link:
http://arxiv.org/abs/2006.03184
Neural machine translation (NMT) systems have been shown to give undesirable translations when a small change is made in the source sentence. In this paper, we study the behaviour of NMT systems when multiple changes are made to the source sentence. I…
External link:
http://arxiv.org/abs/1908.01165
Author:
Chaturvedi, Akshay, Garain, Utpal
Published in:
IEEE Transactions on Neural Networks and Learning Systems (2020)
At present, adversarial attacks are designed in a task-specific fashion. However, for downstream computer vision tasks such as image captioning, image segmentation, etc., current deep learning systems use an image classifier like VGG16, ResNet50, …
External link:
http://arxiv.org/abs/1906.04606
Author:
Gauraha, Niharika, Chaturvedi, Akshay
To estimate conditional probability functions based on the direct problem setting, a V-matrix based method was proposed. We construct V-matrix based constrained quadratic programming problems for which the inequality constraints are inconsistent. I…
External link:
http://arxiv.org/abs/1809.01706