Showing 1 - 7
of 7
for search: '"Siddhant Garg"'
Published in:
AAAI
We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality da
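The two-stage "transfer then adapt" recipe described in this abstract can be sketched in pure Python. This is a minimal illustration of the control flow only; the `fine_tune` placeholder and dataset names are assumptions, not the paper's code:

```python
# Sketch of TANDA-style sequential fine-tuning (illustrative only).
def fine_tune(model, dataset):
    # Placeholder "training step": record which dataset the model was tuned on.
    model["history"].append(dataset["name"])
    return model

pretrained = {"history": []}

# Stage 1 (transfer): fine-tune on a large, high-quality general-task dataset.
model = fine_tune(pretrained, {"name": "general-task"})

# Stage 2 (adapt): fine-tune the transferred model on the small target-task data.
model = fine_tune(model, {"name": "target-task"})
```

The point of the sketch is the ordering: the target task only ever sees a model that has already been specialized to the general task.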
Published in:
CIKM
Large datasets in NLP suffer from noisy labels, due to erroneous automatic and human annotation procedures. We study the problem of text classification with label noise, and aim to capture this noise through an auxiliary noise model over the classifi
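The auxiliary-noise-model idea — composing the classifier's clean-label distribution with a label-transition matrix to predict the observed noisy labels — can be sketched as follows. The toy transition matrix `T` is an assumption for illustration, not a value from the paper:

```python
# p_clean[j] = classifier's probability of clean label j given the input.
# T[i][j]    = probability that clean label j is observed as noisy label i
#              (each column of T sums to 1).
def noisy_label_prob(p_clean, T):
    # p(noisy = i | x) = sum_j T[i][j] * p(clean = j | x)
    return [
        sum(T[i][j] * p_clean[j] for j in range(len(p_clean)))
        for i in range(len(T))
    ]

p_clean = [0.9, 0.1]
T = [[0.8, 0.3],
     [0.2, 0.7]]
p_noisy = noisy_label_prob(p_clean, T)  # distribution over observed labels
```

Training against `p_noisy` lets the noise be absorbed by `T` while the underlying classifier keeps modeling the clean labels.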
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::be36c244f1ed823be56b8c3285587b35
http://arxiv.org/abs/2101.11214
Published in:
CIKM
Adversarial machine learning has exposed several security hazards of neural models and has become an important research topic in recent times. Thus far, the concept of an "adversarial perturbation" has exclusively been used with reference to the inpu
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e2f84135a0fe39b691169a6ac57ff9d2
http://arxiv.org/abs/2008.01761
Author:
Siddhant Garg, Goutham Ramakrishnan
Published in:
EMNLP (1)
Modern text classification models are susceptible to adversarial examples, perturbed versions of the original text indiscernible by humans which get misclassified by the model. Recent works in NLP use rule-based synonym replacement strategies to gene
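The rule-based synonym-replacement strategy mentioned here can be sketched with a toy lexicon: each candidate adversarial example swaps one word for a listed synonym. The `SYNONYMS` table is an assumption for illustration; real attacks use much larger lexicons plus a check that the victim model's prediction flips:

```python
# Toy synonym lexicon (assumption); real attacks use WordNet-scale resources.
SYNONYMS = {"great": ["fine", "superb"], "movie": ["film"]}

def synonym_candidates(sentence):
    # Generate perturbed sentences by replacing exactly one word
    # with one of its listed synonyms.
    words = sentence.split()
    out = []
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            out.append(" ".join(words[:i] + [s] + words[i + 1:]))
    return out

cands = synonym_candidates("great movie")
```

An attack would then keep only the candidates the target classifier mislabels while a human reader still perceives the original meaning.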
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::48dcdf4706d50ee7b6195b5090ada9a6
Published in:
IVCNZ
Convolutional Neural Networks with Adaptive Inference Graphs (ConvNet-AIG) use adaptive network topologies through on/off gating on network layers for individual images to achieve improved computational efficiency and classification accuracy. Face re
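The per-input on/off gating that ConvNet-AIG applies to network layers can be sketched as a loop that consults a gate before executing each layer. The toy layers and the gating rule below are assumptions for illustration (in the actual architecture the gate is a small learned network):

```python
def run_gated(x, layers, gate):
    # Execute only the layers the gate switches on for this particular input.
    executed = []
    for name, f in layers:
        if gate(x, name):
            x = f(x)
            executed.append(name)
    return x, executed

layers = [("inc", lambda v: v + 1), ("dbl", lambda v: v * 2)]

# Toy gate (assumption): skip the "expensive" dbl layer for small activations.
gate = lambda x, name: not (name == "dbl" and x < 5)

y_small, used_small = run_gated(2, layers, gate)    # dbl is skipped
y_large, used_large = run_gated(10, layers, gate)   # all layers run
```

Because easy inputs traverse fewer layers, average compute drops while hard inputs still get the full network.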
Published in:
EMNLP
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus
Published in:
Advances in Intelligent Systems and Computing ISBN: 9788132227328
A typical web user poses short and vague queries to web-based search engines, which increases the time needed for query formulation. In this paper, a nature-inspired optimization approach over a term graph is employed in order to provide query suggestion
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::2262936c133ba8df77d454ab05577dfe
https://doi.org/10.1007/978-81-322-2734-2_46