Showing 1 - 10 of 78 for search: '"Joshi, Tarun"'
The advent of transformer-based architectures and large language models (LLMs) has significantly advanced the performance of natural language processing (NLP) models. Since these LLMs are trained on huge corpora of data from the web and other sources…
External link:
http://arxiv.org/abs/2408.00612
Recent work in behavioral testing for natural language processing (NLP) models, such as Checklist, is inspired by related paradigms in software engineering testing. These tests allow evaluation of general linguistic capabilities and domain understanding…
External link:
http://arxiv.org/abs/2408.00161
This paper surveys the current state of the art in document automation (DA). The objective of DA is to reduce the manual effort during the generation of documents by automatically creating and integrating input from different sources and assembling documents…
External link:
http://arxiv.org/abs/2308.09341
Paraphrase generation is a difficult problem. This is not only because of the limitations in text generation capabilities but also due to the lack of a proper definition of what qualifies as a paraphrase and corresponding metrics to measure how…
External link:
http://arxiv.org/abs/2205.13119
Recent years have seen a growing adoption of Transformer models such as BERT in natural language processing and even in computer vision. However, due to their size, there has been limited adoption of such models within resource-constrained computing…
External link:
http://arxiv.org/abs/2110.15225
This paper surveys the current state of the art in document automation (DA). The objective of DA is to reduce the manual effort during the generation of documents by automatically integrating input from different sources and assembling documents…
External link:
http://arxiv.org/abs/2109.11603
Published in:
In Physica B: Condensed Matter, 1 June 2024, 682
Deep learning models for natural language processing (NLP) are inherently complex and often viewed as black boxes. This paper develops an approach for interpreting convolutional neural networks for text classification problems by exploiting…
External link:
http://arxiv.org/abs/2105.08589
Authors:
Singh, Rahul, Jindal, Karan, Yu, Yufei, Yang, Hanyu, Joshi, Tarun, Campbell, Matthew A., Shoumaker, Wayne B.
This paper proposes a strategy to assess the robustness of different machine learning models that involve natural language processing (NLP). The overall approach relies upon a Search and Semantically Replace strategy that consists of two steps: (1) Search…
External link:
http://arxiv.org/abs/2104.09978
Grammar error handling (GEH) is an important topic in natural language processing (NLP). GEH includes both grammar error detection and grammar error correction. Recent advances in computation systems have promoted the use of deep learning (DL) models…
External link:
http://arxiv.org/abs/2009.02358