Showing 1 - 3 of 3 for search: '"Taware, Rutuja"'
Author:
Sharma, Mandar, Taware, Rutuja Murlidhar, Koirala, Pravesh, Muralidhar, Nikhil, Ramakrishnan, Naren
Off-the-shelf pre-trained language models have become the de facto standard in NLP pipelines for a multitude of downstream tasks. However, the inability of these models to properly encode numerals limits their performance on tasks requiring numeric c…
External link:
http://arxiv.org/abs/2404.01536
Author:
Taware, Rutuja Murlidhar
Usage of language models in an in-context learning environment has been adapted for a wide range of tasks. Recent works have showcased the impact of pretraining data on the in-context performance of language models. In this work, we experiment with n…
External link:
http://hdl.handle.net/10919/115712
Author:
Taware, Rutuja, Varat, Shraddha, Salunke, Gaurav, Gawande, Chaitanya, Kale, Geetanjali, Khengare, Rahul, Joshi, Raviraj
Text classification is the most basic natural language processing task. It has a wide range of applications ranging from sentiment analysis to topic classification. Recently, deep learning approaches based on CNN, LSTM, and Transformers have been the…
External link:
http://arxiv.org/abs/2102.00238