TextGram: Towards a better domain-adaptive pretraining

Author: Hiwarkhedkar, Sharayu; Mittal, Saloni; Magdum, Vidula; Dhekane, Omkar; Joshi, Raviraj; Kale, Geetanjali; Ladkat, Arnav
Year of publication: 2024
Subject:
Document type: Working Paper
DOI: 10.1007/978-3-031-58495-4_12
Description: For green AI, it is crucial to measure and reduce the carbon footprint emitted during the training of large language models. In NLP, pre-training Transformer models requires significant computational resources. Pre-training uses a large amount of text data to acquire prior knowledge for downstream tasks, so it is important to select the right domain-specific data from this vast corpus to achieve results aligned with our domain-specific tasks. While training on large unsupervised data is expensive, it can be optimized by performing a data selection step before pre-training. Selecting important data reduces the space overhead and the substantial time required to pre-train the model while maintaining accuracy. We investigate existing selection strategies and propose our own domain-adaptive data selection method, TextGram, which effectively selects essential data from large corpora. We compare and evaluate the results of fine-tuned models on a text classification task with and without data selection, and show that the proposed strategy outperforms other selection methods.
Comment: Accepted at SPELLL 2023
Database: arXiv
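
The description above centers on a data selection step before pre-training: score each document in a large general corpus by its similarity to a small in-domain seed set and keep only the best-scoring documents. The record does not spell out how TextGram computes that score, so the sketch below is only a generic, hypothetical n-gram-overlap scorer; all function names and the scoring rule are assumptions for illustration, not the authors' method.

# Hypothetical sketch of n-gram-overlap data selection before pre-training.
# NOT the paper's TextGram algorithm; names and scoring are illustrative only.
from collections import Counter

def ngrams(text, n=3):
    """Yield word n-grams from a whitespace-tokenized string."""
    tokens = text.lower().split()
    return zip(*(tokens[i:] for i in range(n)))

def build_domain_profile(seed_docs, n=3):
    """Collect n-gram counts from a small in-domain seed corpus."""
    profile = Counter()
    for doc in seed_docs:
        profile.update(ngrams(doc, n))
    return profile

def score_document(doc, profile, n=3):
    """Score a candidate by the fraction of its n-grams seen in the domain profile."""
    grams = list(ngrams(doc, n))
    if not grams:
        return 0.0
    hits = sum(1 for g in grams if g in profile)
    return hits / len(grams)

def select_top_k(candidates, seed_docs, k, n=3):
    """Keep the k candidates most similar to the in-domain profile for pre-training."""
    profile = build_domain_profile(seed_docs, n)
    ranked = sorted(candidates, key=lambda d: score_document(d, profile, n), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # Toy usage: pick the two general-corpus documents closest to a finance seed set.
    seed = ["stock prices fell sharply after the earnings call",
            "the central bank raised interest rates again"]
    corpus = ["the quarterly earnings call surprised analysts",
              "a recipe for sourdough bread with rye flour",
              "interest rates and stock prices moved together"]
    for doc in select_top_k(corpus, seed, k=2, n=2):
        print(doc)

In this kind of pipeline, only the selected subset would then be fed to Transformer pre-training, which is where the compute and carbon savings described in the abstract would come from.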