Efficient Fine-Tuning of BERT Models on the Edge

Author: Vucetic, Danilo, Tayaranian, Mohammadreza, Ziaeefard, Maryam, Clark, James J., Meyer, Brett H., Gross, Warren J.
Publication year: 2022
Subject:
Document type: Working Paper
DOI: 10.1109/ISCAS48785.2022.9937567
Description: Resource-constrained devices are increasingly the deployment targets of machine learning applications. Static models, however, do not always suffice for dynamic environments. On-device training of models allows for quick adaptability to new scenarios. With the increasing size of deep neural networks, as seen with BERT and other natural language processing models, come increased resource requirements, namely memory, computation, energy, and time. Furthermore, training is far more resource-intensive than inference. Resource-constrained on-device learning is thus doubly difficult, especially with large BERT-like models. By reducing the memory usage of fine-tuning, pre-trained BERT models can become efficient enough to fine-tune on resource-constrained devices. We propose Freeze And Reconfigure (FAR), a memory-efficient training regime for BERT-like models that reduces the memory usage of activation maps during fine-tuning by avoiding unnecessary parameter updates. FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset by 30%, and time spent on memory operations by 47%. More broadly, reductions in metric performance on the GLUE and SQuAD datasets are around 1% on average.
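To make the freezing idea concrete, the sketch below shows a generic way to freeze a subset of DistilBERT's feed-forward (FFN) parameters before fine-tuning with PyTorch and Hugging Face Transformers. This is a minimal, hypothetical illustration of parameter freezing in general, not the paper's FAR scheme: FAR selects individual FFN nodes to freeze and reconfigures the network, which is not reproduced here. The `freeze_fraction` knob and the per-layer heuristic are assumptions made purely for illustration.

```python
# Illustrative sketch only: generic FFN parameter freezing for DistilBERT
# fine-tuning. This is NOT the paper's FAR method, which freezes individual
# FFN nodes dynamically and reconfigures the network around them.
import torch
from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # e.g. a binary task such as CoLA
)

# Hypothetical knob: freeze the FFN weights in the first half of the layers.
# Frozen weights receive no updates, so their input activations need not be
# stored for weight-gradient computation during the backward pass.
freeze_fraction = 0.5
num_frozen_layers = int(freeze_fraction * model.config.n_layers)

for name, param in model.named_parameters():
    if ".ffn." in name:
        # Parameter names look like "distilbert.transformer.layer.0.ffn.lin1.weight".
        layer_idx = int(name.split("layer.")[1].split(".")[0])
        if layer_idx < num_frozen_layers:
            param.requires_grad = False

# Only the remaining trainable parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```

This coarse layer-level freezing only approximates the memory savings reported for FAR; the paper's node-level freeze-and-reconfigure strategy is what yields the quoted fine-tuning-time and memory-operation reductions.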
Comment: 4 pages, 2 figures, 3 tables. To be published in ISCAS 2022 and made available on IEEE Xplore
Database: arXiv