ALBERT-based fine-tuning model for cyberbullying analysis

Authors: Jatin Karthik Tripathy, V. Vaidehi, Suresh Chandra Satapathy, Madhulika Sahoo, S. Sibi Chakkaravarthy
Year of publication: 2020
Source: Multimedia Systems. 28:1941-1949
ISSN: 1432-1882, 0942-4962
Description: As the world's interaction increasingly moves to online social media platforms, cyberbullying has become more prevalent. Cyberbullying takes multiple forms, from the common text-based variety to images and even videos; this paper focuses on textual comments. Even within text-based data alone, several approaches have already been explored, such as n-grams, recurrent units, convolutional neural networks (CNNs), gated recurrent units (GRUs), and combinations of these architectures. While all of these produce workable results, the central difficulty is that true contextual understanding is a complex problem, and these methods fall short for two reasons: (i) a lack of large datasets to properly exploit these architectures, and (ii) the fact that understanding context requires some mechanism for remembering history, which is present only in the recurrent units. This paper explores some recent approaches to the difficulties of contextual understanding and proposes an ALBERT-based fine-tuned model that achieves state-of-the-art results. ALBERT is a transformer-based architecture and thus, even in its untrained form, provides better contextual understanding than recurrent units. Moreover, because ALBERT is pre-trained on a large corpus, a smaller dataset suffices for fine-tuning, as the pre-trained model already captures much of the complexity of human language. ALBERT achieves high scores on multiple benchmarks, such as GLUE and SQuAD, showing that a high level of contextual understanding is inherently present; fine-tuning for the specific case of cyberbullying lets us exploit this. With this approach, we achieve an F1 score of 95%, which surpasses current approaches such as CNN + wordVec, CNN + GRU, and BERT implementations.
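
For illustration only, the following is a minimal sketch of the kind of ALBERT fine-tuning the abstract describes, using the Hugging Face transformers library. The paper does not publish its training code, so the checkpoint name, hyperparameters, and toy data below are assumptions, not the authors' actual setup.

# Hypothetical sketch: fine-tuning a pre-trained ALBERT checkpoint for
# binary cyberbullying classification. Model name, learning rate, epoch
# count and the toy comments are illustrative assumptions only.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AlbertTokenizerFast, AlbertForSequenceClassification

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

texts = ["you did a great job today", "nobody likes you, just leave"]  # toy comments
labels = torch.tensor([0, 1])  # 0 = benign, 1 = bullying

enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # small illustrative epoch count
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()  # cross-entropy loss from the classification head
        optimizer.step()

Because the heavy lifting is done by the pre-trained weights, only the small classification head and a light pass over the encoder need to adapt, which is why a comparatively small labelled dataset can suffice, as the abstract argues.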
Database: OpenAIRE