Utilizing BERT Pretrained Models with Various Fine-Tune Methods for Subjectivity Detection

Authors: Hairong Huo, Mizuho Iwaihara
Year of publication: 2020
Subject:
Source: Web and Big Data, ISBN: 9783030602895
APWeb/WAIM (2)
DOI: 10.1007/978-3-030-60290-1_21
Description: As an essential antecedent task of sentiment analysis, subjectivity detection refers to classifying sentences as either subjective, containing opinions, or objective and neutral, without bias. In situations where impartial language is required, such as Wikipedia, subjectivity detection can play an important part. Recently, pretrained language models have proven effective in learning representations, substantially boosting performance on several NLP tasks. As a state-of-the-art pretrained model, BERT is trained on large unlabeled data with masked word prediction and next sentence prediction tasks. In this paper, we mainly explore utilizing BERT pretrained models with several combinations of fine-tuning methods, with the aim of enhancing performance on the subjectivity detection task. Our experimental results reveal that optimal combinations of fine-tuning and multi-task learning surpass the state of the art on subjectivity detection and related tasks.
Database: OpenAIRE
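As a rough illustration of the setup the abstract describes, fine-tuning a pretrained BERT model as a binary sentence classifier (subjective vs. objective) could look like the following minimal sketch using the Hugging Face transformers library. This is not the authors' released code; the model checkpoint, example sentences, and hyperparameters are illustrative assumptions, and the paper's specific fine-tuning combinations and multi-task learning are not shown.

```python
# Minimal sketch: fine-tuning BERT for binary subjectivity detection.
# Checkpoint, toy sentences, and hyperparameters are assumptions, not the paper's setup.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy data: label 1 = subjective (opinionated), 0 = objective/neutral.
sentences = ["The film is a dazzling triumph.", "The film was released in 2019."]
labels = torch.tensor([1, 0])

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

# A few gradient steps over the toy batch stand in for full fine-tuning.
model.train()
for _ in range(3):
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: predicted class per sentence (1 = subjective, 0 = objective).
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```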