Utilizing BERT Pretrained Models with Various Fine-Tune Methods for Subjectivity Detection
Author: | Hairong Huo, Mizuho Iwaihara |
---|---|
Year of publication: | 2020 |
Subject: | Subjectivity; Boosting (machine learning); Sentiment analysis; Multi-task learning; Antecedent (grammar); Language model; Artificial intelligence; Natural language processing; Sentence; Computer science; languages & linguistics; electrical engineering, electronic engineering, information engineering; artificial intelligence & image processing; psychology and cognitive sciences |
Source: | Web and Big Data (APWeb-WAIM 2020, Part 2), ISBN 9783030602895 |
DOI: | 10.1007/978-3-030-60290-1_21 |
Description: | As an essential antecedent task of sentiment analysis, subjectivity detection refers to classifying sentences as subjective ones containing opinions, or as objective, neutral ones without bias. In situations where impartial language is required, such as on Wikipedia, subjectivity detection can play an important part. Recently, pretrained language models have proven effective at learning representations, substantially boosting performance across several NLP tasks. As a state-of-the-art pretrained model, BERT is trained on large unlabeled corpora with masked word prediction and next sentence prediction tasks. In this paper, we explore utilizing BERT pretrained models with several combinations of fine-tuning methods, with the aim of enhancing performance on the subjectivity detection task. Our experimental results reveal that optimal combinations of fine-tuning and multi-task learning surpass the state of the art on subjectivity detection and related tasks. |
Database: | OpenAIRE |
External link: |
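
The description above frames subjectivity detection as binary sentence classification (subjective vs. objective) via fine-tuning a pretrained BERT model. Below is a minimal sketch of such a setup using the Hugging Face `transformers` library; it is not the authors' code, and the model name, example sentences, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tuning BERT for binary subjectivity classification.
# Assumptions (not from the paper): bert-base-uncased, toy sentences, lr=2e-5.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy training data: label 1 = subjective (opinionated), 0 = objective/neutral.
sentences = [
    "The plot is a tired, predictable mess.",       # subjective
    "The film was released in theaters in 2019.",   # objective
]
labels = torch.tensor([1, 0])

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few illustrative fine-tuning steps
    outputs = model(**batch, labels=labels)  # loss computed against labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: predict whether a new sentence is subjective.
model.eval()
with torch.no_grad():
    test = tokenizer(["Wikipedia requires a neutral point of view."], return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1).item()
print("subjective" if pred == 1 else "objective")
```

The paper additionally combines such fine-tuning with other strategies, including multi-task learning with related tasks; this sketch covers only the basic single-task classification setup.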