Abstract:
Fake news is a growing challenge for social networks and media. Fake-news detection has been a problem for many years, but the evolution of social networks and the increasing speed of news dissemination have recently renewed interest in it. There are several approaches to this problem, one of which is to detect fake news from its text style using deep neural networks. In recent years, transfer learning with transformers has become one of the most widely used deep-learning techniques for natural language processing. BERT is one of the most promising transformers, outperforming other models on many NLP benchmarks. In this article, we introduce MWPBert, which uses two parallel BERT networks to perform veracity detection on full-text news articles. One BERT network encodes the news headline, and the other encodes the news body. Because the input length of a BERT network is limited and fixed, and the news body is usually a long text, we cannot feed the whole text into BERT. We therefore use the MaxWorth algorithm to select the part of the news text that is most valuable for fact-checking and feed that part into the BERT network. Finally, we pass the outputs of the two BERT networks to an output network that classifies the news. Experimental results show that the proposed model outperforms previous models in accuracy and other performance measures.
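The abstract does not specify how the MaxWorth algorithm scores or selects text, so the following is only a minimal illustrative sketch of the general idea it describes: choosing the sentences most valuable for fact-checking while staying within BERT's fixed input budget (512 tokens in the standard configuration). The greedy strategy, the `select_for_budget` name, and the check-worthiness scores are all assumptions for illustration, not the paper's actual method.

```python
def select_for_budget(sentences, scores, budget=512, count_tokens=None):
    """Greedily keep the highest-scoring sentences that fit a token budget,
    then restore document order.

    `scores` is assumed to come from some check-worthiness model -- a
    hypothetical stand-in for the paper's MaxWorth scoring, which is not
    described in the abstract.
    """
    if count_tokens is None:
        # Crude whitespace tokenizer; a real pipeline would use BERT's
        # WordPiece tokenizer to count subword tokens.
        count_tokens = lambda s: len(s.split())

    # Rank sentence indices from most to least check-worthy.
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)

    chosen, used = [], 0
    for i in ranked:
        cost = count_tokens(sentences[i])
        if used + cost <= budget:
            chosen.append(i)
            used += cost

    # Restore the original reading order of the selected sentences.
    return [sentences[i] for i in sorted(chosen)]
```

In such a setup, the selected sentences would be concatenated and fed to the body-side BERT encoder, while the headline goes to the second, parallel encoder; the two encodings are then combined by the classification head described in the abstract.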