Abstract: |
In this work, we compare the performance of a machine learning framework based on a support vector machine (SVM) with fastText embeddings against a Deep Learning framework consisting of fine-tuning Large Language Models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT), DistilBERT, and Twitter roBERTa Base, to automate the classification of text data for analyzing the country image of Mexico in selected data sources, described using 18 different classes derived from International Relations theory. To train each model, we use a data set of tweets from relevant selected Twitter accounts and news headlines from The New York Times, based on an initial manual classification of all the entries. However, the data set suffers from imbalanced classes and scarce data. Thus, a series of text augmentation techniques are explored: gradual augmentation of the eight least represented classes and a uniform augmentation of the whole data set. We also study the impact of hashtags, user names, stopwords, and emojis as additional text features for the SVM model. The experimental results indicate that the SVM reacts negatively to all the data augmentation proposals, while the Deep Learning framework shows small benefits from them. The best result, a weighted-average F1-score of 52.92%, is obtained by fine-tuning the Twitter roBERTa Base model without data augmentation.
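The evaluation metric reported above, the weighted-average F1-score, can be illustrated with a minimal sketch using scikit-learn. The class labels and predictions below are invented for illustration only (the actual work uses 18 classes over tweets and headlines):

```python
from sklearn.metrics import f1_score

# Toy ground truth and predictions over three invented classes
# (illustration only; not the 18 classes of the actual study).
y_true = ["economy", "politics", "culture", "economy", "politics", "economy"]
y_pred = ["economy", "politics", "culture", "economy", "economy", "economy"]

# The weighted average computes F1 per class, then weights each class's
# score by its support (number of true instances), which is why it is a
# common choice for imbalanced data sets like the one described here.
score = f1_score(y_true, y_pred, average="weighted")
print(f"weighted-average F1: {score:.4f}")  # prints: weighted-average F1: 0.8175
```

Because each per-class F1 is weighted by support, a frequent class classified well can mask poor performance on rare classes, which is one reason the imbalanced-class problem matters for this metric.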