Description: |
Online platforms foster social interaction, but this has also given rise to antisocial behaviors such as cyberbullying, trolling, and hate speech on a global scale. Detecting hate and aggression has therefore become a vital part of combating cyberbullying and cyberharassment. Cyberbullying involves the use of aggressive and offensive language, including rude, insulting, hateful, and teasing comments, to harm individuals on social media platforms. Human moderation is both slow and expensive, making it impractical in the face of rapidly growing data, so automatic detection systems are essential to curb such behavior effectively. This research addresses the challenge of automatically identifying cyberbullying in tweets from a publicly available cyberbullying dataset. It employs the robustly optimized bidirectional encoder representations from transformers approach (RoBERTa) together with global vectors for word representation (GloVe) word embedding features. The proposed approach is further compared with state-of-the-art machine learning, deep learning, and transformer-based approaches using FastText word embeddings. Statistical results demonstrate that the proposed model outperforms the others, achieving 95% accuracy in detecting cyberbullying tweets, along with 95% precision, 97% recall, and a 96% F1 score. Results from k-fold cross-validation further confirm the superiority of the proposed model, with a mean accuracy of 95.07%.
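The sketch below illustrates the kind of evaluation pipeline the abstract describes: fine-tuning RoBERTa on a labelled tweet dataset and reporting accuracy, precision, recall, and F1. It is a minimal example, not the authors' implementation; the file name, column names, and hyperparameters are assumptions, and the GloVe feature fusion mentioned in the abstract is omitted in favor of plain RoBERTa fine-tuning.

```python
# Minimal sketch: fine-tune RoBERTa for cyberbullying tweet classification and
# compute accuracy, precision, recall, and F1 on a held-out split.
# Assumptions (not from the source): CSV file "cyberbullying_tweets.csv" with
# columns "tweet_text" and "label"; all hyperparameters are illustrative.
import pandas as pd
import torch
from torch.utils.data import Dataset
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (RobertaTokenizerFast, RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

class TweetDataset(Dataset):
    """Wraps tokenized tweets and integer labels for the HuggingFace Trainer."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Load the (hypothetical) dataset and encode string labels as integers.
df = pd.read_csv("cyberbullying_tweets.csv")
texts = df["tweet_text"].tolist()
labels = pd.factorize(df["label"])[0].tolist()
X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(set(labels)))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=TweetDataset(X_tr, y_tr, tokenizer),
)
trainer.train()

# Evaluate on the held-out split with the metrics reported in the abstract.
preds = trainer.predict(TweetDataset(X_te, y_te, tokenizer)).predictions.argmax(axis=-1)
prec, rec, f1, _ = precision_recall_fscore_support(y_te, preds, average="weighted")
print(f"accuracy={accuracy_score(y_te, preds):.3f} "
      f"precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

For the k-fold cross-validation result, the same train/evaluate loop would be repeated over folds produced by sklearn's StratifiedKFold and the per-fold accuracies averaged.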