Abstract: |
Social bots have become increasingly prominent in Online Social Networks, imitating human behavior and raising concerns about their deceptive capabilities. With advances in Generative AI, these bots are now able to generate highly realistic and complex content, making detection a significant challenge. While various bot detection approaches exist, their effectiveness has not been thoroughly evaluated. In this study, we examine the behavior of a text-based bot detector across three key scenarios: adversarial interactions between bots and detectors, attack examples generated by bots to poison datasets, and cross-domain analysis of different types of bots. Our findings show that detection performance varies significantly across bot types. Models trained on commercial bots struggle to detect attack samples accurately, while models trained on political and financial bots perform better. Furthermore, weak discriminators in the detection model can lead to issues like mode collapse, which can be addressed by employing techniques such as autoencoders, Energy-based GANs, or stronger cost functions. This analysis highlights the need to better understand feature importance in bot detection models, as well as to refine pre-processing steps to improve detection accuracy across different social bot domains.