Abstract: |
Brain tumors are characterized by the rapid growth of abnormal brain cells, which poses a considerable health risk and can result in severe organ malfunction or even death. Magnetic resonance imaging (MRI) provides vital information for understanding the nature of brain tumors, guiding treatment strategies, and improving diagnostic precision; it captures the diversity and heterogeneity of brain tumors in size, texture, and location. However, manually identifying brain tumors is a difficult, time-consuming process that is prone to error. An enhanced You Only Look Once version 8 (YOLOv8) model is proposed to mitigate the drawbacks of manual tumor detection and improve the accuracy of brain tumor detection. The model employs the C2f_DySnakeConv module to improve the perception and discrimination of tumors. It also integrates Content-Aware ReAssembly of FEatures (CARAFE) to efficiently expand the network's receptive field and incorporate more global contextual information, and Efficient Multi-Scale Attention (EMA) to improve the network's sensitivity and resolution for lesion features. Experimental results show that the improved model outperforms both the open-source model and the original YOLOv8 model for brain tumor detection. On the brain tumor image dataset, it exceeds the original YOLOv8 model in precision, recall, mAP@0.5, and mAP@0.5:0.95 by 2.71%, 2.34%, 2.24%, and 3.73%, respectively.
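For context on the reported metrics: mAP@0.5 counts a predicted box as a true positive when its Intersection-over-Union (IoU) with a ground-truth box is at least 0.5, while mAP@0.5:0.95 averages mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05. The IoU computation that underlies both can be sketched as follows (an illustrative sketch, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Under mAP@0.5, a detection with IoU of 0.6 against its matched ground truth counts as correct; under the stricter thresholds folded into mAP@0.5:0.95 it may not, which is why gains on that metric (the reported 3.73%) indicate tighter localization.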