Description: |
Computed tomography (CT) was the first modern slice-imaging modality. Recent years have witnessed its widespread application and steady improvement in detecting and diagnosing lesions. Nonetheless, lesion detection in CT images faces several difficulties: (1) image quality degrades as the radiation dose is reduced to limit radiation injury to the patient; (2) image quality is frequently hampered by noise; (3) owing to the complicated conditions of diseased tissue, lesion images typically exhibit complex shapes; (4) the contrast between the target object and the background is often weak. To tackle these challenges, this paper proposes a symmetry GAN detection network built on a one-stage detection network. The experiments employ the DeepLesion dataset, which contains 10,594 CT scans (studies) of 4,427 unique patients. The proposed symmetry GANs consist of two distinct GAN models serving different functions: a generative model is introduced ahead of the backbone to augment the input CT image series, addressing the small-sample-size problem typical of medical datasets, and a second GAN is added to the attention extraction module to generate attention masks. Experimental results indicate that this strategy significantly improves the model's robustness. The proposed method reaches 0.9720, 0.9858, and 0.9833 in precision (P), recall (R), and mean average precision (mAP) on the validation set, outperforming all comparison models. Beyond this, inspired by ResNet's innovation in network depth, we propose parallel multi-activation functions, an optimization method in network width. We prove theoretically that, by assigning a coefficient to each base activation function and applying a softmax over all coefficients, a parallel multi-activation function can express any single base activation function, an ability other activation schemes lack. With this addition, our model outperforms all comparison models, achieving 0.9737, 0.9845, and 0.9841 in P, R, and mAP. We further encapsulate the model in an iOS application to make it more practical to deploy. The proposed model also won second prize in the 2021 Chinese Collegiate Computing Competition.
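The parallel multi-activation idea described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch implementation assuming ReLU, tanh, and sigmoid as the base activations; the class name, the choice of base functions, and the zero-initialized coefficients are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ParallelMultiActivation(nn.Module):
    """Hypothetical sketch: each base activation f_i gets a learnable
    coefficient, and the output is a softmax-weighted sum of all f_i(x)."""

    def __init__(self, activations=None):
        super().__init__()
        # Base activations are an assumption; the abstract does not name them.
        self.activations = activations or [torch.relu, torch.tanh, torch.sigmoid]
        # One learnable coefficient per base activation.
        self.coeffs = nn.Parameter(torch.zeros(len(self.activations)))

    def forward(self, x):
        # Softmax turns the raw coefficients into a convex weighting.
        weights = torch.softmax(self.coeffs, dim=0)
        return sum(w * f(x) for w, f in zip(weights, self.activations))

layer = ParallelMultiActivation()
print(layer(torch.randn(2, 3)).shape)  # torch.Size([2, 3])
```

Because the softmax weights can concentrate almost all mass on a single coefficient, the combined function can recover any one base activation as a limiting case, which is the expressiveness property the abstract claims.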