Author: |
Hu, Junjie; Yu, Chengrong; Zhu, Shengqian; Zhang, Haixian |
Source: |
International Journal of Intelligent Systems; 4/26/2024, Vol. 2024, p1-11, 11p |
Abstract: |
Precisely segmenting the organs at risk (OARs) in computed tomography (CT) plays an important role in radiotherapy treatment planning, helping to protect critical tissues during irradiation. Renowned deep convolutional neural networks (DCNNs) and prevailing transformer-based architectures are widely used for this segmentation task, showing advantages in capturing local and contextual characteristics. Graph convolutional networks (GCNs) are a further specialized model designed for processing non-grid data, e.g., citation networks. DCNNs and GCNs are thus regarded as two distinct models, applicable to grid and non-grid data, respectively. Motivated by the recently developed dynamic-channel GCN (DCGCN), which attempts to leverage graph structure to enhance the features extracted by DCNNs, this paper proposes a novel architecture, termed adaptive sparse GCN (ASGCN), to mitigate the inherent limitations of DCGCN in terms of node representation and adjacency matrix construction. For node representation, the global average pooling used in DCGCN is replaced by a learning mechanism to better suit the segmentation task. For the adjacency matrix, an adaptive regularization strategy penalizes its coefficients, yielding a sparse matrix that better exploits the relationships between nodes. Rigorous experiments on multiple OAR segmentation tasks of the head and neck demonstrate that the proposed ASGCN effectively improves segmentation accuracy. Comparisons between the proposed method and other prevalent architectures further confirm the superiority of the ASGCN. [ABSTRACT FROM AUTHOR] |
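The abstract mentions two mechanisms: learned node representations replacing global average pooling, and an adaptive regularization that sparsifies the adjacency matrix. The sketch below is only an illustration of how such a channel-graph block could be wired up in PyTorch; the class name `AdaptiveSparseGCNBlock`, the linear node projection, the ReLU-plus-L1 treatment of the adjacency coefficients, and all shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a channel-graph block that
# (a) learns node representations from CNN feature maps instead of using
# global average pooling, and (b) learns adjacency coefficients that a
# sparsity penalty can drive to zero. All names and shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveSparseGCNBlock(nn.Module):
    """Channel-graph block: learned node features + learnable sparse adjacency."""

    def __init__(self, channels: int, spatial_size: int, node_dim: int = 32):
        super().__init__()
        # Learned projection of each channel's spatial map to a node vector,
        # standing in for the "learning mechanism" that replaces global
        # average pooling (assumed form).
        self.node_proj = nn.Linear(spatial_size, node_dim)
        # Learnable non-negative adjacency coefficients over the channel graph,
        # initialized uniformly so gradients flow at the start of training.
        self.adj_coeff = nn.Parameter(torch.full((channels, channels), 1.0 / channels))
        self.gcn_weight = nn.Linear(node_dim, node_dim)
        self.readout = nn.Linear(node_dim, 1)

    def adjacency(self) -> torch.Tensor:
        # ReLU lets coefficients reach exactly zero, so the penalty below
        # can yield a genuinely sparse adjacency matrix.
        return F.relu(self.adj_coeff)

    def sparsity_penalty(self) -> torch.Tensor:
        # Illustrative L1-style penalty standing in for the paper's
        # "adaptive regularization" of the adjacency coefficients.
        return self.adjacency().mean()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) feature maps from a DCNN encoder.
        b, c, h, w = x.shape
        nodes = self.node_proj(x.flatten(2))                # (b, c, node_dim)
        adj = self.adjacency()
        adj = adj / (adj.sum(dim=-1, keepdim=True) + 1e-6)  # row-normalize
        nodes = F.relu(self.gcn_weight(adj @ nodes))        # one propagation step
        scale = torch.sigmoid(self.readout(nodes))          # (b, c, 1) channel weights
        return x * scale.unsqueeze(-1)                      # recalibrated feature maps


if __name__ == "__main__":
    block = AdaptiveSparseGCNBlock(channels=64, spatial_size=16 * 16)
    feats = torch.randn(2, 64, 16, 16)
    out = block(feats)
    # The sparsity penalty would be added to the segmentation loss during training.
    loss = out.mean() + 1e-3 * block.sparsity_penalty()
    print(out.shape, loss.item())
```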
Database: |
Complementary Index |
External link: |
|