Deep Learning Architectures for Medical Image Segmentation
Author: | Jayanthi K B, Ramani Kuchelar, C. Rajasekaran, Sudha Subramaniam |
Year of publication: | 2020 |
Subject: | Pixel, Computer science, Deep learning, Feature extraction, Pattern recognition, Image segmentation, Convolutional neural network, Region of interest, Segmentation, Artificial intelligence, Spatial analysis |
Source: | CBMS |
Description: | Medical image segmentation is a bottleneck for physicians and radiologists in the diagnosis of diseases. Deep-learning-based convolutional neural networks (CNNs) are used to support decision making in medical diagnosis. Three architectures are analyzed for segmentation of affected tissues. The first, a CNN with a contraction path, classifies each pixel in the image as belonging to the region of interest (RoI) or the region of non-interest; the affected region is then segmented from the RoI. The second architecture combines contraction and expansion paths (an auto-encoder) to extract the spatial information of the pixels from the input image. Its deconvolutional layers recover the spatial information of the corresponding features but still fail to capture context-dependent information in the high-level features. In the third architecture, an attention module combined with U-Net captures this context-dependent information. Filter size, learning rate, and k-fold cross-validation are tuned to improve accuracy and the Dice similarity coefficient (DSC): the filter size is varied over 3x3, 5x5, and 7x7, and cross-validation over 3, 5, and 10 folds. The attention module extracts the spatial information of the high-level features that relates them to the low-level features, which yields better segmentation output. U-Net with the attention module achieves an accuracy of 99.07%, sensitivity of 98.57%, specificity of 99.5%, and DSC of 91.7%. (Illustrative code sketches of the attention gate, the reported metrics, and the tuning sweep follow this record.) |
Database: | OpenAIRE |
External link: |
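
The description says the attention module lets U-Net relate high-level features back to low-level spatial detail. Below is a minimal sketch of an additive attention gate of the kind commonly used on U-Net skip connections; the layer names, channel arguments, and wiring are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an additive attention gate for U-Net skip connections.
# Channel sizes, names, and wiring are illustrative assumptions, not the
# paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Re-weights encoder features x using the decoder gating signal g."""
    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, g, x):
        # Project both inputs to a common space; upsample g to x's size.
        g1 = F.interpolate(self.w_g(g), size=x.shape[2:],
                           mode="bilinear", align_corners=False)
        x1 = self.w_x(x)
        # Additive attention: one coefficient in [0, 1] per spatial location.
        alpha = torch.sigmoid(self.psi(F.relu(g1 + x1)))
        # Suppress encoder activations outside the region of interest.
        return x * alpha
```

In a decoder step, the gated skip connection `x * alpha` would be concatenated with the upsampled decoder features before the next convolution block, which is how the gate injects context-dependent weighting into the low-level features.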
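The record reports accuracy, sensitivity, specificity, and DSC. The sketch below computes these from binary masks using the standard confusion-matrix formulas; the `eps` smoothing constant is an assumption added to avoid division by zero, not something stated in the abstract.

```python
# Sketch of the reported evaluation metrics computed from binary masks.
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """pred, target: binary arrays of the same shape (1 = region of interest)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)       # RoI pixels correctly labeled
    tn = np.sum(~pred & ~target)     # non-RoI pixels correctly labeled
    fp = np.sum(pred & ~target)      # non-RoI pixels labeled as RoI
    fn = np.sum(~pred & target)      # RoI pixels missed
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "dsc":         2 * tp / (2 * tp + fp + fn + eps),
    }
```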
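Finally, the abstract describes varying the filter size over 3x3, 5x5, and 7x7 and the cross-validation over 3, 5, and 10 folds. A plausible shape for that sweep is sketched below; `build_unet` and the sklearn-style `fit`/`score` wrapper are hypothetical placeholders, since the paper does not specify its training interface.

```python
# Sketch of the described hyperparameter sweep: filter sizes {3, 5, 7}
# crossed with {3, 5, 10}-fold cross-validation. build_unet() and the
# fit/score interface are hypothetical placeholders.
from sklearn.model_selection import KFold
import numpy as np

def sweep(images, masks, build_unet):
    results = {}
    for k_size in (3, 5, 7):            # convolution filter size (k x k)
        for n_folds in (3, 5, 10):      # number of cross-validation folds
            scores = []
            for train_idx, val_idx in KFold(n_splits=n_folds,
                                            shuffle=True).split(images):
                model = build_unet(filter_size=k_size)
                model.fit(images[train_idx], masks[train_idx])
                scores.append(model.score(images[val_idx], masks[val_idx]))
            results[(k_size, n_folds)] = np.mean(scores)
    return results
```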