Multi-Parametric Fusion of 3D Power Doppler Ultrasound for Fetal Kidney Segmentation Using Fully Convolutional Neural Networks
Author: | Gordon N. Stevenson, Nipuna H. Weerasinghe, Nigel H. Lovell, Alec W. Welsh |
---|---|
Year of publication: | 2021 |
Subject: | Source code; Computer science; Normalization (image processing); Kidney; Convolutional neural network; Reduction (complexity); Health Information Management; Image Processing, Computer-Assisted; Humans; 3D ultrasound; Segmentation; Electrical and Electronic Engineering; Reproducibility of Results; Ultrasonography, Doppler; Pattern recognition; Image segmentation; Computer Science Applications; Hausdorff distance; Neural Networks, Computer; Artificial intelligence; Biotechnology |
Source: | IEEE Journal of Biomedical and Health Informatics. 25:2050-2057 |
ISSN: | 2168-2208, 2168-2194 |
DOI: | 10.1109/jbhi.2020.3027318 |
Description: | Kidney development is key to the long-term health of the fetus. Renal volume and vascularity assessed by 3D ultrasound (3D-US) are known markers of wellbeing; however, the lack of real-time image segmentation solutions precludes these measures from being used in a busy clinical environment. In this work, we aimed to automate kidney segmentation using fully convolutional neural networks (fCNNs). We used multi-parametric input fusion incorporating 3D B-mode and power Doppler (PD) volumes, aiming to improve segmentation accuracy. The performance of three different fusion strategies was assessed against a single-input (B-mode) network. Early input-level fusion provided the best segmentation accuracy, with an average Dice similarity coefficient (DSC) of 0.81 and Hausdorff distance (HD) of 8.96 mm, an improvement of 0.06 DSC and a reduction of 1.43 mm HD compared to our baseline network. Repeatability of all models against manual segmentation was assessed by intra-class correlation coefficients (ICC), indicating good to excellent reproducibility (ICC ≥ 0.93). The framework was extended to support multiple graphics processing units (GPUs) to better handle volumetric data, dense fCNN models, batch normalization and complex fusion networks. This work and the available source code provide a framework to increase the parameter space of encoder-decoder style fCNNs across multiple GPUs and show that using multi-parametric 3D-US in fCNN training improves segmentation accuracy. (A minimal illustrative sketch of input-level fusion appears after this record.) |
Database: | OpenAIRE |
External link: |
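
The description above refers to early input-level fusion of 3D B-mode and power Doppler volumes into an encoder-decoder fCNN, with segmentation accuracy reported as the Dice similarity coefficient. The following is a minimal sketch of that general idea only, assuming PyTorch; the `EarlyFusionUNet3D` class, its layer sizes, and the `dice_coefficient` helper are illustrative assumptions and do not reproduce the architecture or the source code released with the paper.

```python
# Hypothetical sketch: early (input-level) fusion of two 3D-US modalities
# by stacking them as input channels of a small 3D encoder-decoder fCNN.
import torch
import torch.nn as nn

class EarlyFusionUNet3D(nn.Module):
    def __init__(self, in_channels=2, base_filters=16):
        super().__init__()
        # Encoder: one downsampling stage (kept tiny for illustration)
        self.enc1 = nn.Sequential(
            nn.Conv3d(in_channels, base_filters, 3, padding=1),
            nn.BatchNorm3d(base_filters), nn.ReLU(inplace=True))
        self.down = nn.MaxPool3d(2)
        self.enc2 = nn.Sequential(
            nn.Conv3d(base_filters, base_filters * 2, 3, padding=1),
            nn.BatchNorm3d(base_filters * 2), nn.ReLU(inplace=True))
        # Decoder: upsample back to input resolution with a skip connection
        self.up = nn.ConvTranspose3d(base_filters * 2, base_filters, 2, stride=2)
        self.dec1 = nn.Sequential(
            nn.Conv3d(base_filters * 2, base_filters, 3, padding=1),
            nn.BatchNorm3d(base_filters), nn.ReLU(inplace=True))
        self.head = nn.Conv3d(base_filters, 1, 1)  # binary kidney-mask logits

    def forward(self, bmode, power_doppler):
        # Early fusion: concatenate the two modalities along the channel axis.
        x = torch.cat([bmode, power_doppler], dim=1)   # (N, 2, D, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example usage with dummy single-channel volumes (batch of 1, 32^3 voxels):
# net = EarlyFusionUNet3D()
# logits = net(torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32))
```

Intermediate- or late-fusion variants would instead merge modality-specific features deeper in the network; those alternative strategies are compared in the paper but are not sketched here.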