Learning Local Feature Descriptions in 3D Ultrasound
Author: Svenja Ipsen, Daniel Wulff, Floris Ernst, Jannis Hagenah
Year of publication: 2020
Subject: Computer science, Deep learning, Pattern recognition, Autoencoder, Feature learning, Cross-correlation, 3D ultrasound, Artificial intelligence, Feature (computer vision), Identification (information), Modality (human–computer interaction), Nuclear medicine & medical imaging
Source: BIBE
DOI: 10.1109/bibe50027.2020.00059
Description: Tools for automatic image analysis are gaining importance in the clinical workflow, ranging from time-saving tools in diagnostics to real-time methods in image-guided interventions. Over the last years, ultrasound (US) imaging has become a promising modality for image guidance due to its ability to provide volumetric images of soft tissue in real time without ionizing radiation. One key challenge in automatic US image analysis is the identification of suitable features to describe the image or regions within it, e.g. for recognition, alignment or tracking tasks. In recent years, features learned in a data-driven manner have yielded promising results. Even though these approaches outperformed hand-crafted feature extractors in many applications, feature learning for the local description of three-dimensional US (3DUS) images is still lacking. In this work, we present a completely data-driven feature learning approach for 3DUS images for use in target tracking. To this end, we use a 3D convolutional autoencoder (AE) with a custom loss function to encode 3DUS image patches into a compact latent space that serves as a general feature description. For evaluation, we trained and tested the proposed architecture on 3DUS images of the liver and prostate of five different subjects and assessed the similarity between the decoded patches and the original ones. Subject- and organ-specific as well as general AEs were trained and evaluated. Specific AEs reconstructed patches with a maximum mean normalized cross-correlation of 0.85 in the liver and 0.81 in the prostate. The AEs were also shown to be transferable across subjects and organs, with only a small accuracy decrease to 0.83 and 0.81 (liver, prostate) for general AEs. In addition, a first tracking study demonstrated the feasibility of tracking in latent space. In summary, we showed that it is possible to train an AE that is transferable across two target regions and several subjects. Hence, convolutional AEs are a promising approach for creating a general feature extractor for 3DUS.
Database: OpenAIRE
External link:
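
The description above refers to a 3D convolutional autoencoder with a custom loss that encodes 3DUS patches into a compact latent space, but the record does not spell out the architecture or the loss. The following PyTorch sketch is an assumption-based illustration of such a patch autoencoder, paired with an NCC-style reconstruction loss inspired by the evaluation metric mentioned in the abstract; patch size (32³), channel counts, and latent dimension are placeholders, not the authors' values.

```python
# Minimal sketch of a 3D convolutional autoencoder for ultrasound patches.
# All hyper-parameters here are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class Patch3DAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: three strided 3D convolutions reduce a 32^3 patch to 4^3.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(inplace=True),
        )
        self.to_latent = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 64 * 4 * 4 * 4)
        # Decoder mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),
        )

    def encode(self, x):
        # x: (N, 1, 32, 32, 32) -> latent descriptor (N, latent_dim)
        h = self.encoder(x)
        return self.to_latent(h.flatten(start_dim=1))

    def forward(self, x):
        z = self.encode(x)
        h = self.from_latent(z).view(-1, 64, 4, 4, 4)
        return self.decoder(h), z


def ncc_loss(recon, target, eps=1e-8):
    """1 - normalized cross-correlation, averaged over the batch (assumed loss)."""
    r = recon.flatten(start_dim=1)
    t = target.flatten(start_dim=1)
    r = r - r.mean(dim=1, keepdim=True)
    t = t - t.mean(dim=1, keepdim=True)
    ncc = (r * t).sum(dim=1) / (r.norm(dim=1) * t.norm(dim=1) + eps)
    return (1.0 - ncc).mean()
```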
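The tracking study is described only as "tracking in latent space". One plausible reading, sketched below purely for illustration with the hypothetical model above, is template matching on latent descriptors: encode a reference patch and candidate patches from a new volume, then select the candidate whose descriptor is most similar to the reference.

```python
# Hypothetical latent-space template matching; not the authors' tracking pipeline.
import torch


@torch.no_grad()
def track_in_latent_space(model, template_patch, candidate_patches):
    """template_patch: (1, 1, D, H, W); candidate_patches: (N, 1, D, H, W)."""
    model.eval()
    z_ref = model.encode(template_patch)      # (1, latent_dim)
    z_cand = model.encode(candidate_patches)  # (N, latent_dim)
    # Cosine similarity between the reference descriptor and every candidate.
    sims = torch.nn.functional.cosine_similarity(z_cand, z_ref, dim=1)
    best = int(torch.argmax(sims))
    return best, sims
```

In such a scheme the target position in the next volume would be taken from the sampling location of the best-matching candidate patch; whether the paper uses cosine similarity, Euclidean distance, or another latent comparison is not stated in the record.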