Automatic orientation cues for intuitive immersive interrogation of 3D echocardiographic images in virtual reality using deep learning
Authors: Lindsay Munroe, En-Ju D. Lin, Alberto Gomez, Julia A. Schnabel, Gina Sajith, Gavin Wheeler, Kuberan Pushparajah, Suryava Bhattacharya, John M. Simpson, Shujie Deng
Year of publication: 2021
Subjects: Orientation (computer vision); Deep learning; Left atrium; General Medicine; Virtual reality; Convolutional neural network; Human–computer interaction; Right atrium; Radiology, Nuclear Medicine and Imaging; Artificial intelligence; Cardiology and Cardiovascular Medicine; Interrogation
Source: European Heart Journal - Cardiovascular Imaging, vol. 22
ISSN: 2047-2412, 2047-2404
DOI: 10.1093/ehjci/jeaa356.407
Description:

Funding acknowledgements: Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart Project; Wellcome/EPSRC Centre for Medical Engineering (WT 203148/Z/16/Z), on behalf of the 3D Heart Project.

Background/Introduction: In echocardiography (echo), image orientation is determined by the position and direction of the transducer during the examination, unlike cardiovascular imaging modalities such as CT or MRI. As a result, when echo images are first displayed their orientation carries no external anatomical landmarks, so the user must identify anatomical landmarks within the regions of interest to work out the orientation.

Purpose: To display an anatomical model of a standard heart, automatically aligned to an acquired patient's 3D echo image, assisting image interpretation by quickly orienting the viewer.

Methods: 47 echo datasets from 13 paediatric patients with hypoplastic left heart syndrome (HLHS) were annotated by manually indicating the cardiac axes in both end-systolic (ES) and end-diastolic (ED) volumes. We chose a view akin to the standard four-chamber view in healthy hearts as the reference view, showing the atrioventricular (AV) valves, the right atrium, the left atrium and the hypoplastic ventricle. We then trained a deep convolutional neural network (CNN) to predict the rotation required to re-orient a volume to the reference view. Three data strategies were explored: 1) using the full 3D image to estimate orientation, 2) using three orthogonal slices only (2.5D approach), and 3) using the central slice only (2D approach). Three algorithms were investigated: 1) an orientation classifier, 2) an orientation regressor with mean absolute angle error loss, and 3) an orientation regressor with geodesic loss. The data were split into training, validation and test sets in an 8:1:1 ratio. The training data were augmented by applying random rotations in the range [−10°, +10°] and updating the labels accordingly. The model with the smallest validation error was applied in tandem with the VR visualisation of the echo volumes. (Illustrative code sketches of these components appear after this record.)

Results: Experimental results suggest that a 2.5D CNN classifying discrete integer angles performs best at re-orienting volumetric images to the reference view, with a mean absolute angle error of 9.0° on the test set (test-set errors for the other algorithms ranged from 10.8° to 25.9°). As shown in Figure 1, an HLHS volume (left) is automatically aligned with the cardiac model (right) by our trained network when loaded in VR; both the volume and the model are cropped at the reference plane.

Conclusion: A deep learning network that aligns 3D echo images to a reference view was successfully trained and integrated into VR to re-orient echo volumes to match a standard anatomical view. This work demonstrates the potential of combining artificial intelligence and VR in medical imaging, although a further user study is needed to evaluate its clinical impact.

Caption for abstract picture: The VR user interface informs the user of the 3D echo image orientation by automatically aligning it with an anatomical model, here showing the four-chamber apical view. Abstract Figure: deep learning model integrated into VR.
Database: OpenAIRE
External link:
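The sketches below illustrate, in Python, how components described in the Methods could be implemented. They are minimal sketches under stated assumptions, not the authors' code. First, the 2.5D data strategy: feeding the network the three central orthogonal slices of the volume. This assumes the volume has been resampled to a cube so the slices share one shape; the function name is illustrative.

```python
import numpy as np

def central_orthogonal_slices(volume: np.ndarray) -> np.ndarray:
    """Stack the three central orthogonal slices of a cubic 3D volume.

    Assumes volume.shape == (N, N, N) so the slices can be stacked
    as a 3-channel 2D image for a standard 2D CNN.
    """
    d, h, w = volume.shape
    axial = volume[d // 2, :, :]      # central slice along the first axis
    coronal = volume[:, h // 2, :]    # central slice along the second axis
    sagittal = volume[:, :, w // 2]   # central slice along the third axis
    return np.stack([axial, coronal, sagittal], axis=0)  # shape (3, N, N)
```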
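Second, the augmentation step: a random rotation of up to ±10° per axis with a matching label update. This sketch assumes labels are 3×3 rotation matrices mapping the acquired volume to the reference view; the composition `R_label @ A.T` follows because rotating the volume by A must be undone before the original label applies.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def augment(volume: np.ndarray, R_label: np.ndarray, max_deg: float = 10.0):
    """Rotate a volume by random angles in [-max_deg, +max_deg] per axis
    and return the rotated volume with the correspondingly updated label."""
    angles = np.random.uniform(-max_deg, max_deg, size=3)
    A = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    centre = (np.asarray(volume.shape) - 1) / 2.0
    # affine_transform pulls values: output[x] = volume[M @ x + offset],
    # so M = A.T rotates the image content by A about the volume centre.
    M = A.T
    offset = centre - M @ centre
    rotated = affine_transform(volume, M, offset=offset, order=1)
    # If R_label maps the original volume to the reference view, the rotated
    # volume needs R_label @ A^-1 = R_label @ A.T (A is orthogonal).
    return rotated, R_label @ A.T
```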
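Third, the best-performing model was a 2.5D CNN classifying discrete integer angles. The architecture below is entirely hypothetical (the abstract specifies neither layer counts nor the angle discretisation); it only shows the general shape of such a classifier: a shared 2D feature extractor over the stacked slices and one classification head per rotation axis.

```python
import torch
import torch.nn as nn

class OrientationClassifier(nn.Module):
    """Hypothetical 2.5D orientation classifier: three orthogonal slices in,
    logits over discrete integer degrees out, one head per rotation axis."""
    def __init__(self, n_angles: int = 360):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(nn.Linear(64, n_angles) for _ in range(3))

    def forward(self, x: torch.Tensor):  # x: (B, 3, H, W) stacked slices
        feats = self.features(x)
        return [head(feats) for head in self.heads]  # logits per axis
```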
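Fourth, the geodesic loss used by one of the regressors. The geodesic distance between two rotations is the angle of the relative rotation, arccos((trace(R_predᵀ R_true) − 1)/2). A PyTorch sketch (the framework is an assumption):

```python
import torch

def geodesic_loss(R_pred: torch.Tensor, R_true: torch.Tensor) -> torch.Tensor:
    """Mean geodesic distance in radians between batches of (B, 3, 3)
    rotation matrices: the angle of the relative rotation R_pred^T R_true."""
    R_rel = torch.bmm(R_pred.transpose(1, 2), R_true)
    trace = R_rel.diagonal(dim1=1, dim2=2).sum(-1)
    # trace(R) = 1 + 2 cos(theta); clamping guards acos against rounding.
    cos_theta = ((trace - 1.0) / 2.0).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    return torch.acos(cos_theta).mean()
```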
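Finally, the reported metric: mean absolute angle error (9.0° for the best model). This sketch wraps differences so that, e.g., 359° vs 1° counts as 2°; whether the original evaluation wrapped angles this way is an assumption.

```python
import numpy as np

def mean_abs_angle_error(pred_deg: np.ndarray, true_deg: np.ndarray) -> float:
    """Mean absolute angular difference in degrees, wrapped to [0, 180]."""
    diff = np.abs(pred_deg - true_deg) % 360.0
    return float(np.minimum(diff, 360.0 - diff).mean())
```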