Abstract:
Background: Due to technical constraints, dual-source dual-energy CT scans may lack spectral information in the periphery of the patient.

Purpose: Here, we propose a deep learning-based iterative reconstruction to recover the missing spectral information outside the field of measurement (FOM) of the second source-detector pair.

Methods: In today's Siemens dual-source CT systems, one source-detector pair (referred to as A) typically has a FOM of about 50 cm, while the FOM of the other pair (referred to as B) is limited by technical constraints to a diameter of about 35 cm. As a result, dual-energy applications are currently only available within the small FOM, limiting their use for larger patients. To derive a reconstruction at B's energy for the entire patient cross-section, we propose a deep learning-based iterative reconstruction. Starting with A's reconstruction as the initial estimate, it employs a neural network in each iteration to refine the current estimate according to a raw-data fidelity measure. The corresponding mapping is trained on simulated chest, abdomen, and pelvis scans derived from a data set of 70 full-body CT scans. Finally, the proposed approach is tested on simulated and measured dual-source dual-energy scans and compared against existing reference approaches.

Results: For all test cases, the proposed approach provided artifact-free CT reconstructions of B for the entire patient cross-section. For simulated data, the remaining error of the reconstructions is between 10 and 17 HU on average, about half that of the reference approaches. Similar performance, with an average error of 8 HU, was achieved for real phantom measurements.

Conclusions: The proposed approach is able to recover missing dual-energy information for patients exceeding the small 35 cm FOM of dual-source CT systems. It could therefore extend dual-energy applications to the entire patient cross-section.
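
The Methods describe an iterative loop in which a neural network refines the current estimate under a raw-data fidelity measure. The sketch below illustrates one plausible reading of that loop; it is not the authors' implementation. It assumes the fidelity term is formed by comparing B's measured projections (within its 35 cm FOM) with the forward projection of the current estimate, and that the backprojected residual is handed to the network together with the estimate. All identifiers (dl_iterative_recon, forward_op, back_op, fom_mask, network) are hypothetical.

```python
# Minimal sketch of the described scheme; all names and interfaces are
# hypothetical and not taken from the paper.
import torch


def dl_iterative_recon(recon_A, raw_data_B, forward_op, back_op, fom_mask,
                       network, n_iters=10):
    """Recover a full-FOM image at B's energy (sketch, not the authors' code).

    recon_A    -- image reconstructed from pair A (large ~50 cm FOM), used as start
    raw_data_B -- projections measured by pair B (limited ~35 cm FOM)
    forward_op -- forward projection into B's detector geometry
    back_op    -- matching backprojection
    fom_mask   -- mask selecting the rays actually measured by B
    network    -- trained CNN that refines the current estimate
    """
    x = recon_A.clone()  # initial estimate: A's reconstruction
    for _ in range(n_iters):
        # Raw-data fidelity: mismatch between the forward projection of the
        # current estimate and B's measured data, restricted to B's FOM.
        residual = fom_mask * (forward_op(x) - raw_data_B)
        correction = back_op(residual)  # bring the mismatch back to image space
        x = network(x, correction)      # network refines the estimate
    return x
```

Under these assumptions, the network only has to learn how to correct the image given explicit fidelity feedback, rather than estimating B's full-FOM image from A's reconstruction alone; other couplings between the network and the fidelity term are equally consistent with the abstract.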