Unsupervised Segmentation of 3D Microvascular Photoacoustic Images Using Deep Generative Learning.
Author: | Sweeney PW, Hacker L, Lefebvre TL, Brown EL, Gröhl J, Bohndiek SE (all authors: Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK; Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK) |
Language: | English |
Source: | Advanced Science (Weinheim, Baden-Württemberg, Germany) [Adv Sci (Weinh)] 2024 Aug; Vol. 11 (32), pp. e2402195. Date of electronic publication: 2024 Jun 23. |
DOI: | 10.1002/advs.202402195 |
Abstract: | Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging, because current methods are time-consuming and error-prone. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced, the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function. (© 2024 The Author(s). Advanced Science published by Wiley‐VCH GmbH.) |
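The abstract describes an unsupervised, cycle-consistent image-to-image translation setup: one generator maps real 3D photoacoustic volumes to vessel masks, while training also exploits synthetic vessel networks and a model of the imaging process. The sketch below is a minimal, illustrative rendering of that idea only, not the authors' VAN-GAN implementation: the network architectures, loss weights, and names (`Generator`, `Discriminator`, `train_step`, `lambda_cyc`) are assumptions, and the learned mask-to-image generator here stands in for the paper's physics-informed simulation of the PAI system.

```python
# Minimal cycle-consistency sketch for unsupervised 3D vessel segmentation (PyTorch).
# Illustrative only; architectures and loss weights are assumptions, not VAN-GAN's.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # 3D convolution + instance norm + ReLU, a common volumetric GAN building block
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.InstanceNorm3d(c_out),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    # Maps a 1-channel 3D volume to a 1-channel 3D volume
    # (PAI image -> vessel mask, or synthetic mask -> PAI-like image).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                                 nn.Conv3d(16, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # PatchGAN-style critic on 3D volumes.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                                 nn.Conv3d(16, 1, kernel_size=1))

    def forward(self, x):
        return self.net(x)

G_seg, G_img = Generator(), Generator()       # image->mask and mask->image generators
D_seg, D_img = Discriminator(), Discriminator()  # critics for each domain

opt_G = torch.optim.Adam(list(G_seg.parameters()) + list(G_img.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(list(D_seg.parameters()) + list(D_img.parameters()), lr=2e-4)
mse, l1 = nn.MSELoss(), nn.L1Loss()

def train_step(real_img, synth_mask, lambda_cyc=10.0):
    # real_img: unlabelled 3D photoacoustic volume; synth_mask: synthetic vessel network
    fake_mask = G_seg(real_img)    # predicted segmentation of the real image
    fake_img = G_img(synth_mask)   # simulated PAI-like image from the synthetic mask
    rec_img = G_img(fake_mask)     # cycle: image -> mask -> image
    rec_mask = G_seg(fake_img)     # cycle: mask -> image -> mask

    # Generator update: fool both critics and enforce cycle consistency
    p_mask, p_img = D_seg(fake_mask), D_img(fake_img)
    loss_G = (mse(p_mask, torch.ones_like(p_mask))
              + mse(p_img, torch.ones_like(p_img))
              + lambda_cyc * (l1(rec_img, real_img) + l1(rec_mask, synth_mask)))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Discriminator update: real vs. generated samples in each domain
    r_mask, f_mask = D_seg(synth_mask), D_seg(fake_mask.detach())
    r_img, f_img = D_img(real_img), D_img(fake_img.detach())
    loss_D = (mse(r_mask, torch.ones_like(r_mask)) + mse(f_mask, torch.zeros_like(f_mask))
              + mse(r_img, torch.ones_like(r_img)) + mse(f_img, torch.zeros_like(f_img)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    return loss_G.item(), loss_D.item()

# Example with random tensors standing in for a 3D PAI patch and a synthetic vessel mask
img = torch.rand(1, 1, 32, 32, 32)
mask = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()
train_step(img, mask)
```

Because the two domains are never paired, only cycle consistency and the adversarial critics link them; this is what lets segmentation be learned without human-annotated ground-truth labels, as the abstract states.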
Database: | MEDLINE |