On Improving Breast Density Segmentation Using Conditional Generative Adversarial Networks
Authors: Saffari N, Rashwan HA, Herrera B, Romani S, Arenas M, Puig D
Contributors: Universitat Rovira i Virgili
Publication year: 2018
Subjects: Artificial intelligence; Generative adversarial networks; Deep learning; Breast cancer; Breast density estimation; Mammograms; Skin and connective tissue diseases; Information and documentation; Communication and information; Medicine II; Agricultural Sciences I; Engineering III; Engineering IV; Interdisciplinary; General or multidisciplinary
Source: Frontiers in Artificial Intelligence and Applications, vol. 308, pp. 386-393
DOI: 10.3233/978-1-61499-918-8-386
Abstract: © 2018 The authors and IOS Press. Breast density is a crucial factor for following up breast cancer relapse in mammograms and for assessing the risk of local recurrence after conservative surgery and/or radiotherapy. Accurate breast density estimation by visual assessment remains challenging because of faint contrast and significant variation in the background fatty tissue of mammograms. The key to breast density estimation is properly detecting the dense tissues in a mammographic image. This paper therefore presents an automatic deep breast density segmentation method based on conditional Generative Adversarial Networks (cGAN), which consist of two successive deep networks: a generator and a discriminator. The generator network learns the mapping from the input mammogram to the output binary mask delineating the dense-tissue area. In turn, the discriminator learns a loss function to train this mapping by comparing the ground-truth and the predicted masks while observing the input mammogram as a condition. The performance of the proposed model was evaluated on the public INbreast mammographic dataset. The proposed model segments the dense regions with an overall recall, precision and F-score of about 95%, 92% and 93%, respectively, outperforming state-of-the-art breast density segmentation methods. The model can segment more than 40 images of size 512×512 per second on a recent GPU.
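The abstract describes a conditional GAN in which the generator maps a mammogram to a binary dense-tissue mask and the discriminator scores (mammogram, mask) pairs, with the mammogram acting as the condition. The sketch below is a minimal, hypothetical PyTorch illustration of that setup, not the authors' implementation: the layer sizes, loss weighting `lam`, and the choice of `nn.BCEWithLogitsLoss`/`nn.BCELoss` are assumptions made only to show the conditioning and the alternating discriminator/generator updates.

```python
# Minimal sketch of a cGAN for dense-tissue segmentation (illustrative only;
# architectures and hyper-parameters are assumptions, not the paper's exact setup).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 1-channel mammogram to a 1-channel dense-tissue probability mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),   # mask values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (mammogram, mask) pairs; the mammogram is the conditioning input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),          # patch-level real/fake logits
        )

    def forward(self, image, mask):
        # Condition on the mammogram by concatenating it with the mask.
        return self.net(torch.cat([image, mask], dim=1))

def training_step(G, D, opt_g, opt_d, image, true_mask,
                  adv_loss=nn.BCEWithLogitsLoss(), seg_loss=nn.BCELoss(), lam=100.0):
    # --- discriminator: real (image, ground-truth mask) vs. generated pairs ---
    fake_mask = G(image).detach()
    d_real = D(image, true_mask)
    d_fake = D(image, fake_mask)
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool the discriminator and match the ground-truth mask ---
    fake_mask = G(image)
    d_fake = D(image, fake_mask)
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lam * seg_loss(fake_mask, true_mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

A typical usage would loop `training_step` over mini-batches of mammograms and ground-truth masks with Adam optimizers for both networks; the paper's actual generator/discriminator architectures and training details may differ from this sketch.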
Database: OpenAIRE
External link: