The CrowdGleason dataset: Learning the Gleason grade from crowds and experts.

Authors: López-Pérez M; Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Spain. Electronic address: mlopper3@upv.es., Morquecho A; Department of Computer Science and Artificial Intelligence, Universidad de Granada, Granada, Spain. Electronic address: e.amorq@go.ugr.es., Schmidt A; Department of Computer Science and Artificial Intelligence, Universidad de Granada, Granada, Spain. Electronic address: arne@decsai.ugr.es., Pérez-Bueno F; Basque Center on Cognition, Brain and Language, Donostia - San Sebastián, Spain. Electronic address: fperezbueno@bcbl.eu., Martín-Castro A; Department of Pathology, Virgen de las Nieves University Hospital, 18014 Granada, Spain. Electronic address: amartincastro@ugr.es., Mateos J; Department of Computer Science and Artificial Intelligence, Universidad de Granada, Granada, Spain. Electronic address: jmd@decsai.ugr.es., Molina R; Department of Computer Science and Artificial Intelligence, Universidad de Granada, Granada, Spain. Electronic address: rms@decsai.ugr.es.
Language: English
Source: Computer Methods and Programs in Biomedicine [Comput Methods Programs Biomed] 2024 Dec; Vol. 257, pp. 108472. Date of Electronic Publication: 2024 Oct 28.
DOI: 10.1016/j.cmpb.2024.108472
Abstract: Background: Currently, prostate cancer (PCa) diagnosis relies on the human analysis of prostate biopsy Whole Slide Images (WSIs) using the Gleason score. Since this process is error-prone and time-consuming, recent advances in machine learning have promoted the use of automated systems to assist pathologists. Unfortunately, labeled datasets for training and validation are scarce due to the need for expert pathologists to provide ground-truth labels.
Methods: This work introduces a new prostate histopathological dataset named CrowdGleason, which consists of 19,077 patches from 1045 WSIs with various Gleason grades. The dataset was annotated using a crowdsourcing protocol involving seven pathologists-in-training to distribute the labeling effort. To provide a baseline analysis, two crowdsourcing methods based on Gaussian Processes (GPs) were evaluated for Gleason grade prediction: SVGPCR, which learns a model from the CrowdGleason dataset, and SVGPMIX, which combines data from the public dataset SICAPv2 and the CrowdGleason dataset. The performance of these methods was compared with other crowdsourcing and expert label-based methods through comprehensive experiments.
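Note: The following is a minimal, illustrative sketch of the SVGP-with-majority-voting baseline mentioned above, not the authors' SVGPCR or SVGPMIX implementation. The library choice (GPflow), the synthetic data, and all variable names are assumptions made purely for illustration.

import numpy as np
import gpflow

def majority_vote(crowd_labels):
    # Aggregate per-patch annotations (N x A integer array, -1 = missing) by majority vote.
    aggregated = []
    for row in crowd_labels:
        votes = row[row >= 0]
        aggregated.append(np.bincount(votes).argmax())
    return np.asarray(aggregated, dtype=np.float64).reshape(-1, 1)

# Toy stand-ins: X would be precomputed patch features (e.g., CNN embeddings) and
# crowd_labels the Gleason-grade annotations from the seven pathologists-in-training.
rng = np.random.default_rng(0)
N, D, C, A = 200, 16, 4, 7                # patches, feature dim, classes, annotators
X = rng.normal(size=(N, D))
crowd_labels = rng.integers(0, C, size=(N, A))
Y = majority_vote(crowd_labels)

# Sparse variational GP classifier (the SVGP baseline) trained on the aggregated labels.
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.RBF(lengthscales=np.ones(D)),
    likelihood=gpflow.likelihoods.MultiClass(C),
    inducing_variable=X[:50].copy(),       # inducing points initialised from the data
    num_latent_gps=C,
)
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)),
    model.trainable_variables,
    options={"maxiter": 100},
)

probs, _ = model.predict_y(X)              # per-class predictive probabilities
y_pred = np.argmax(probs.numpy(), axis=1)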
Results: The results demonstrate that our GP-based crowdsourcing approach outperforms other methods for aggregating crowdsourced labels (κ=0.7048±0.0207 for SVGPCR vs. κ=0.6576±0.0086 for SVGP with majority voting). SVGPCR trained with crowdsourced labels performs better than GP trained with expert labels from SICAPv2 (κ=0.6583±0.0220) and outperforms most individual pathologists-in-training (mean κ=0.5432). Additionally, SVGPMIX trained with a combination of SICAPv2 and CrowdGleason achieves the highest performance on both datasets (κ=0.7814±0.0083 and κ=0.7276±0.0260).
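Note: The κ values above compare predicted and reference Gleason grades with Cohen's kappa. The short sketch below uses scikit-learn with quadratic weighting, which is common in Gleason grading but is an assumption here, since the abstract does not state the weighting scheme; the labels are illustrative only.

from sklearn.metrics import cohen_kappa_score

# Illustrative grades only (0 = non-cancerous, 1-3 = Gleason grades 3-5, an assumed encoding).
y_ref  = [0, 1, 2, 3, 2, 1, 0, 3]
y_pred = [0, 1, 2, 2, 2, 1, 0, 3]
kappa = cohen_kappa_score(y_ref, y_pred, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.4f}")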
Conclusion: The experiments show that the CrowdGleason dataset can be successfully used for training and validating supervised and crowdsourcing methods. Furthermore, the crowdsourcing methods trained on this dataset obtain competitive results against those using expert labels. Interestingly, combining expert and non-expert labels opens the door to future large-scale annotation efforts that involve both expert and non-expert pathologist annotators.
Competing Interests: Declaration of competing interest This statement is to certify that all Authors have seen and approved the manuscript being submitted. We warrant that the article is the Authors’ original work. We warrant that the article has not received prior publication and is not under consideration for publication elsewhere. On behalf of all Co-Authors, the corresponding Author shall bear full responsibility for the submission. This research has not been submitted for publication nor has it been published in whole or in part elsewhere. We attest to the fact that all Authors listed on the title page have contributed significantly to the work, have read the manuscript, attest to the validity and legitimacy of the data and its interpretation, and agree to its submission to the Journal of Computer Methods and Programs in Biomedicine.
(Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
Database: MEDLINE