Improved Barrett's cancer detection in volumetric laser endomicroscopy scans using multiple-frame voting

Authors: Rikos, A., van der Sommen, F., Zinger, S., de With, P.H.N., Curvers, W.L., Schoon, E.J. (Erik), Swager, A.-F., Bergman, J.J. (Jacques), Bamidis, Panagiotis D., Konstantinidis, Stathis Th., Rodrigues, Pedro Pereira
Contributors: Electrical Engineering, Video Coding & Architectures, Biomedical Diagnostics Lab
Language: English
Year of publication: 2017
Subject:
Source: CBMS
30th IEEE International Conference on Computer-Based Medical Systems (CBMS), 20-22 June 2017, Thessaloniki, Greece, pp. 708-713
Description: This paper explores the feasibility of using multi-frame analysis to increase the classification performance of machine learning methods for cancer detection in Volumetric Laser Endomicroscopy (VLE). VLE is a novel and promising modality for the detection of neoplasia in patients with Barrett's Esophagus (BE). It produces hundreds of high-resolution, cross-sectional images of the esophagus and offers considerable advantages compared to current methods. While some recent studies have proposed cancer detection algorithms for single VLE frames, the study described in this paper is the first to make use of VLE volumes for the differentiation between dysplastic and non-dysplastic tissue. We explore the use of various voting schemes for a broad range of features and classification methods. Our results demonstrate that multi-frame analysis leads to superior performance, irrespective of the chosen feature-classifier combination. By using multi-frame analysis with straightforward voting methods, the Area Under the receiver operating characteristic Curve (AUC) is increased by an average of over 12% compared to using single VLE frames. When only considering methods that achieve expert performance or higher (AUC ≥ 0.81), an even larger performance improvement of up to 16.9% is observed. Furthermore, with many feature/classifier combinations showing AUC values ranging from 0.90 to 0.98, our experiments indicate that computer-aided methods can considerably outperform medical experts, who demonstrate an AUC of 0.81 using a recently proposed clinical prediction model.
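The abstract does not specify the exact voting schemes used, but the general idea of multi-frame voting can be sketched as follows: aggregate the per-frame scores of a single-frame classifier into one volume-level score before thresholding. The function below is a hypothetical illustration (the names `vote_frames`, the 0.5 threshold, and the two schemes are assumptions, not the paper's method); it shows mean voting and majority voting over the frames of one VLE region.

```python
def vote_frames(frame_scores, scheme="mean"):
    """Aggregate per-frame dysplasia scores into one volume-level score.

    frame_scores: iterable of per-frame classifier outputs in [0, 1] for
        the frames belonging to a single VLE region of interest.
    scheme: "mean" averages the raw scores; "majority" returns the
        fraction of frames whose score exceeds 0.5. Both are simple
        stand-ins for the "straightforward voting methods" the abstract
        refers to.
    """
    scores = [float(s) for s in frame_scores]
    if not scores:
        raise ValueError("frame_scores must be non-empty")
    if scheme == "mean":
        return sum(scores) / len(scores)
    if scheme == "majority":
        return sum(1 for s in scores if s > 0.5) / len(scores)
    raise ValueError(f"unknown voting scheme: {scheme}")

# Example: five noisy per-frame scores for one region. A single frame
# (e.g. the 0.4) could be misclassified on its own, while the aggregated
# score is well above a 0.5 decision threshold.
print(vote_frames([0.7, 0.4, 0.8, 0.6, 0.9], scheme="mean"))      # 0.68
print(vote_frames([0.7, 0.4, 0.8, 0.6, 0.9], scheme="majority"))  # 0.8
```

Averaging scores rather than hard votes preserves per-frame confidence, which is one plausible reason multi-frame aggregation can lift the AUC relative to single-frame classification.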
Database: OpenAIRE