Author:
Panteris V (Gastroenterology Department, Sismanogleio General Hospital, Marousi, Greece; Hellenic Society of Gastrointestinal Oncology)
Feretzakis G (School of Science and Technology, Hellenic Open University, Patras, Greece; Department of Quality Control, Research and Continuing Education, Sismanogleio General Hospital, Marousi, Greece)
Karantanos P (Gastroenterology Department, Sismanogleio General Hospital, Marousi, Greece)
Kalles D (School of Science and Technology, Hellenic Open University, Patras, Greece)
Verykios VV (School of Science and Technology, Hellenic Open University, Patras, Greece)
Panoutsakou M (Gastroenterology Department, Sismanogleio General Hospital, Marousi, Greece)
Karagianni E (Gastroenterology Department, Sismanogleio General Hospital, Marousi, Greece)
Zoubouli C (Pathology Department, Sismanogleio General Hospital, Marousi, Greece)
Vgenopoulou S (Pathology Department, Sismanogleio General Hospital, Marousi, Greece)
Pierrakou A (Pathology Department, Sismanogleio General Hospital, Marousi, Greece)
Theodorakopoulou M (Pathology Department, Sismanogleio General Hospital, Marousi, Greece)
Papalois AE (Special Unit for Biomedical Research and Education, School of Medicine, Aristotle University of Thessaloniki, Greece; Hellenic Society of Gastrointestinal Oncology)
Thomaidis T (2nd Gastroenterology Department, Hygeia Hospital, Athens, Greece; Hellenic Society of Gastrointestinal Oncology)
Dalainas I (Administration, Sismanogleio General Hospital, Marousi, Greece)
Kouroumalis E (Department of Gastroenterology, University of Crete Medical School, Heraklion, Greece)
Abstract:
The objective of this study was to compare different convolutional neural networks (CNNs), implemented in a Python-based deep learning pipeline and applied to white-light images of colorectal polyps acquired during colonoscopy, in order to estimate the accuracy of optical recognition of particular histologic types of polyps. The TensorFlow framework was used to train Inception V3, ResNet50, DenseNet121, and NASNetLarge on 924 images drawn from 86 patients.
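The comparison setup described above can be sketched with the Keras API in TensorFlow: each of the four backbones is given the same classification head and trained on the same polyp images, so that only the backbone varies between runs. This is a minimal illustration, not the authors' actual code; the input resolution, head layout, and number of histologic classes are assumptions.

```python
# Hypothetical sketch of the CNN comparison: one builder function that
# wraps any of the four Keras backbones named in the abstract with an
# identical classification head. Class count and image size are assumed.
import tensorflow as tf

NUM_CLASSES = 3            # assumed number of histologic polyp types
IMG_SHAPE = (224, 224, 3)  # assumed input resolution

BACKBONES = {
    "InceptionV3": tf.keras.applications.InceptionV3,
    "ResNet50": tf.keras.applications.ResNet50,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "NASNetLarge": tf.keras.applications.NASNetLarge,
}

def build_classifier(backbone_name: str) -> tf.keras.Model:
    """Attach a small softmax head to the chosen Keras backbone."""
    base = BACKBONES[backbone_name](
        include_top=False, weights=None, input_shape=IMG_SHAPE
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With this pattern, each candidate network can then be fitted on the same training split (e.g. `build_classifier("ResNet50").fit(train_ds, validation_data=val_ds)`) and the resulting accuracies compared on held-out images.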