Deep Transformers For Fast Small Intestine Grounding In Capsule Endoscope Video
Author: | Xutao Lin, Xinkai Zhao, Chaowei Fang, Feng Gao, Guanbin Li, De-Jun Fan |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); artificial neural network; endoscope; computer vision; image segmentation; search algorithm; capsule endoscopy; transformer (machine learning model); artificial intelligence; gastroenterology & hepatology |
Source: | ISBI |
DOI: | 10.1109/isbi48211.2021.9433921 |
Description: | Capsule endoscopy is an innovative technique for examining and diagnosing intractable gastrointestinal diseases. Because of the huge amount of data, analyzing capsule endoscope videos is very time-consuming and labor-intensive for gastroenterologists. Developing intelligent algorithms for the regional positioning and analysis of these long videos is therefore essential to reduce the workload of clinicians and to improve the accuracy of disease diagnosis. In this paper, we propose a deep model to ground the shooting range of the small intestine in a capsule endoscope video lasting tens of hours. This is the first attempt to attack the small intestine grounding task with deep neural networks. We model the task as a 3-way classification problem in which every video frame is categorized as esophagus/stomach, small intestine, or colorectum. To exploit long-range temporal dependency, a transformer module is built to fuse the features of multiple neighboring frames. Based on the classification model, we devise an efficient search algorithm that locates the starting and ending shooting boundaries of the small intestine without exhaustively scanning the full video: it iteratively bisects the video segment, moving the middle point toward the target boundary. We collected 113 videos from a local hospital to validate our method. In 5-fold cross validation, the average IoU between the small intestine segments located by our method and the ground truths annotated by board-certified gastroenterologists reaches 0.945. |
Database: | OpenAIRE |
External link: |
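The boundary search summarized in the description can be sketched as a binary search over frame indices: since the capsule passes through the organs in a fixed order, the per-frame labels are monotone along the video, and each transition can be found by bisection instead of classifying every frame. This is a minimal illustrative sketch, not the paper's implementation; `classify_frame` is a hypothetical stand-in for the transformer-based 3-way classifier.

```python
# Organ labels in the order the capsule traverses them.
ESOPHAGUS_STOMACH, SMALL_INTESTINE, COLORECTUM = 0, 1, 2

def find_boundary(classify_frame, lo, hi, before_label):
    """Return the first frame index in (lo, hi] whose label differs from
    `before_label`, assuming labels are monotone along the video and
    classify_frame(hi) != before_label."""
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if classify_frame(mid) == before_label:
            lo = mid  # boundary lies strictly after mid
        else:
            hi = mid  # boundary lies at mid or before
    return hi

def ground_small_intestine(classify_frame, num_frames):
    """Locate the [start, end) frame range of the small intestine."""
    start = find_boundary(classify_frame, 0, num_frames - 1, ESOPHAGUS_STOMACH)
    end = find_boundary(classify_frame, start, num_frames - 1, SMALL_INTESTINE)
    return start, end
```

Each call to `find_boundary` needs only O(log N) classifier evaluations, which is what makes grounding feasible on videos with hundreds of thousands of frames.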