A Neurosurgical Instrument Segmentation Approach to Assess Microsurgical Movements.

Author: DANILOV, Gleb, PILIPENKO, Oleg, KOSTYUMOV, Vasiliy, TRUBETSKOY, Sergey, MALOYAN, Narek, NUTFULLIN, Bulat, ILYUSHIN, Eugeniy, PITSKHELAURI, David, ZELENOVA, Alexandra, BYKANOV, Andrey
Source: Studies in Health Technology & Informatics; 2024, Vol. 321, p185-189, 5p
Abstract: The ability to recognize anatomical landmarks, microsurgical instruments, and complex scenes and events in a surgical wound using computer vision presents new opportunities for studying the effectiveness of microsurgery. In this study, we aimed to develop an artificial intelligence-based solution for detecting, segmenting, and tracking microinstruments using a neurosurgical microscope. We developed a technique for processing videos from the microscope camera that creates a segmentation mask for the instrument and subsequently tracks it. We compared two segmentation approaches: (1) semantic segmentation using Vision Transformers (the pre-trained domain-specific EndoViT model), enhanced with tracking as described by Cheng Y. et al. (our proposed approach), and (2) instance segmentation with tracking based on the YOLOv8l-seg architecture. We conducted experiments on the CholecSeg8k dataset and our proprietary set of neurosurgical videos (PSNV) recorded with the microscope. Our approach with tracking outperformed the YOLOv8l-seg-based solution and the EndoViT model without tracking on both the CholecSeg8k (mean IoU = 0.8158, mean Dice = 0.8657) and PSNV (mean IoU = 0.7196, mean Dice = 0.8202) datasets. Our experiments on identifying neurosurgical instruments in a microscope's field of view demonstrate the high quality of these technologies and their potential for valuable applications. [ABSTRACT FROM AUTHOR]
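Note: the abstract reports mean IoU and Dice scores for the predicted instrument masks. As a point of reference only (not the authors' evaluation code), a minimal NumPy sketch of how these two overlap metrics are typically computed per frame and averaged over a video; the helper names are illustrative assumptions:

```python
import numpy as np

def iou_and_dice(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7):
    """Per-frame IoU and Dice for binary instrument masks (True = instrument pixel)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / (union + eps)
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    return float(iou), float(dice)

def mean_scores(pred_masks, gt_masks):
    """Average IoU and Dice over a sequence of frames (hypothetical helper for illustration)."""
    scores = [iou_and_dice(p, g) for p, g in zip(pred_masks, gt_masks)]
    ious, dices = zip(*scores)
    return float(np.mean(ious)), float(np.mean(dices))
```

Under these definitions, the reported values (e.g., mean IoU = 0.8158, mean Dice = 0.8657 on CholecSeg8k) would correspond to such per-frame overlap scores averaged across the evaluated frames.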
Database: Complementary Index