Automatic annotation of surgical activities using virtual reality environments

Authors: Arnaud Huaulmé, Pierre Jannin, Fabien Despinoy, Kanako Harada, Saul Alexis Heredia Perez, Mamoru Mitsuishi
Contributors: Laboratoire Traitement du Signal et de l'Image (LTSI), Université de Rennes (UR), Université de Rennes 1 (UR1), Institut National de la Santé et de la Recherche Médicale (INSERM), The University of Tokyo (UTokyo)
Year of publication: 2019
Subjects:
Models, Anatomic
Operating Rooms
Process modeling
Situation awareness
Computer science
Process (engineering)
Automatic annotation
0206 medical engineering
Biomedical Engineering
Health Informatics
02 engineering and technology
Virtual Reality
Bottleneck
030218 nuclear medicine & medical imaging
Task (project management)
Machine Learning
03 medical and health sciences
Annotation
0302 clinical medicine
Surgical simulation
Human–computer interaction
Humans
Radiology, Nuclear Medicine and Imaging
Reproducibility of Results
General Medicine
020601 biomedical engineering
Computer Graphics and Computer-Aided Design
Computer Science Applications
Surgical process model
Surgery, Computer-Assisted
[SDV.IB]Life Sciences [q-bio]/Bioengineering
Surgery
Computer Vision and Pattern Recognition
[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing
Source: International Journal of Computer Assisted Radiology and Surgery, Springer Verlag, 2019, 14 (10), pp. 1663-1671. ⟨10.1007/s11548-019-02008-x⟩
ISSN: 1861-6410, 1861-6429
DOI: 10.1007/s11548-019-02008-x
Description: International audience; Purpose - Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is incredibly costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. Methods - Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations, producing individual surgical process models as output. Validation - We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. Results and conclusion - On average, manual annotation took more than 12 min per 1 min of video to achieve low-level physical activity annotation, whereas automatic annotation is achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses thanks to its high precision and reproducibility.
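The conversion step described under Methods (turning interaction events already tracked by the VR simulator into timestamped activity annotations) can be illustrated with a short sketch. The following Python is a minimal, hypothetical illustration rather than the authors' implementation; the event names, the fields, and the simple grasp-pairing rule are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ContactEvent:
    """Hypothetical timestamped interaction reported by a VR simulator."""
    time: float        # seconds since task start
    instrument: str    # e.g. "left_grasper"
    target: str        # e.g. "peg_3"
    kind: str          # "grasp_start" or "grasp_end" (assumed event types)

@dataclass
class Activity:
    """One low-level annotation: a time span plus <action, instrument, target>."""
    start: float
    stop: float
    action: str
    instrument: str
    target: str

def events_to_activities(events: List[ContactEvent]) -> List[Activity]:
    """Pair start/end events into activities; one pending grasp per instrument."""
    pending: Dict[str, ContactEvent] = {}   # instrument -> open "grasp_start"
    activities: List[Activity] = []
    for ev in sorted(events, key=lambda e: e.time):
        if ev.kind == "grasp_start":
            pending[ev.instrument] = ev
        elif ev.kind == "grasp_end" and ev.instrument in pending:
            start = pending.pop(ev.instrument)
            activities.append(
                Activity(start.time, ev.time, "hold", ev.instrument, start.target)
            )
    return activities
```

For instance, feeding the function a "grasp_start" of peg_3 by left_grasper at 0.4 s and the matching "grasp_end" at 2.1 s yields a single "hold" activity spanning 0.4-2.1 s; the sequence of such activities over a trial constitutes one individual surgical process model.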
Database: OpenAIRE