Deep learning prediction of error and skill in robotic prostatectomy suturing

Authors: Sirajudeen, N., Boal, M., Anastasiou, D., Xu, J., Stoyanov, D., Kelly, J., Collins, J. W., Sridhar, A., Mazomenos, E., Francis, N. K.
Source: Surgical Endoscopy; 2024-01-01, Preprints, pp. 1–9 (9 pp.)
Abstract:
Background: Manual objective assessment of skill and errors in minimally invasive surgery has been validated through correlation with surgical expertise and patient outcomes. However, assessment and error annotation can be subjective and time-consuming, which often precludes their use. Recent years have seen the development of artificial intelligence (AI) models aimed at automating the process, enabling error reduction and truly objective assessment. This study aimed to validate surgical skill ratings and error annotations of suturing gestures to inform the development and evaluation of AI models.
Methods: The SAR-RARP50 open data set was blindly and independently annotated at the gesture level for Robotic-Assisted Radical Prostatectomy (RARP) suturing. Manual objective assessment tools and an error annotation methodology, Objective Clinical Human Reliability Analysis (OCHRA), were used as ground truth to train and test vision-based deep learning methods for estimating skill and errors. Analysis included descriptive statistics as well as tool validity and reliability.
Results: Fifty-four RARP videos (266 min) were analysed. Strong to excellent inter-rater reliability (r = 0.70–0.89, p < 0.001) and a very strong correlation (r = 0.92, p < 0.001) between the objective assessment tools were demonstrated. Skill estimation for OSATS and M-GEARS achieved Spearman's correlation coefficients of 0.37 and 0.36, respectively, with normalised mean absolute errors corresponding to prediction errors of 17.92% (inverted "accuracy" 82.08%) and 20.6% (inverted "accuracy" 79.4%), respectively. The best-performing models for error prediction achieved a mean average precision of 37.14%, an area under the curve of 65.10%, and a Macro-F1 of 58.97%.
Conclusions: This is the first study to employ a detailed error detection methodology and deep learning models on real robotic surgical video. This benchmark evaluation of AI models sets a foundation and a promising approach for future advancements in automated technical skill assessment.
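The inverted "accuracy" reported above can be read as the complement of the normalised mean absolute error (NMAE). As a minimal worked formulation, assuming NMAE is the mean absolute error scaled by the range of the rating scale (the normalisation actually used in the paper may differ):

\[
\mathrm{NMAE} = \frac{1}{N}\sum_{i=1}^{N}\frac{\lvert \hat{y}_i - y_i \rvert}{y_{\max} - y_{\min}},
\qquad
\text{inverted accuracy} = 1 - \mathrm{NMAE}.
\]

Under this reading, an NMAE of 17.92% yields 1 - 0.1792 = 0.8208 (82.08%) for OSATS, and 20.6% yields 0.794 (79.4%) for M-GEARS, consistent with the figures quoted in the results.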
Database: Supplemental Index