Simulation of Brain Resection for Cavity Segmentation Using Self-Supervised and Semi-Supervised Learning
Author: | Pérez-García, Fernando, Rodionov, Roman, Alim-Marvasti, Ali, Sparks, Rachel, Duncan, John S., Ourselin, Sébastien |
Language: | English |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); education; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing |
Description: | Resective surgery may be curative for drug-resistant focal epilepsy, but only 40% to 70% of patients achieve seizure freedom after surgery. Retrospective quantitative analysis could elucidate patterns in resected structures and patient outcomes to improve resective surgery. However, the resection cavity must first be segmented on the postoperative MR image. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but require large amounts of annotated data for training. Annotation of medical images is a time-consuming process that requires highly trained raters and often suffers from high inter-rater variability. Self-supervised learning can be used to generate training instances from unlabeled data. We developed an algorithm to simulate resections on preoperative MR images. We curated a new dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images from 431 patients who underwent resective surgery. In addition to EPISURG, we used three public datasets comprising 1813 preoperative MR images for training. We trained a 3D CNN on artificially resected images created on the fly during training, using images from 1) EPISURG, 2) public datasets and 3) both. To evaluate trained models, we calculate the Dice score (DSC) between model segmentations and 200 manual annotations performed by three human raters. The model trained on data with manual annotations obtained a median (interquartile range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with no manual annotations, is 81.7 (14.2). For comparison, inter-rater agreement between human annotators was 84.0 (9.9). We demonstrate a training method for CNNs using simulated resection cavities that can accurately segment real resection cavities, without manual annotations. 13 pages, 6 figures; accepted at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020. A minimal illustrative sketch of cavity simulation and DSC computation follows this record. |
Database: | OpenAIRE |
External link: |
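The description above mentions two ingredients: simulating resection cavities on preoperative MR images to obtain training pairs without manual annotation, and evaluating segmentations with the Dice score (DSC). The sketch below illustrates both ideas in a greatly simplified form; it is not the authors' implementation. A toy "cavity" is carved as a random ellipsoid filled with CSF-like noise, and the DSC is computed between two binary masks. All names, shapes, and parameters (`simulate_resection`, `dice_score`, the 64^3 synthetic volume, the radius bounds) are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): carve a random ellipsoidal
# "cavity" into a synthetic preoperative volume, fill it with CSF-like noise,
# and use the carved region as the ground-truth label. A DSC function for
# evaluation is also shown. All parameters here are illustrative assumptions.
import numpy as np


def simulate_resection(image, rng, max_radius_voxels=20, csf_intensity=0.1):
    """Carve a random ellipsoidal cavity into `image` (D, H, W) and
    return the resected image plus the binary cavity mask."""
    shape = np.array(image.shape)
    center = rng.integers(low=max_radius_voxels, high=shape - max_radius_voxels)
    radii = rng.integers(low=5, high=max_radius_voxels, size=3)

    grid = np.indices(image.shape).astype(float)  # (3, D, H, W) voxel coordinates
    # Ellipsoid: sum(((x - c) / r)^2) <= 1
    normalized = (grid - center.reshape(3, 1, 1, 1)) / radii.reshape(3, 1, 1, 1)
    cavity = (normalized ** 2).sum(axis=0) <= 1.0

    resected = image.copy()
    # Replace resected tissue with noisy CSF-like intensities
    resected[cavity] = csf_intensity + 0.02 * rng.standard_normal(cavity.sum())
    return resected, cavity.astype(np.uint8)


def dice_score(prediction, reference, eps=1e-8):
    """DSC = 2 |A ∩ B| / (|A| + |B|) for two binary masks."""
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    return 2.0 * intersection / (prediction.sum() + reference.sum() + eps)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a preoperative MR volume (random intensities)
    preop = rng.random((64, 64, 64)).astype(np.float32)
    resected_image, cavity_mask = simulate_resection(preop, rng)

    # Deliberately imperfect fake "model output": the true cavity shifted by two voxels
    fake_prediction = np.roll(cavity_mask, shift=2, axis=0)
    print(f"DSC: {dice_score(fake_prediction, cavity_mask):.3f}")
```

In a training setup like the one the abstract describes, a pair such as `(resected_image, cavity_mask)` would be generated on the fly from each preoperative volume at every iteration, so the CNN never sees the same simulated cavity twice; the published method models cavity shape and intensity far more realistically than this ellipsoid toy.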