Volumetric memory network for interactive medical image segmentation.
Authors: Zhou T (Computer Vision Laboratory, ETH Zurich, Switzerland; electronic address: tianfei.zhou@vision.ee.ethz.ch); Li L (School of Computer Science and Technology, Beijing Institute of Technology, China); Bredell G (Computer Vision Laboratory, ETH Zurich, Switzerland); Li J (School of Computer Science and Technology, Beijing Institute of Technology, China); Unkelbach J (Department of Radiation Oncology, University Hospital of Zurich, Zurich, Switzerland); Konukoglu E (Computer Vision Laboratory, ETH Zurich, Switzerland)
Language: English
Source: Medical Image Analysis [Med Image Anal] 2023 Jan; Vol. 83, p. 102599. Date of Electronic Publication: 2022 Sep 06.
DOI: 10.1016/j.media.2022.102599
Abstract: Despite recent progress in automatic medical image segmentation, fully automatic results often fail to reach clinically acceptable accuracy and therefore typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network first produces an initial 2D segmentation for the chosen slice. The VMN then propagates this initial segmentation mask bidirectionally to all slices of the volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module suggests the next slice for interaction based on the segmentation quality of each slice produced in the previous round. The VMN has two distinctive features. First, the memory-augmented network design allows the model to quickly encode past segmentation information, which is later retrieved when segmenting other slices. Second, the quality assessment module lets the model directly estimate the quality of each segmentation prediction, enabling an active-learning paradigm in which users preferentially label the lowest-quality slice for multi-round refinement. The proposed network yields a robust interactive segmentation engine that generalizes well to various types of user annotation (e.g., scribble, bounding box, extreme clicking). Extensive experiments were conducted on three public medical image segmentation datasets (i.e., MSD, KiTS ...).

Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Copyright: © 2022 The Authors. Published by Elsevier B.V. All rights reserved.
Database: MEDLINE
External link:
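The abstract describes a multi-round, human-in-the-loop pipeline: a 2D interaction network segments the user-annotated slice, a memory-augmented network propagates the mask bidirectionally through the volume, and a quality assessment module picks the lowest-quality slice for the next round of interaction. The following Python sketch only illustrates that control flow under stated assumptions; `interaction_net`, `propagation_net`, and `quality_head` are hypothetical placeholders and do not correspond to the authors' published code or any released API.

```python
# Hypothetical sketch of the interactive loop summarized in the abstract.
# The callables interaction_net, propagation_net, and quality_head stand in
# for the paper's 2D interaction network, memory-augmented propagation
# network, and quality assessment module; they are placeholders, not the
# authors' implementation.

import numpy as np


def interactive_segmentation(volume, get_user_hint, interaction_net,
                             propagation_net, quality_head, rounds=3):
    """Multi-round, human-in-the-loop segmentation of a 3D volume.

    volume:        (D, H, W) array of slices.
    get_user_hint: callable(slice_index) -> user annotation
                   (scribble, bounding box, extreme clicks, ...).
    """
    depth = volume.shape[0]
    masks = [None] * depth
    slice_idx = depth // 2  # start from an arbitrary slice (here: the middle one)

    for _ in range(rounds):
        # 1) 2D interaction network: initial mask for the annotated slice.
        hint = get_user_hint(slice_idx)
        masks[slice_idx] = interaction_net(volume[slice_idx], hint)

        # 2) Memory-augmented propagation: push the mask bidirectionally,
        #    writing each segmented slice into memory and reading from it
        #    when segmenting the next slice (assumed interface).
        memory = propagation_net.init_memory(volume[slice_idx], masks[slice_idx])
        for i in range(slice_idx + 1, depth):          # forward pass
            masks[i], memory = propagation_net(volume[i], memory)
        memory = propagation_net.init_memory(volume[slice_idx], masks[slice_idx])
        for i in range(slice_idx - 1, -1, -1):         # backward pass
            masks[i], memory = propagation_net(volume[i], memory)

        # 3) Quality assessment: score every slice and ask the user to
        #    refine the lowest-quality one in the next round.
        scores = np.array([quality_head(volume[i], masks[i]) for i in range(depth)])
        slice_idx = int(scores.argmin())

    return np.stack(masks)
```

The quality-guided selection in step 3 is what makes the loop an active-learning procedure: rather than asking the user to inspect every slice, the model directs the next interaction to the slice it is least confident about.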