SSRNet: A Deep Learning Network via Spatial‐Based Super‐resolution Reconstruction for Cell Counting and Segmentation

Authors: Lijia Deng, Qinghua Zhou, Shuihua Wang, Yudong Zhang
Language: English
Publication year: 2023
Subject:
Source: Advanced Intelligent Systems, Vol 5, Iss 10, Pp n/a-n/a (2023)
Document type: article
ISSN: 2640-4567
DOI: 10.1002/aisy.202300185
Description: Cell counting and segmentation are critical tasks in biology and medicine. Traditional cell counting methods are labor-intensive, time-consuming, and prone to human error. Recently, deep learning-based cell counting has become a trend, comprising point-based methods, such as cell detection and cell density prediction, and non-point-based methods, such as cell number regression. However, point-based counting relies heavily on well-annotated datasets, which are scarce and difficult to obtain, while non-point-based counting is less interpretable. Here, cell counting is approached by dividing it into two subtasks: cell number prediction and cell distribution prediction. To accomplish this, a deep learning network for spatial-based super-resolution reconstruction (SSRNet) is proposed that predicts the cell count and segments the cell distribution contour. To train the model effectively, an optimized multitask loss function (OM loss) is proposed that coordinates the training of the multiple tasks. Within SSRNet, a spatial-based super-resolution fast upsampling module (SSR-upsampling) is proposed for feature map enhancement and one-step upsampling; it can enlarge the deep feature map by 32 times without blurring, achieving fine-grained detail and fast processing. SSRNet uses an optimized encoder network: compared with the classic U-Net, SSRNet's runtime memory read and write consumption is only 1/10 of U-Net's, and its total number of multiply-add operations is 1/20 of U-Net's. Compared with traditional upsampling methods, SSR-upsampling completes the upsampling of the entire decoder stage in one step, reducing network complexity and achieving better performance. Experiments demonstrate that the method achieves state-of-the-art performance on cell counting and segmentation tasks. Because counting is non-point-based, training does not require the exact position of each cell to be annotated in the image. The code is publicly available on GitHub (https://github.com/Roin626/SSRnet).
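To make the one-step 32x upsampling idea concrete, below is a minimal sketch, not the authors' implementation, of a single-step 32x upsampling head built from a sub-pixel (PixelShuffle) convolution in PyTorch. The class name `OneStepUpsample32x`, the channel sizes, and the choice of PixelShuffle are assumptions for illustration only; the abstract does not specify the internals of SSR-upsampling.

```python
import torch
import torch.nn as nn

class OneStepUpsample32x(nn.Module):
    """Hypothetical sketch: enlarge a deep feature map 32x in a single step,
    in the spirit of the one-step upsampling described in the abstract."""

    def __init__(self, in_channels: int, out_channels: int, scale: int = 32):
        super().__init__()
        # Project features to (out_channels * scale^2) channels, then rearrange
        # the channel dimension into spatial resolution with one PixelShuffle.
        self.proj = nn.Conv2d(in_channels, out_channels * scale * scale,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.proj(x))

if __name__ == "__main__":
    feats = torch.randn(1, 256, 8, 8)            # deep encoder feature map (assumed size)
    head = OneStepUpsample32x(in_channels=256, out_channels=1)
    dist_map = head(feats)                       # (1, 1, 256, 256) distribution map
    count = dist_map.sigmoid().sum(dim=(1, 2, 3))  # crude count proxy, for illustration only
    print(dist_map.shape, count.shape)
```

A sub-pixel convolution first expands channels and then rearranges them into spatial positions, so the entire decoder-side enlargement happens in one step rather than through a stack of repeated 2x upsampling stages; this is the property the abstract attributes to SSR-upsampling, though the paper's actual module may differ.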
Database: Directory of Open Access Journals