Author:
Rodina Bassiouny, Adel Mohamed, Karthi Umapathy, Naimul Khan
Language:
English
Year of publication:
2024
Subject:

Source:
IEEE Journal of Translational Engineering in Health and Medicine, Vol 12, Pp 119-128 (2024)
Document type:
article
ISSN:
2168-2372
DOI:
10.1109/JTEHM.2023.3327424
Description:
The objective of this study was to develop an interpretable system that detects specific lung features in neonates. A challenging aspect of this work is that normal lungs show the same visual features as lungs with pneumothorax (PTX). M-mode imaging is typically required to differentiate between the two cases, but generating it in the clinic is time-consuming, and the expertise needed to interpret it remains limited. Our system therefore automates M-mode generation by extracting Regions of Interest (ROIs) without a human in the loop. Object detection models, faster Region-Based Convolutional Neural Network (fRCNN) and RetinaNet, were employed to detect seven common Lung Ultrasound (LUS) features. The fRCNN predictions were then stored and used to generate M-modes. Beyond static feature extraction, a Hough-transform-based statistical method was used to detect “lung sliding” in these M-modes. Results showed that fRCNN achieved a higher mean Average Precision (mAP) of 86.57% (at an Intersection-over-Union (IoU) threshold of 0.2) than RetinaNet, which reached an mAP of only 61.15%. The accuracy of the generated ROIs was 97.59% for normal videos and 96.37% for PTX videos. Using this system, we correctly classified 5 PTX and 6 normal video cases with 100% accuracy. Automating the detection of seven prominent LUS features addresses the time-consuming manual evaluation of lung ultrasound in a fast-paced clinical environment. Clinical impact: This work offers a more accurate and efficient method for diagnosing lung disease in neonates.
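The reported mAP values are evaluated at an IoU threshold of 0.2. As a minimal illustration (a generic sketch, not code from the paper), Intersection-over-Union between an axis-aligned predicted box and a ground-truth box can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes.

    Each box is (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0
```

At a threshold of 0.2, a predicted ROI counts as a true positive when `iou(pred, truth) >= 0.2`; mAP then averages precision over recall levels and feature classes.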
Database:
Directory of Open Access Journals
External link: