Advancing epilepsy diagnosis: A meta-analysis of artificial intelligence approaches for interictal epileptiform discharge detection.

Authors: Borges Camargo Diniz, Jordana; Silva Santana, Laís; Leite, Marianna; Silva Santana, João Lucas; Magalhães Costa, Sarah Isabela; Martins Castro, Luiz Henrique; Mota Telles, João Paulo
Source: Seizure; Nov 2024, Vol. 122, p80-86, 7p
Abstract:
• The first meta-analysis evaluating AI's diagnostic performance in detecting IEDs.
• A minority of models validate their performance on external datasets.
• Models validated with resampling methods outperformed those validated on external datasets.
• Creating well-defined, multi-centric, prospectively labeled datasets is a priority.
Interictal epileptiform discharges (IEDs) in electroencephalograms (EEGs) are an important biomarker for epilepsy. Currently, the gold standard for IED detection is visual analysis performed by experts. However, this process is subject to expert bias and time-consuming. Developing fast, accurate, and robust EEG-based detection methods for IEDs may facilitate epilepsy diagnosis. We aim to assess the performance of deep learning (DL) and classic machine learning (ML) algorithms in classifying EEG segments into IED and non-IED categories, as well as in determining whether an entire EEG contains IEDs. We systematically searched PubMed, Embase, and Web of Science following PRISMA guidelines. We excluded studies that only performed detection of IEDs rather than binary segment classification. Risk of bias was evaluated with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Meta-analysis, with the overall area under the Summary Receiver Operating Characteristic curve (SROC), sensitivity, and specificity as effect measures, was performed with R software. A total of 23 studies, comprising 3,629 patients, were eligible for synthesis. Eighteen models performed discharge-level classification, and six performed whole-EEG classification. For IED-level classification, three models were validated on an external dataset of more than 50 patients and achieved a sensitivity of 84.9 % (95 % CI: 82.3–87.2) and a specificity of 68.7 % (95 % CI: 7.9–98.2). Five studies reported model performance using both internal validation (cross-validation) and external datasets.
The meta-analysis revealed higher performance for internal validation, with 90.4 % sensitivity and 99.6 % specificity, than for external validation, which showed 78.1 % sensitivity and 80.1 % specificity. Meta-analysis thus showed higher apparent performance for models validated with resampling methods than for those validated on external datasets. Only a minority of models used more robust validation techniques, and reliance on resampling alone often leads to overfitting. [ABSTRACT FROM AUTHOR]
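As a minimal sketch of the effect measures pooled above, the following shows how per-study sensitivity and specificity are derived from a confusion matrix. The counts are illustrative only (chosen to mirror the pooled external-validation point estimates) and are not data from any of the included studies:

```python
# Hypothetical example: computing the two effect measures pooled in the
# meta-analysis from one study's confusion matrix. Counts are invented
# for illustration, not taken from the paper.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of IED segments correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of non-IED segments correctly rejected."""
    return tn / (tn + fp)

# Illustrative counts for a single model on a single test set
tp, fn, tn, fp = 849, 151, 687, 313
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 84.9%
print(f"specificity = {specificity(tn, fp):.1%}")  # 68.7%
```

In the bivariate meta-analytic approach the review describes, such study-level sensitivity/specificity pairs are then jointly modeled to produce the summary ROC curve and its pooled estimates.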
Database: Supplemental Index