Evaluation of an automated fetal myocardial performance index

Authors: Neama Meriki, David W. Chang, Simcha Yagel, Jingjing Wang, Alec W. Welsh, Fatima Crispi, Amanda Henry, Edgar Hernandez-Andrade, P. Maheshwari, Stephen J. Redmond, Helena M. Gardiner
Year of publication: 2016
Subject:
Source: Ultrasound in Obstetrics & Gynecology. 48:496-503
ISSN: 0960-7692
DOI: 10.1002/uog.15770
Abstract:
OBJECTIVE: To compare automated measurements of the fetal left myocardial performance index (MPI) with manual measurements for absolute value, repeatability and waveform acceptability.
METHODS: This was a multicenter international online study using images from uncomplicated, morphologically normal singleton pregnancies (16-38 weeks' gestation). Single-cardiac-cycle Doppler ultrasound images from 25 cases were selected, triplicated and randomized (n = 75). Six senior observers, unaware of the repetition of images, manually calculated MPI for each waveform, and the results were compared with the automated measurements. Intraobserver repeatability and interobserver reproducibility were assessed using intraclass correlation coefficients (ICCs) with 95% CIs. Agreement between each observer's manual MPI measurements and the corresponding automated measurements was evaluated using Bland-Altman plots and ICCs with 95% CIs. Variation between experts in classifying fetal MPI waveform quality was assessed using individual cardiac-cycle left MPI images previously classified by two authors as 'optimal', 'suboptimal' or 'unacceptable', with 30 images selected for each quality group. Ten images in each category were duplicated, and the resulting 120 images were randomized and then classified online by five observers. The kappa statistic (κ) was used to quantify interobserver and intraobserver agreement among the five observers.
RESULTS: The automated measurement software returned the same value for any given image, resulting in an ICC of 1.00. Manual measurements had intraobserver repeatability ICCs ranging from 0.69 to 0.97, and the interobserver reproducibility ICC was 0.78. Comparison of automated vs manual absolute MPI measurements for each observer gave ICCs ranging from 0.77 to 0.96. Interobserver agreement on image-quality classification gave κ = 0.69 (P
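The chance-corrected agreement statistic reported above can be illustrated with a minimal sketch of Cohen's kappa for two raters labelling waveform quality; the ratings and function below are illustrative only, not data or code from the study:

```python
# Cohen's kappa for interobserver agreement on categorical labels
# ('optimal' / 'suboptimal' / 'unacceptable'). Illustrative sketch,
# not the study's analysis code.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items where the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled independently,
    # given each rater's marginal label frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of six waveform images by two observers
a = ['optimal', 'optimal', 'suboptimal', 'unacceptable', 'suboptimal', 'optimal']
b = ['optimal', 'suboptimal', 'suboptimal', 'unacceptable', 'optimal', 'optimal']
print(round(cohens_kappa(a, b), 3))  # → 0.455
```

With more than two raters, as in the study's five-observer design, a multi-rater generalization such as Fleiss' kappa is the usual choice; the two-rater form above conveys the core idea of correcting raw agreement for chance.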
Database: OpenAIRE