Additional file 1 of Atypical gaze patterns in autistic adults are heterogeneous across but reliable within individuals

Author: Keles, Umit; Kliemann, Dorit; Byrge, Lisa; Saarimäki, Heini; Paul, Lynn K.; Kennedy, Daniel P.; Adolphs, Ralph
Year of publication: 2022
DOI: 10.6084/m9.figshare.21206606
Description: Additional file 1.
Figure S1: Automatic segmentation of frames into areas of interest (AOIs). A Because of copyright restrictions on the sitcom “The Office”, the visualization is illustrated with a sample royalty-free image (by Alena Darmel from Pexels.com). B Automatic segmentation of video frames to detect regions depicting human body parts, including the head (yellow), hands (pink), and other body parts (blue). The remaining non-shaded areas (i.e., the black areas in panel D) were taken as non-social context. C Automatic segmentation of frames to detect regions depicting human faces and to estimate five facial keypoints: the two eyes, the nose, and the two sides of the mouth. These keypoints were used to define the eye (orange) and mouth (turquoise) areas within each frame. D Segmentation results from panels B and C are combined. E A sample gaze point is shown as a disk with a diameter of 1 degree of visual angle. F The gaze point was combined with the AOIs to determine which AOI the gaze fell on.
Table S1: Effect size of the differences (quantified with Cohen’s d) between groups in their percentage of total gaze time on the screen, on faces, and on eyes, and in their average heatmap correlation with TD reference gaze heatmaps. Cohen’s d between the groups (TD-ASD) was computed within randomly sampled epochs of the videos (duration given in rows; see Methods) over 10,000 iterations and then averaged across iterations. Values in parentheses show the statistical significance of the effect size (bootstrap test, FDR corrected for multiple comparisons within each epoch duration). This table complements Fig. 1C, D using a sampling procedure that examines the effect of reducing the amount of data on estimated group differences. An asterisk denotes p < 0.001.
Table S2: Spearman’s correlation among individuals within a group in their gaze time to various AOIs and in their average gaze heatmap correlation with TD reference heatmaps. Correlation values were computed between two randomly sampled epochs of the videos (duration given in columns) over 10,000 sampling iterations and then averaged across iterations. Values in parentheses show the statistical significance of the correlation (bootstrap test, FDR corrected for multiple comparisons within each epoch duration). This table complements the analyses in Fig. 2C, F, I, L for different sampling epoch durations. An asterisk denotes p < 0.001.
Table S3: In the first row (“ASD - Gaze to faces”), Corr(X_EpA, X_EpB) reports Spearman’s correlation (and its statistical significance, with an asterisk denoting p < 0.001) between data from two separate videos (Episodes A and B) for the percentage of gaze time to faces in the ASD group (as shown in Fig. 2A). Corr(X_EpA, On-Screen_EpA) reports the correlation between on-screen and face gaze times in Episode A for the same group. ParCorr(X_EpA, X_EpB) reports the residual (partial) correlation between gaze times to faces in Episodes A and B after partialing out the effect of on-screen gaze time from gaze time to faces separately in each episode. The remaining rows repeat this analysis for the other gaze features and for the TD group.
Table S4: Spearman’s correlation (and its statistical significance, uncorrected) between four gaze features (percentage of on-screen, face-, and eye-looking time, and heatmap correlations with TD reference heatmaps) and an autism severity measure (calibrated severity scores, CSS-Overall, generated with the Hus and Lord algorithm; see main text) in Episode A (EpA) and Episode B (EpB).
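As a rough illustration of the epoch-sampling procedure described for Tables S1 and S2, the sketch below shows one iteration loop in which a group contrast (Cohen’s d) is computed within a randomly sampled video epoch and then averaged over iterations. This is a minimal sketch, not the authors’ code; the array layout, the uniform sampling of epoch onsets, and the pooled-standard-deviation form of Cohen’s d are illustrative assumptions.

```python
# Minimal sketch of the epoch-sampling bootstrap behind Table S1 (assumed layout:
# per-frame gaze-on-AOI indicators, one row per participant).
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

def sampled_effect_size(td_gaze, asd_gaze, epoch_len, n_iter=10_000):
    """td_gaze, asd_gaze: (n_subjects, n_timepoints) arrays of 0/1 gaze-on-AOI
    indicators; epoch_len: epoch duration in timepoints (an assumed unit)."""
    n_time = td_gaze.shape[1]
    ds = np.empty(n_iter)
    for i in range(n_iter):
        start = rng.integers(0, n_time - epoch_len)                # random epoch onset
        td = td_gaze[:, start:start + epoch_len].mean(axis=1)      # fraction of epoch on AOI, per TD subject
        asd = asd_gaze[:, start:start + epoch_len].mean(axis=1)    # fraction of epoch on AOI, per ASD subject
        ds[i] = cohens_d(td, asd)                                  # TD-ASD effect size for this epoch
    return ds.mean()                                               # average across iterations
```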
Table S5: Effect of familiarity with an episode on gaze features. A 9-point questionnaire (rated from 1 to 9) about prior familiarity with each episode was used to split participants into those who were familiar (ratings > 5) or unfamiliar (ratings < 5) with the episode. The table reports the t-statistic (and its p value) comparing the means of familiar and unfamiliar participants in the ASD group, in the TD group, and across all participants regardless of ASD diagnosis, for each of the four gaze features (percentage of on-screen, face-, and eye-looking time, and average heatmap correlations with TD reference heatmaps). T tests were two-tailed and unpaired, assuming equal variance. For Episode A (EpA), 17 (24) autistic individuals and 50 (47) TD controls were familiar (unfamiliar) with the episode. For Episode B (EpB), 13 (32) autistic individuals and 39 (52) TD controls were familiar (unfamiliar) with the episode.
Table S6: The change in fingerprinting identification accuracy as a function of sampling epoch duration. The fingerprinting analysis was performed using either eight gaze features (the percentage of time spent looking at the screen, faces, non-social content, non-head body parts, hands, eyes, and the mouth, plus heatmap correlations with TD reference heatmaps) or four features (percentage of on-screen, face-, and eye-looking time, and heatmap correlations). Accuracy values were computed from two randomly sampled epochs of the videos (duration given in columns) over 10,000 iterations and then averaged across iterations. Values in parentheses show the statistical significance of the accuracy (bootstrap test, FDR corrected for multiple comparisons within each epoch duration). An asterisk denotes p < 0.001.
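As an illustration of the identification step summarized in Table S6, the sketch below shows one common fingerprinting scheme: each individual’s gaze-feature vector from one epoch is matched to the most similar vector from the other epoch, and accuracy is the fraction of correct self-matches. The correlation-based similarity and the array shapes are assumptions, not necessarily the authors’ exact procedure.

```python
# Minimal sketch of a correlation-based fingerprinting identification step
# (assumed input: one row of gaze features per subject, same subject order in both epochs).
import numpy as np

def identification_accuracy(features_epoch_a, features_epoch_b):
    """features_epoch_a, features_epoch_b: (n_subjects, n_features) arrays."""
    n_subjects = features_epoch_a.shape[0]
    # Correlate every epoch-A feature vector with every epoch-B feature vector.
    sim = np.corrcoef(features_epoch_a, features_epoch_b)[:n_subjects, n_subjects:]
    predicted = sim.argmax(axis=1)  # best-matching epoch-B subject for each epoch-A subject
    return np.mean(predicted == np.arange(n_subjects))  # fraction of correct self-matches
```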
Database: OpenAIRE