Participants were seated in a dimly lit room in front of a 24-in. LCD computer screen placed at eye level. To ensure that participants remained attentive, an orthogonal task was implemented: a fixation cross, presented either on the nasion of the face or on the mouth, briefly (300 ms) changed color from black to red 10 times within every sequence. Participants had to respond as quickly and accurately as possible whenever they noticed the color change of the fixation cross.
Behavioral Facial Expression Measures
Two computerized behavioral facial expression processing tasks were administered.
The Emotion Recognition Task (Kessels et al. 2014; Montagne et al. 2007) investigates the explicit recognition of six dynamic basic facial expressions. Similar to the study of Evers et al. (2015), we applied two levels of emotion intensity: 50% and 100%. Children observe short video clips of a dynamic face in front view (4 clips per emotion) and have to select the corresponding emotion from the six written labels displayed on the left of the screen. Prior to task administration, participants were asked to provide an example situation for each emotion, to ensure that they understood the emotion labels.
In the Emotion-matching task (Palermo et al. 2013), participants have to detect a target face showing a different facial emotion compared to two distractor faces both showing the same expression. The same six emotions as in the Emotion Recognition Task are involved. Here, we used the shorter 65-item version of the task, preceded by four practice trials (for specifics, see Palermo et al. 2013).
For the statistical group-level analyses of the baseline-corrected amplitudes, we applied a linear mixed-model ANOVA (function ‘lmer’ from package ‘lme4’ in R; Bates et al. 2015), fitted with restricted maximum likelihood. Separate models were fitted with either the base or the oddball rate response as the dependent variable. Fixation (eyes vs. mouth), orientation (upright vs. inverted faces), and ROI (LOT, ROT, MO) were added as fixed within-subject factors, and group (ASD vs. TD) as a fixed between-subject factor. To account for the repeated testing, we included a random intercept per participant. Degrees of freedom were calculated using the Kenward–Roger method. Planned post hoc contrasts were tested for significance using a Bonferroni correction for multiple comparisons, i.e., by multiplying the p-values by the number of comparisons.
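The model structure and the Bonferroni adjustment described above can be sketched as follows. This is an illustrative Python sketch rather than the R/‘lme4’ pipeline actually used; the data frame, factor levels, and amplitudes are synthetic stand-ins, and note that statsmodels fits mixed models by REML but does not offer the Kenward–Roger degrees-of-freedom correction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one baseline-corrected amplitude per participant,
# fixation, orientation, and ROI (values are random, for illustration only).
rng = np.random.default_rng(0)
rows = []
for pid in range(20):
    group = "ASD" if pid < 10 else "TD"
    for fixation in ("eyes", "mouth"):
        for orientation in ("upright", "inverted"):
            for roi in ("LOT", "ROT", "MO"):
                rows.append(dict(pid=pid, group=group, fixation=fixation,
                                 orientation=orientation, roi=roi,
                                 amp=rng.normal(1.0, 0.3)))
df = pd.DataFrame(rows)

# Fixed within-subject factors, fixed between-subject factor, and a random
# intercept per participant; analogous to
# lmer(amp ~ fixation + orientation + roi + group + (1 | pid), REML = TRUE).
model = smf.mixedlm("amp ~ fixation + orientation + roi + group",
                    df, groups=df["pid"]).fit(reml=True)

def bonferroni(pvals):
    """Multiply p-values by the number of comparisons, capped at 1."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * len(p), 1.0)
```

The `bonferroni` helper mirrors the correction as stated in the text: each p-value is multiplied by the number of planned contrasts.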
In addition to the group-level analyses, we evaluated the significance of the fear detection response for each individual participant based on their z-scores. A response was considered significant if the z-score in at least one of the three ROIs exceeded 1.64 (i.e., p < 0.05, one-tailed: signal > noise).
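This per-participant criterion amounts to a simple threshold check, sketched below in Python; the ROI names come from the text, while the z-scores in the examples are hypothetical.

```python
Z_CRIT = 1.64  # one-tailed criterion, p < 0.05 (signal > noise)

def significant_fear_response(z_by_roi):
    """True when the z-score in at least one ROI exceeds the criterion."""
    return any(z > Z_CRIT for z in z_by_roi.values())

# Hypothetical individual z-scores for the three ROIs
print(significant_fear_response({"LOT": 1.2, "ROT": 2.1, "MO": 0.4}))   # True
print(significant_fear_response({"LOT": 0.9, "ROT": 1.1, "MO": -0.2}))  # False
```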
Subsequently, we applied a linear discriminant analysis (LDA) to the EEG data to classify individuals as belonging to either the ASD or the TD group. We carried out a variable selection (‘gamboost’ function in R; Buehlmann et al. 2018) to identify the most informative predictors, resulting in 12 input vectors for the LDA model, i.e., the first four oddball harmonics for each of the three ROIs. We expect these predictors to be highly correlated; however, such between-predictor correlations are handled by the LDA (Kuhn and Johnson 2013). Before performing the LDA classification, assumptions were checked. A Henze–Zirkler test (α = 0.05) with supplementary Mardia’s skewness and kurtosis measures showed a multivariate normal distribution of the variables, and a Box’s M-test (α = 0.05) revealed equal covariance matrices for both groups. In addition, we assessed the robustness of the classification model, addressing the issues of small sample size and possible over-fitting, by carrying out permutation tests (Noirhomme et al. 2014).
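The LDA classification with a permutation test can be sketched as below, here with scikit-learn rather than the R tools used in the study. The predictor matrix is a synthetic stand-in for the 12 selected features (four oddball harmonics per ROI), the group sizes and effect size are arbitrary, and `permutation_test_score` is one common implementation of the permutation approach, not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Synthetic stand-in for the 12 selected predictors
# (first four oddball harmonics x three ROIs); group sizes are arbitrary.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(20, 12)),   # e.g. TD
               rng.normal(0.8, 1.0, size=(20, 12))])  # e.g. ASD, shifted mean
y = np.array([0] * 20 + [1] * 20)

lda = LinearDiscriminantAnalysis()  # tolerates correlated predictors
score, perm_scores, p_value = permutation_test_score(
    lda, X, y, cv=StratifiedKFold(n_splits=5),
    n_permutations=200, random_state=1)
```

The permutation p-value compares the cross-validated accuracy against the null distribution obtained by refitting on shuffled labels, which is what guards against over-fitting in small samples.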
For the behavioral data of the orthogonal task and the Emotion-matching task, the assumptions of normality and homoscedasticity were checked using a Shapiro–Wilk test and Levene’s test, respectively. For normally distributed data, an independent-samples t test was applied; otherwise, we performed a Mann–Whitney U test. When the assumption of homogeneity of variances was violated, degrees of freedom were corrected using the Welch–Satterthwaite method. For the Emotion Recognition Task, we applied a linear mixed-model ANOVA, with intensity level (50% vs. 100%) and expression (anger, fear, happiness, sadness, disgust, surprise) as fixed within-subject factors and group as a between-subject factor. Again, we included a random intercept per participant.
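The decision logic for the two-sample comparisons can be sketched with scipy; this is an illustrative Python helper, not the authors' code. In scipy's `ttest_ind`, passing `equal_var=False` invokes Welch's t test, whose degrees of freedom follow the Welch–Satterthwaite equation.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Choose the two-sample test as described above: Shapiro-Wilk for
    normality, Levene for homoscedasticity, Mann-Whitney U as fallback."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if not normal:
        return stats.mannwhitneyu(a, b, alternative="two-sided")
    equal_var = stats.levene(a, b).pvalue > alpha
    # equal_var=False applies Welch's correction (Welch-Satterthwaite df)
    return stats.ttest_ind(a, b, equal_var=equal_var)

# Illustrative call on synthetic samples
rng = np.random.default_rng(2)
result = compare_groups(rng.normal(0, 1, 30), rng.normal(0.5, 1, 30))
```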
All assumptions in terms of linearity, normality, and constant variance of the residuals were verified and met for all linear mixed-model ANOVAs.
Due to equipment failure, data on the Emotion Recognition Task were missing for one TD participant. In addition, data of the Emotion-matching task were discarded for one TD participant because he did not follow the instructions and randomly pressed the buttons.
All analyses were performed both with and without inclusion of the colorblind children, the ASD children with comorbidities, and the ASD children taking medication. As their inclusion/exclusion did not affect any of the results, we only report the results with all participants included.