Poster C64, Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall

Neural processing of speech-in-noise in autism spectrum disorder

Stefanie Schelinski¹,², Katharina von Kriegstein¹,²; ¹Technische Universität Dresden, ²Max Planck Institute for Human Cognitive and Brain Sciences

Introduction: Recognising what another person is saying under noisy conditions (i.e., speech-in-noise perception) is a challenging everyday experience. There is evidence that speech-in-noise perception is restricted in people with autism spectrum disorder (ASD) (Alcantara et al., JChildPsycholPsychiatry, 2004; Schelinski & von Kriegstein, under review). However, the neural mechanisms underlying this speech perception difficulty are unclear. A recent meta-analysis showed that three cerebral cortex regions are particularly involved in speech-in-noise processing (Alain et al., HBM, 2018). Here we tested whether atypical responses in these brain regions might explain speech-in-noise perception difficulties in ASD.

Methods: 17 adults with ASD (mean age = 30.53 years; 14 males) and 17 typically developing adults (matched pairwise on age, sex, handedness, and full-scale intelligence quotient (IQ)) performed an auditory-only speech recognition task during functional magnetic resonance imaging (fMRI). All participants had normal hearing (confirmed with pure-tone audiometry) and did not take psychotropic medication. Participants in the ASD group had previously received a formal clinical diagnosis and underwent additional clinical assessment including the ADOS and ADI-R (Lord et al., JADD, 1994, 2000). During the fMRI experiment, we presented blocks of sentences either with or without noise (noise / no noise condition). Sentences were semantically neutral and phonologically and syntactically homogeneous. In the noise condition, sentences were presented together with pink noise (signal-to-noise ratio = -8). In both conditions, the first sentence of a block was the target sentence, and participants decided whether the content of the following sentences matched the content of the target sentence. Both conditions included the same set of sentences. All sentences were spoken by six male speakers. Before the fMRI session, participants were familiarised with the voices of all speakers together with their faces during an audio-visual training phase. For the fMRI analysis, we used a general linear model implemented in SPM12.

Results: Both groups showed typical speech-sensitive blood-oxygenation-level-dependent (BOLD) responses for the no noise condition, including bilateral superior and middle temporal sulci, inferior parietal, and inferior frontal brain regions (p < .05, family-wise error (FWE) corrected for the whole brain; e.g., Friederici, TiCS, 2012). For recognising speech in the noise as compared to the no noise condition, we found higher BOLD responses in the control group than in the ASD group in the left inferior frontal gyrus (left IFG), whereas both groups showed similar responses in the two other regions particularly involved in speech-in-noise processing (i.e., right insula and left inferior parietal lobule; p < .05, FWE corrected for the three regions of interest). An ANOVA revealed no significant group differences in speech recognition performance for any of the conditions (at p < .05). The ASD and the control group also did not differ significantly in the average amount of head movement (at p < .05).

Conclusion: Our findings suggest that in ASD the processing of speech in noisy conditions is particularly reduced in the left IFG. This difference might be important in explaining restricted speech comprehension in noisy environments in ASD.
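The abstract does not provide the authors' stimulus-generation code; the following is a minimal illustrative sketch of how one might mix a recorded sentence with pink noise at a fixed signal-to-noise ratio, as in the noise condition described above. It assumes the reported SNR of -8 is expressed in dB and uses a placeholder array in place of a real speech recording.

```python
"""Illustrative sketch (not the authors' code): mixing speech with pink noise
at a target SNR. Assumes SNR = -8 refers to -8 dB."""
import numpy as np


def pink_noise(n_samples: int, rng: np.random.Generator) -> np.ndarray:
    """Approximate pink (1/f) noise by spectrally shaping white noise."""
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                     # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)              # 1/f power = 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))      # normalise to [-1, 1]


def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so that the speech-to-noise power ratio equals snr_db,
    then add it to the speech signal."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise


# Hypothetical usage: a 2-second placeholder "sentence" at 44.1 kHz mixed at -8 dB SNR.
rng = np.random.default_rng(0)
fs = 44100
speech = rng.standard_normal(2 * fs) * 0.1  # stand-in for a recorded sentence
noisy_sentence = mix_at_snr(speech, pink_noise(len(speech), rng), snr_db=-8.0)
```

At -8 dB the noise power is roughly 6.3 times the speech power, which illustrates why the noise condition is substantially harder than the no noise condition.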

Themes: Speech Perception, Disorders: Developmental
Method: Functional Imaging