Slide Slam

Slide Slam I1

Investigating listening effort using neural speech tracking and alpha oscillations: the effects of acoustic distortion and language of materials

Slide Slam Session I, Wednesday, October 6, 2021, 5:30 - 7:30 pm PDT

Jieun Song1,2, Paul Iverson2; 1Korea Advanced Institute of Science and Technology, 2University College London

Previous research has suggested that background noise or acoustic distortion (e.g., noise vocoding) can disrupt neural tracking of speech in the auditory cortex (e.g., Peelle et al., 2013). In contrast, our previous work has demonstrated that challenging listening conditions increase listening effort and thereby enhance neural tracking (Song & Iverson, 2018; Song et al., 2020); this enhancement was found in non-native listeners compared to native listeners. The aim of the present study was to clarify these inconsistent findings in the following ways. First, we used acoustic distortion that preserves F0 as well as the broad-band amplitude envelope, whereas the noise vocoding used in previous research eliminates periodicity. Second, we examined within-subject differences in neural tracking for native versus non-native languages, as well as differences between native and non-native listeners. Lexical processing was also measured using the N400 in order to examine speech processing at multiple stages, along with alpha power, which is thought to be an index of listening effort (e.g., Obleser et al., 2012). Electroencephalogram (EEG) recordings were made from native Korean speakers of English while they listened to English and Korean sentences; native English speakers participated in the English part of the experiment. A competing-talker background was present in all conditions, and the target speech was either normal or adaptively distorted in a way that reduced its spectral detail while preserving both F0 and the broad-band amplitude envelope. The results demonstrate that acoustic distortion increases neural tracking of speech acoustics in early auditory-cortical processing, likely because of increased listening effort. Neural tracking was likewise higher for L2 than for L1 speech, whereas lexical processing was reduced (i.e., smaller context-related differences in the N400) for L2 than for L1. These language differences were found within individuals (i.e., Korean vs. English sentences) as well as between listener groups. Preliminary results of a time-frequency analysis showed greater alpha power when listening to L2 (a within-subject difference), confirming that listeners deploy greater attentional resources in that condition. However, alpha power increased with increasing signal-to-noise ratios. Our findings demonstrate how increased listening effort affects auditory and lexical processing, but further investigation is needed to fully understand the results of the time-frequency analysis.
