SNL 2018

Poster A63, Thursday, August 16, 10:15 am – 12:00 pm, Room 2000AB

The impoverished comprehension of non-native speech in noise

Esti Blanco-Elorrieta1, Nai Ding2, Liina Pylkkänen1,3, David Poeppel1,4;1New York University, 2Zhejiang University, 3NYUAD Institute, 4Max Planck Institute

INTRODUCTION. There is strong evidence that under unfavorable listening conditions (e.g., noise), bilinguals show a deficit in the comprehension of their second language (L2) relative to their first language (L1), despite performing comparably in quiet listening conditions. For example, if bilingual speakers enter a noisy party, they will be able to follow the conversation if their interlocutor is speaking their native language, but they will have trouble understanding what is being said if the interlocutor speaks their second language. Although the prevalence of this phenomenon has been established previously (Florentine et al., 1984; Bradlow & Bent, 2002; Garcia-Lecumberri & Cooke, 2006; Rogers et al., 2006), the causes and neurobiological bases of this dissociation have not been elucidated. In this study we used neural entrainment to different linguistic levels of representation to investigate the bases of the impoverished comprehension of L2 speech in noise.

METHODS. We collected MEG data from 40 Chinese-English bilinguals. Participants listened to four-syllable isochronous sentences in English and in Chinese (cf. Ding, Melloni, Zhang, Tian, & Poeppel, 2016) at 4 different levels of noise, ranging from completely clear speech (15 dB) to unintelligible speech in a noisy background (-15 dB), in 7.5 dB intervals. Each sentence consisted of 4 monosyllabic words combined into a two-word noun phrase (adjective + noun) and a two-word verb phrase (verb + noun); together, the two phrases formed a four-word sentence (e.g., "big rocks block roads"). To meaningfully characterize the effect of noise across varied L2 proficiency levels, we tested native speakers of Chinese with low English proficiency (n = 16), native speakers of Chinese with high English proficiency (n = 12), and native speakers of Chinese who were currently English dominant (born to Chinese parents in the US; n = 12).

RESULTS.
Behavioral results show distinct psychometric curves for the comprehension of L1 and L2 speech, which vary with the language profile of the tested group. MEG results reveal two distinct phenomena: (i) tracking of the syllabic rhythm decreases linearly as noise increases; and (ii) tracking of higher-level phrasal structure is disrupted non-linearly by noise, as shown by (a) the complete absence of entrainment to phrases at the highest noise level (-15 dB) regardless of language profile, and (b) the fact that only native speakers (but not L2 speakers) track phrasal structure at -7.5 dB.

CONCLUSION. This study quantifies the influence of noise on the cortical tracking of linguistic structures in connected speech, and provides evidence that -7.5 dB may be the threshold noise level at which L2 comprehension is disrupted. Previous research has posited that greater availability of higher-level, top-down linguistic information may account for this difference between L1 and L2 comprehension. The present study shows that a more automatic, lower-level phenomenon, the oscillatory tracking of speech, may also underlie the prevalent effect of impoverished comprehension of L2 speech in noise.
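The frequency-tagging logic behind the entrainment measure can be illustrated with simulated data. This is a hedged sketch, not the study's actual analysis pipeline: the syllable rate (4 Hz) and phrase rate (2 Hz) follow the isochronous design of Ding et al. (2016), while the sampling rate, signal amplitudes, and peak-detection bandwidth are illustrative assumptions. A response that tracks both syllables and two-word phrases shows spectral peaks at both rates; loss of phrase-level tracking under noise would eliminate the lower-frequency peak while the syllable-rate peak merely shrinks.

```python
import numpy as np

# Illustrative frequency-tagging analysis (cf. Ding et al., 2016).
# Assumed rates: syllables at 4 Hz, two-word phrases at 2 Hz
# (four-syllable isochronous sentences). Simulated response, not MEG data.

fs = 100.0                      # sampling rate in Hz (assumed)
dur = 20.0                      # seconds of simulated recording
t = np.arange(0, dur, 1.0 / fs)

rng = np.random.default_rng(0)
# A listener tracking both syllable and phrase rhythms produces
# spectral peaks at 4 Hz and 2 Hz above the background noise floor.
response = (1.0 * np.sin(2 * np.pi * 4 * t)       # syllable-rate tracking
            + 0.6 * np.sin(2 * np.pi * 2 * t)     # phrase-rate tracking
            + 0.5 * rng.standard_normal(t.size))  # background activity

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def band_amplitude(f_target, bw=0.25):
    """Mean spectral amplitude within +/- bw Hz of f_target."""
    mask = np.abs(freqs - f_target) <= bw
    return spectrum[mask].mean()

# Entrainment at each linguistic level is quantified as the spectral
# peak at the tagged frequency relative to an untagged control frequency.
syllable_peak = band_amplitude(4.0)
phrase_peak = band_amplitude(2.0)
noise_floor = band_amplitude(3.0)  # no rhythm tagged at 3 Hz
```

With these settings both tagged peaks stand well above the control band; in the study's terms, the -7.5 dB condition for L2 listeners would correspond to `phrase_peak` falling to the noise floor while `syllable_peak` remains elevated.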

Topic Area: Multilingualism