Poster C77, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

When Do Words Get in the Way? An EEG Investigation of the Interaction between Talker and Linguistic Cues in Speech Processing

Philip Monahan¹, Chandan Narayan²; ¹University of Toronto, ²York University

Talker-specific and linguistic information has been shown to segregate relatively early in auditory cortical processing (Formisano, De Martino, Bonte, & Goebel, 2008). How and when these two types of information interact is largely unknown. Recent behavioral findings suggest that a listener’s discrimination of two talkers, each speaking a single monosyllabic English word, is affected by the lexical relatedness of the two words (Narayan, Mak, & Bialystok, 2016). Discrimination accuracy was significantly poorer when the same talker produced a sequence of words that were linguistically unrelated (e.g., “tooth”-“bread”) than when that sequence formed a compound (e.g., “tooth”-“paste”). Similarly, when different talkers produced words forming a compound, listeners’ discrimination of their voices was significantly worse than for unrelated words. We present the results of an EEG study suggesting that listeners expect words spoken by the same talker to be linguistically related, and that this expectation arises relatively early in the neurophysiological response during the sequential presentation of two words. Sixteen native speakers of English participated. Stimuli were 90 auditory word pairs that were either a repetition (e.g., “tooth”-“tooth”), a compound, or unrelated (see above). Each pair was repeated four times (360 total trials) by two male native speakers of English (i.e., M1-M1, M2-M2, M1-M2, M2-M1). EEG recordings were acquired with a 32-channel system (Brain Products GmbH, Germany). For statistical analyses, the dependent variable was the mean amplitude of the ERP waveform in four time windows: P50 (40–100 ms), N1 (100–175 ms), P2 (175–250 ms), and a late negativity (300–500 ms). These values were aggregated over frontal and central electrode sites. The data were submitted to a maximal mixed-effects model with fixed effects of Talker (Same, Different) and Condition (Repeated, Compound, Unrelated). ERP results revealed patterns consistent with the behavioral findings of Narayan et al. (2016). Significant Condition by Talker interactions were observed in the N1, P2, and late negativity time windows. Pairwise interaction comparisons were performed to assess the effect of Condition in different Talker contexts. In particular, we found significant differences in the ERP responses in these three time windows between the Compound and Unrelated conditions when the talkers were the Same (N1: χ²(1) = 8.09, p < 0.05; P2: χ²(1) = 9.06, p < 0.05; Late Negativity: χ²(1) = 9.66, p < 0.01). In the late negativity time window, we did not observe a difference between the Repeated and Compound conditions (χ²(1) = 0.38, p = 0.54), while the Unrelated condition elicited a larger negativity relative to both the Repeated (χ²(1) = 8.22, p < 0.05) and Compound conditions (χ²(1) = 6.74, p < 0.05). These results suggest that words spoken by the same talker carry an expectation of linguistic relatedness: unrelated words spoken by the same talker elicited a significantly larger potential in all three time windows relative to compound words, and in particular, the Unrelated condition elicited a larger negativity in the Late Negativity window than the other two conditions. Finally, the ERPs indicate that linguistic processing is integrated with the acoustic characteristics of talkers on a very short time scale, as differences are observed in middle-latency auditory ERP responses, from approximately 100 ms post-stimulus onset.
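The dependent measure described above (per-trial mean amplitude in each time window, aggregated over fronto-central sites) can be illustrated with a minimal sketch. The abstract does not name the analysis software; the sampling rate, epoch onset, channel indices, and the `epochs` array below are illustrative assumptions, not the authors’ pipeline.

```python
# Minimal sketch (not the authors' pipeline): per-trial mean ERP amplitude in the
# four time windows reported in the abstract, averaged over an assumed set of
# fronto-central channels. Sampling rate, epoch onset, and channel indices are
# illustrative assumptions.
import numpy as np

SFREQ = 500.0        # assumed sampling rate (Hz)
EPOCH_START = -0.1   # assumed epoch onset relative to word onset (s)

# Time windows from the abstract, in seconds
WINDOWS = {
    "P50": (0.040, 0.100),
    "N1": (0.100, 0.175),
    "P2": (0.175, 0.250),
    "late_negativity": (0.300, 0.500),
}

FRONTOCENTRAL = [0, 1, 2, 3]  # hypothetical indices of frontal/central channels


def window_means(epochs: np.ndarray) -> dict:
    """Return per-trial mean amplitude for each window.

    epochs: baseline-corrected array of shape (n_trials, n_channels, n_samples).
    """
    out = {}
    for name, (t0, t1) in WINDOWS.items():
        i0 = int(round((t0 - EPOCH_START) * SFREQ))
        i1 = int(round((t1 - EPOCH_START) * SFREQ))
        # average over window samples and over fronto-central channels
        out[name] = epochs[:, FRONTOCENTRAL, i0:i1].mean(axis=(1, 2))
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_epochs = rng.normal(size=(360, 32, int(0.7 * SFREQ)))  # 360 trials, 32 channels
    means = window_means(fake_epochs)
    print({name: values.shape for name, values in means.items()})
```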
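The statistical approach (a mixed-effects model with Talker × Condition fixed effects, evaluated with χ² likelihood-ratio comparisons) can likewise be sketched. The abstract does not name the software or the exact random-effects structure; the sketch below uses statsmodels’ MixedLM with by-subject random intercepts as a simplification of the maximal model described, and all column names and data are hypothetical.

```python
# Minimal sketch (assumptions flagged): mixed-effects model of window mean amplitude
# with Talker x Condition fixed effects and by-subject random intercepts, compared
# against a reduced model via a likelihood-ratio (chi-squared) test. This is a
# simplification of the "maximal" model in the abstract; such models are more
# typically fit with lme4 in R. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_subj, n_trials = 16, 360
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "talker": rng.choice(["same", "different"], size=n_subj * n_trials),
    "condition": rng.choice(["repeated", "compound", "unrelated"], size=n_subj * n_trials),
    "amplitude": rng.normal(size=n_subj * n_trials),  # e.g., N1 window mean (microvolts)
})

# Full model: Talker x Condition interaction, by-subject random intercepts.
full = smf.mixedlm("amplitude ~ talker * condition", df,
                   groups=df["subject"]).fit(reml=False)

# Reduced model: drop the interaction term.
reduced = smf.mixedlm("amplitude ~ talker + condition", df,
                      groups=df["subject"]).fit(reml=False)

# Likelihood-ratio test of the Talker x Condition interaction.
lr_stat = 2 * (full.llf - reduced.llf)
df_diff = full.model.exog.shape[1] - reduced.model.exog.shape[1]
p_value = chi2.sf(lr_stat, df_diff)
print(f"chi2({df_diff}) = {lr_stat:.2f}, p = {p_value:.3f}")
```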

Topic Area: Perception: Speech Perception and Audiovisual Integration
