
Temporal prediction and the neural tracking of linguistic structures

Poster D32 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port

Jordi Martorell (1,2,3,4), Nicola Molinaro (1,5), Giovanni Di Liberto (3,4,6), Lars Meyer (2,7); 1: Basque Center on Cognition, Brain and Language (BCBL); 2: Max Planck Institute for Human Cognitive and Brain Sciences; 3: Trinity College, The University of Dublin; 4: ADAPT Centre, Trinity College, The University of Dublin; 5: Ikerbasque, Basque Foundation for Science, Bilbao, Spain; 6: Trinity College Institute of Neuroscience, Trinity College, The University of Dublin; 7: Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Germany

Neurophysiological evidence suggests that brain activity tracks linguistic structures at multiple timescales during speech comprehension. Using the frequency-tagging paradigm, it has been shown that low-frequency neural responses synchronize their phase to the occurrence of acoustic (syllables) and syntactic (phrases and sentences) information (Ding et al., 2016). However, this paradigm relies on the periodic (time-fixed) presentation of continuous speech streams; it therefore conflates the temporal prediction of acoustic and syntactic events. Clarifying the impact of temporal prediction is fundamental to understanding whether similar neural mechanisms support the tracking of different types of linguistic information. To address this, our magnetoencephalography (MEG) experiment assesses the role of temporal prediction in the neural tracking of linguistic structures. We develop a novel version of the frequency-tagging paradigm that selectively manipulates the temporal predictability of acoustic and syntactic events. Our stimuli consist of acoustic streams (synthesized speech that originally preserves natural, time-varying durations) composed of 10 consecutive simple German sentences, each comprising 4 bisyllabic words (adjective-noun-verb-noun). Temporal predictability is achieved by imposing periodicity (i.e., selectively matching durations) at the relevant boundaries, progressively across linguistic levels (syllables/phrases/sentences). This manipulation yields 4 conditions, ranging from predictable (periodic, time-fixed) to unpredictable (aperiodic, time-varying) syllables, phrases, and sentences. Importantly, the power spectrum of the speech envelope shows modulations only at the syllable frequency (4 Hz), which are progressively reduced in the more aperiodic conditions. German speakers (n = 30) listen to our stimuli while performing a sentence-recognition task. To estimate neural tracking in the sensor-level MEG data, we analyze the non-uniformity of frequency-specific instantaneous phase angles (i.e., phase synchronization) at the corresponding linguistic boundaries per participant, using the Rayleigh test. Our results show that phase synchronization to syllable boundaries is significantly stronger for predictable than for unpredictable events, with a right-hemisphere lateralization emerging only when syllables are predictable. Phase synchronization to phrase boundaries is likewise significantly stronger for predictable than for unpredictable events, and remains unaffected by the simultaneous presence or absence of acoustic predictability. Phase synchronization to sentence boundaries, however, is comparable across conditions, suggesting that neither acoustic nor syntactic predictability affects sentence-level tracking. Importantly, unlike acoustic tracking, the syntactic tracking effects (both phrase- and sentence-level) display a left-hemisphere lateralization. Taken together, our results point to a dissociation between the neural tracking of (un-)predictable acoustic and syntactic boundaries. First, predictable syntactic events are tracked similarly regardless of the predictability of the acoustic information, possibly suggesting non-identical neural mechanisms across linguistic levels. Second, in line with this possibility, the asymmetric hemispheric patterns indicate that acoustic and syntactic tracking are subserved by distinct neural substrates. Third, the finding of tracking effects for sentences (but not phrases) in unpredictable contexts suggests that temporal prediction mechanisms might be more robust for time-varying syntactic events of longer duration. To conclude, by implementing a novel frequency-tagging paradigm, we provide new evidence for the differential impact of temporal prediction mechanisms on the neural tracking of acoustic and syntactic events. Our results reveal a dissociation between the simultaneous tracking of acoustic and syntactic structures.
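As a concrete illustration of the phase-synchronization analysis described above (a minimal sketch, not the authors' analysis code), the Python example below bandpass-filters a single sensor time series around a boundary frequency, extracts its instantaneous phase via the Hilbert transform, samples that phase at boundary onsets, and tests for non-uniformity with a Rayleigh test. All function names, parameter values, and the toy 4 Hz data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def rayleigh_test(phases):
    """Rayleigh test for non-uniformity of phase angles (radians)."""
    n = len(phases)
    r = np.abs(np.mean(np.exp(1j * phases)))  # mean resultant length
    z = n * r**2                              # Rayleigh statistic
    # Closed-form p-value approximation (Zar, Biostatistical Analysis)
    p = np.exp(np.sqrt(1 + 4 * n + 4 * (n**2 - (n * r)**2)) - (1 + 2 * n))
    return z, p

def boundary_phase_sync(signal, boundary_times, fs, freq, bandwidth=1.0):
    """Phase synchronization of one sensor signal to boundary onsets,
    estimated in a narrow band centered on `freq` Hz."""
    sos = butter(4, [freq - bandwidth / 2, freq + bandwidth / 2],
                 btype="bandpass", fs=fs, output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, signal)))  # instantaneous phase
    idx = np.round(np.asarray(boundary_times) * fs).astype(int)
    return rayleigh_test(phase[idx])                     # phase at boundaries

# Toy check: a noisy 4 Hz signal sampled once per cycle should show
# strongly clustered phases (large z, small p).
rng = np.random.default_rng(0)
fs, dur = 200, 60.0
t = np.arange(int(fs * dur)) / fs
sig = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
onsets = np.arange(1.0, dur - 1.0, 0.25)  # one "boundary" per 4 Hz cycle
z, p = boundary_phase_sync(sig, onsets, fs, freq=4.0)
print(f"Rayleigh z = {z:.1f}, p = {p:.2e}")
```

Under this scheme, stronger clustering of the phases sampled at boundary onsets yields a larger Rayleigh z and a smaller p, which is the sense in which phase synchronization to syllable, phrase, and sentence boundaries is quantified per participant above.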

Topic Areas: Syntax and Combinatorial Semantics, Speech Perception
