
Speech prosody serves temporal prediction of language in the frontal cortex via neural entrainment in the auditory cortex

Poster C35 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Yulia Lamekina1, Lorenzo Titone1, Burkhard Maess1, Lars Meyer1,2; 1Max Planck Institute for Human Cognitive and Brain Sciences, 2Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Germany

INTRODUCTION: Temporal prediction assists language comprehension. In a series of recent behavioral studies, we showed that listeners employ rhythmic prosodic modulations to predict the duration of upcoming sentences, speeding up comprehension. At last year's SNL conference, we presented preliminary MEG evidence that this effect is driven by neural oscillations at delta-band frequency (i.e., < 4 Hz). Delta-band oscillations were previously known to synchronize with speech prosody. Our preliminary analyses showed that oscillations not only synchronize with prosody during stimulation, but also carry prosodic rhythms into the future to serve downstream temporal prediction. In the current submission, we provide the final sensor-space MEG analyses of this effect. More importantly, we also present new source-level MEG analyses to address the underlying functional neuroanatomy. Prior work has shown that synchronization to prosody relies on right auditory regions, whereas prediction in general is mostly associated with left frontal areas. The current results demonstrate that the shift from synchronization to prediction is indeed associated with a shift from auditory to frontal areas.

METHODS: Our study combined an initial repetitive prosodic rhythm (entrainment phase) with a subsequent visual sentence presentation (target phase). We used two prosodic contours (slow and fast), whose duration either matched or mismatched the duration of an upcoming visual sentence. In the entrainment part of each experimental trial, a contour was repeated three times to induce rhythmic entrainment. In the target part, the target sentence was presented word by word; presentation was duration-matched to the rate of the preceding prosodic stimulus. We first hypothesized that delta-band oscillations would entrain to the rate of the prosodic contours in the right hemisphere. Second, we expected that brain activity at the frequency of the preceding contour would still be detectable in the MEG signal recorded during the visual target sentence, predominantly in the left hemisphere. Third, we expected an error response when the duration of the target sentence mismatched the duration of the entraining contours.

RESULTS: During the entrainment phase, we observed sensor-space MEG coherence with the contours at the stimulation rate (p < 0.001, corrected), which was source-localized to right-hemispheric auditory areas. During the target phase, activity at the frequency of the preceding contour was still detectable in the MEG (p < 0.001, corrected); strikingly, sources shifted to the left frontal cortex. Critically, when the target sentence was shorter than expected from the preceding contour, an M300 ERF was observed at the offset of the short sentence, likely indicating an omission response under the expectation of a long sentence.

CONCLUSION: We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. The entrained oscillations appear to shift from right-hemispheric bottom-up (auditory) regions in the entrainment phase to left-hemispheric top-down (predictive) regions in the target phase. The conserved prosodic frequency determines the temporal prediction of the duration of upcoming stimuli. Prosodic entrainment could thus potentially serve as a facilitatory mechanism in dialogue, enhancing mutual comprehension.
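For readers unfamiliar with the coherence measure underlying the entrainment analysis, the following is a minimal sketch of cerebro-acoustic coherence on simulated data. It is illustrative only and not the authors' pipeline: the 250 Hz sampling rate, the 1.5 Hz repetition rate, and the toy sinusoidal envelope are assumptions chosen for demonstration.

    # Illustrative sketch (not the authors' pipeline): magnitude-squared
    # coherence between a prosodic envelope and one MEG channel, both
    # simulated. Assumed: fs = 250 Hz, contour repetition rate ~1.5 Hz.
    import numpy as np
    from scipy.signal import coherence

    fs = 250                        # sampling rate in Hz (assumed)
    f_stim = 1.5                    # hypothetical prosodic repetition rate in Hz
    t = np.arange(0, 60, 1 / fs)    # 60 s of simulated data
    rng = np.random.default_rng(0)

    # Toy prosodic envelope and an "entrained" MEG channel:
    # a phase-shifted copy of the rhythm buried in noise.
    envelope = np.sin(2 * np.pi * f_stim * t) + 0.1 * rng.normal(size=t.size)
    meg = 0.5 * np.sin(2 * np.pi * f_stim * t + 0.8) + rng.normal(size=t.size)

    # Welch-based coherence; long segments (8 s) give the frequency
    # resolution (0.125 Hz) needed to resolve delta-band peaks.
    f, cxy = coherence(envelope, meg, fs=fs, nperseg=fs * 8)

    # Restrict to the delta band (< 4 Hz) and report the peak frequency;
    # for these simulated signals it falls at the 1.5 Hz stimulation rate.
    peak = f[np.argmax(cxy[f < 4])]
    print(f"Peak delta-band coherence at {peak:.2f} Hz")

In the actual study, such a coherence spectrum would be computed per sensor between the acoustic contour and the MEG signal, with the statistics and source localization described above.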

Topic Areas: Perception: Auditory, Signed Language and Gesture
