
Cortical tracking of prosodic and statistical regularities in artificial speech

Poster B77 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Lorenzo Titone¹, Burkhard Maess¹, Lars Meyer¹,²; ¹Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, ²Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, Germany

During language acquisition, extracting statistical regularities is crucial for speech segmentation and word learning. Recent work suggests that rhythmic brain activity is involved in the learning of abstract statistical regularities. At the same time, rhythmic activity is also known to track acoustic patterns; specifically, delta-band activity has been found to track both prosodic and statistical cues. It remains unclear whether distinct neural circuits are involved in tracking prosodic and statistical rhythms, and how the two jointly impact learning. We employed an artificial syllable lexicon in a frequency-tagging MEG experiment, using a 2-by-2 design to orthogonally manipulate prosody (flat versus rhythmic) and syllable transitional probabilities (TPs; TP-uniform versus TP-rhythmic). Syllables were concatenated into isochronous streams (3.3 Hz), in which low-TP events and/or prosodic boundaries delineated tri-syllabic chunks; in the rhythmic conditions, these boundary events themselves occurred isochronously (1.1 Hz). Each condition included a learning phase and a test phase. In the learning phase, we exposed participants to the streams. In the test phase, we presented pairs of tri-syllabic chunks and part-chunks (spanning a chunk boundary) in a two-alternative forced-choice (2-AFC) task to assess explicit learning. We collected behavioral and MEG data from 30 participants. Behavioral analyses were performed with binomial mixed-effects models. We found a main effect of prosody, but no effect of TP structure and no interaction between the two factors. Post-hoc pairwise comparisons indicated learning in both the TP-rhythmic and TP-uniform conditions when prosody was rhythmic, but not when prosody was flat. Because rhythmic prosody and TPs both constitute strong cues for chunking, we additionally investigated carry-over effects across the experiment as a function of the presentation order of the four conditions, which was counterbalanced across participants. A model including the three-way interaction of TP, prosody, and condition order as predictors showed improved fit compared to reduced models. Interestingly, prosody facilitated learning when the TP-rhythmic condition with rhythmic prosody was presented in the first block, but not when it was presented in the last block. While it has been debated whether offline behavioral tasks are suitable for assessing statistical learning in artificial-language studies, neural tracking at the chunk rate is considered a more sensitive metric. The MEG analyses are ongoing, and we intend to present the results at the conference. For the learning phase, we expect high inter-trial phase coherence (ITPC) at the syllable rate (3.3 Hz) in all conditions, particularly in the auditory cortices. Additionally, we expect high ITPC at the chunk rate (1.1 Hz) in the right superior temporal gyrus for the rhythmic-prosody conditions and in left fronto-temporal regions for the TP-rhythmic conditions. For the test phase, we expect decreased M400 amplitudes in response to chunks relative to part-chunks, which would indicate reduced processing effort for learned lexical items. This study may show that the rhythmic processing of TPs and prosody is neurally dissociable and that the two jointly impact word learning.
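
To make the stream design concrete, the following is a minimal Python sketch of how such streams could be generated. The syllable inventory and chunk assignment are hypothetical placeholders for illustration, not the actual stimuli.

    import random

    # Hypothetical syllable inventory and chunk assignment -- for
    # illustration only, not the authors' actual stimulus set.
    SYLLABLES = ["ba", "di", "ku", "pe", "to", "ga", "mo", "fi", "ru"]
    CHUNKS = [SYLLABLES[i:i + 3] for i in range(0, 9, 3)]  # three tri-syllabic chunks

    def tp_rhythmic_stream(n_chunks, seed=0):
        # Within-chunk TPs are 1.0; chunk order is randomized (no
        # immediate repeats), so a low-TP event recurs every third
        # syllable -- i.e., at 1.1 Hz given a 3.3 Hz syllable rate.
        rng = random.Random(seed)
        stream, prev = [], None
        for _ in range(n_chunks):
            chunk = rng.choice([c for c in CHUNKS if c is not prev])
            stream.extend(chunk)
            prev = chunk
        return stream

    def tp_uniform_stream(n_syllables, seed=0):
        # All syllable transitions are equiprobable: no statistical chunk
        # structure; only prosody (if rhythmic) marks tri-syllabic chunks.
        rng = random.Random(seed)
        stream = []
        for _ in range(n_syllables):
            stream.append(rng.choice([s for s in SYLLABLES if not stream or s != stream[-1]]))
        return stream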
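
As for the planned frequency-tagging analysis, ITPC at a frequency f is the length of the mean resultant vector of the single-trial Fourier phases at f. Below is a minimal numpy sketch; the single-sensor array shape and sampling rate are assumptions, and the authors' actual MEG pipeline is not specified in the abstract.

    import numpy as np

    def itpc(epochs, sfreq, freqs=(3.3, 1.1)):
        # epochs: array of shape (n_trials, n_times) for a single sensor
        # (an assumed simplification). Returns, per frequency, the length
        # of the mean resultant vector of single-trial Fourier phases:
        # ITPC(f) = |mean_n exp(i * phi_n(f))|; 1 = perfect alignment.
        n_trials, n_times = epochs.shape
        spectra = np.fft.rfft(epochs, axis=1)
        fft_freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
        result = {}
        for f in freqs:
            bin_idx = int(np.argmin(np.abs(fft_freqs - f)))  # nearest FFT bin
            phases = np.angle(spectra[:, bin_idx])
            result[f] = float(np.abs(np.mean(np.exp(1j * phases))))
        return result

On this logic, ITPC near 1 at 3.3 Hz would reflect syllable tracking in all conditions, while elevated ITPC at 1.1 Hz would reflect chunk-level tracking; epochs must be long enough for the frequency resolution to separate the two bins.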

Topic Areas: Speech Perception, Language Development/Acquisition
