
The syllable frequency effect before and after speaking

Poster E4 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Julia Chauvet1,2, Sophie Slaats1, David Poeppel2,3, Antje Meyer1; 1Max Planck Institute for Psycholinguistics, 2Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, 3New York University

Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: selecting the concepts to be expressed, accessing the grammatical and morpho-phonological representations of words, and carrying out the phonetic and articulatory encoding of the words. In addition, speakers monitor their planned speech output using sensorimotor predictive mechanisms. The current work concerns the hypothesized phonetic encoding stage and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations. Potential effects of syllable frequency stand to inform us about the trade-off between stored and assembled representations. To address this question, we run a series of immediate and delayed naming experiments in which adult native speakers of Dutch, on each trial, first read a (non-word) syllable (e.g., ‘kem’ or ‘kes’), prepare to produce it, and, upon presentation of a production cue, say it aloud. We exploit the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. The effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We first replicate this behavioural result in immediate naming experiments with a sample of 20 adult native speakers of Dutch. In subsequent experiments with approximately 30 participants (a sample size based on power analyses conducted after the first set of experiments), we also record participants' EEG. In the phonetic encoding stage (i.e., a time window of ~450 ms prior to articulation onset), we perform standard waveform analyses and spatio-temporal segmentations to assess qualitative and quantitative processing differences.
In a time window of 200 ms following articulation onset, we analyse auditory-evoked N1 responses, which (among other features) reflect the suppression of one's own speech. For the phonetic encoding window, we predict that the production of high-frequency vs. low-frequency syllables will result in distinct neurophysiological patterns, including diverging ERP waveform amplitudes and different global topographies. For the window following articulation onset, we predict more attenuated auditory-evoked N1 amplitudes for high- compared to low-frequency syllables, on the view that high-frequency syllables yield stronger predictions and therefore more attenuated N1/P2 responses. The production of low-frequency syllables, putatively less automatized, is expected to require closer monitoring and therefore to elicit larger N1/P2 amplitudes and longer latencies. Likewise, spectrotemporal and decoding analyses of the EEG data are predicted to reflect the syllable-frequency difference. The results will allow us to assess more precisely the role of syllabic units in setting sensory goals for the planning and monitoring of speech.

Topic Areas: Language Production, Speech Motor Control
