Poster A59, Tuesday, August 20, 2019, 10:15 am – 12:00 pm, Restaurant Hall

Time course and neural signature of speech phonetic planning as compared to non-speech motor planning

Monica Lancheros¹, Anne-Lise Jouen¹, Marina Laganaro¹; ¹University of Geneva

The relationship between speech and non-speech oral abilities has been debated for many years, given that they share the same anatomical structures. However, only recently has the speech motor control literature investigated whether they also share the same neural substrates (e.g., Salmelin & Sams, 2002; Price, 2009). A clear account of the link between the speech and non-speech systems is relevant for understanding human motor behavior in general and speech motor planning in particular, as well as for understanding motor speech disorders. Here we capitalize on the comparison between the production of monosyllabic words (high-frequency syllables), monosyllabic pseudo-words (low-frequency syllables) and non-speech oral sequences to investigate the spatio-temporal dynamics of the "latest" stages of production, in which the linguistic message is transformed into the corresponding articulated speech. The non-speech oral sequences were closely matched to the words and pseudo-words in terms of acoustic and somatosensory targets. To separate linguistic encoding from speech encoding, we used a delayed speech and non-speech production task, in which speakers prepared an utterance but produced it overtly only after a cue that appeared following a short delay (Experiment 1). Additionally, to prevent preparation of the phonetic speech plan within the delay, the delay was filled with an articulatory suppression task in half of the participants (Laganaro & Alario, 2006) (Experiment 2). We compared the production of these voluntary non-speech vocal tract gestures to the production of French syllables, both behaviorally and with high-density electroencephalographic (EEG) evoked potentials (ERPs). Experiment 1, on one hand, revealed significant differences between the production of non-speech stimuli and pseudo-words, both in reaction times (RTs) and in stable electrophysiological patterns in an early time window.
More importantly, however, we found no difference in the electrophysiological patterns occurring in the 300 ms preceding articulation across the three stimulus types. Experiment 2 (with articulatory suppression), on the other hand, revealed significantly longer RTs for non-speech stimuli than for words. ERP results showed the same global electrophysiological patterns across stimulus types; however, the microstate duration and the global explained variance (GEV) of two late time windows, encompassing the 300 ms before articulation, differed significantly between non-speech and words, and between non-speech and pseudo-words. The results of Experiment 2 thus suggest that the latest stages of production of these three stimulus types recruit the same brain networks, but that these networks are engaged differently, in terms of duration and timing, when producing non-speech versus pseudo-words and non-speech versus words. Given that Experiment 1 did not yield any significant difference in late ERPs before articulation, the results of Experiment 2 cannot have been driven solely by late pre-articulatory processes, but were likely driven by the encoding processes disabled by the articulatory suppression task.

Themes: Speech Motor Control, Language Production
Method: Electrophysiology (MEG/EEG/ECOG)