Presentation


Phonological networks in speech perception and production tested with fMRI


Poster E6 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Xenia Dmitrieva1,4, Jean-Luc Anton2,4, Amie Fairs1, Bissera Ivanova1,4, Elin Runnqvist1,4, Bruno Nazarian2, Julien Sein2, Friedemann Pulvermüller3, Sophie Dufour1, Kristof Strijkers1,4; 1Aix-Marseille University, CNRS, Laboratoire Parole et Langage, 2Aix-Marseille University, CNRS, Institut de Neurosciences de la Timone, Centre IRM, 3Freie Universität Berlin, Brain Language Laboratory, 4ILCB

In human communication, speech production and perception are tightly interlinked, yet historically they have been studied separately. Understanding the nature of this link, and the degree of neural overlap between the two modalities, is a crucial question for brain-language models. In the current study we therefore compare phonological processing across word perception and production. Two classes of brain-language models with contrasting predictions can be distinguished: partial separation models (PSM) and integration models (IM). According to PSM, phonological processing during speech production recruits both frontal and temporal regions, whereas only temporal regions are needed for speech perception. In contrast, according to IM, speech production and perception recruit the same frontal and temporal regions during phonological processing. To contrast these two models, the current fMRI study applies phoneme mapping using a contrast between bilabial and alveolar phonemes, since there is evidence for dissociable brain activity in the inferior frontal motor cortex (iFMC) and the posterior superior temporal cortex (pSTC) between bilabial (b/p) and alveolar (d/t) phonemes in both production and perception. Specifically, we used minimal phonological pairs of nouns starting with the bilabial consonants b and p and the alveolar consonants d and t (e.g., bilabial: “ballon” vs. alveolar: “talon”). The same 44 native French speakers performed a picture-naming (production) task and a passive-listening (perception) task. In addition, a functional localiser task was included to define, for each participant individually, the motor regions associated with producing lip (bilabial) and tongue (alveolar) consonants. Repeated-measures ANOVAs were performed to assess whether the same phoneme-specific regions in the motor and temporal cortices are recruited during speech production and perception (as predicted by IM) or not (as predicted by PSM).
As expected, results in the motor cortex showed that, in production, bilabial-initial items recruited the lip-associated ROI more strongly than words starting with alveolar consonants, and vice versa for the tongue-associated ROI. Interestingly, passive listening to bilabial-initial words also produced stronger activation in the exact same lip-associated region as in production, compared to listening to alveolar-initial items. For the tongue-associated ROI, the effect did not reach significance in perception. In other words, a significant somatotopy-by-phoneme effect was present in both production and perception, but with a stronger magnitude in production. A similar pattern emerged in the temporal cortex, where the same phoneme-processing regions appear to be recruited across the two modalities, though again more strongly in production. Taken together, our data provide evidence that phonological networks are shared across the language modalities, as predicted by IM.
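The somatotopy-by-phoneme effect described above can be expressed as an interaction contrast over mean ROI activations: (lip ROI: bilabial minus alveolar) minus (tongue ROI: bilabial minus alveolar). The following is a minimal sketch of that contrast; all beta values are hypothetical illustrations chosen to mirror the reported pattern (a positive effect in both modalities, larger in production), not the study's actual data.

```python
# Sketch of the somatotopy-by-phoneme interaction contrast.
# All beta values below are hypothetical, for illustration only.

def somatotopy_interaction(betas):
    """(lip: bilabial - alveolar) minus (tongue: bilabial - alveolar).

    A positive value indicates phoneme-specific somatotopy:
    bilabials preferentially engage the lip ROI and
    alveolars the tongue ROI.
    """
    return ((betas["lip"]["bilabial"] - betas["lip"]["alveolar"])
            - (betas["tongue"]["bilabial"] - betas["tongue"]["alveolar"]))

# Hypothetical mean ROI betas (arbitrary units) per modality.
production = {"lip":    {"bilabial": 1.20, "alveolar": 0.60},
              "tongue": {"bilabial": 0.55, "alveolar": 1.10}}
perception = {"lip":    {"bilabial": 0.40, "alveolar": 0.15},
              "tongue": {"bilabial": 0.20, "alveolar": 0.30}}

prod_effect = somatotopy_interaction(production)   # 1.15
perc_effect = somatotopy_interaction(perception)   # 0.35

# Both contrasts positive (shared somatotopy across modalities),
# but with a larger magnitude in production.
print(prod_effect, perc_effect)
```

In the study itself this interaction would be evaluated per participant within a repeated-measures ANOVA rather than on group means as sketched here.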

Topic Areas: Language Production, Speech Perception
