Slide Slam C4 Sandbox Series
Comparing phoneme categories in word production and perception with fMRI
Xenia Dmitrieva1, Amie Fairs1, Bissera Ivanova1, Elin Runnqvist1, Bruno Nazarian2, Julien Sein2, Jean-Luc Anton2, Friedemann Pulvermüller3, Sophie Dufour1, Kristof Strijkers1; 1Aix-Marseille University, CNRS, Laboratoire Parole et Langage, 2Aix-Marseille University, CNRS, Institut de Neurosciences de la Timone, Centre IRM, 3Freie Universität Berlin, Brain Language Laboratory
Historically, speech production and perception have been studied separately. Nevertheless, in most of our language behavior the production and perception of speech act in concert, and consequently understanding the degree of neural overlap between the language modalities has become an important question for brain-language models. However, most cross-modal research has focused on high-level language operations such as conversational dynamics and message-level processing; research investigating the degree of neural overlap between production and perception for the building blocks of language, namely words, is much scarcer. In the present study we therefore compare the production and perception of words with fMRI, and in particular investigate the spatial representations of phonemes, since different brain-language frameworks make different predictions about the cortical circuitry underpinning word form processing in the two modalities. According to Partial Separation Models (PSM), phonological processing during speech production recruits both frontal and temporal regions, whereas speech perception requires only temporal regions. In contrast, according to Integration Models (IM), the same frontal and temporal regions are involved in both perception and production. Our regions of interest (ROIs) are the inferior frontal motor cortex (iFMC) and the posterior superior temporal cortex (pSTC). In the current study, for which data collection has recently started, we contrast these hypotheses by exploring the phoneme mapping recruited within the same participants performing both a picture naming (production) and a passive listening (perception) task. Crucially, the same set of stimuli is used across tasks: 40 French nouns forming minimal pairs that start with bilabial (b/p) or alveolar (d/t) consonants (e.g. bombe [bomb]-tombe [tomb]), consonant categories with dissociable topography in iFMC. 40 participants will be tested (a sample size determined by power analyses).
Importantly, the production and perception tasks are designed to be as similar as possible in structure, and all participants will cycle through all stimuli in both tasks while undergoing BOLD imaging. This will allow us to contrast the same phoneme difference (bilabial/alveolar word-initial speech sounds) across the language modalities. Two analyses are planned. First, a 2×2×2×2 ANOVA with language modality (perception/production), phoneme type (bilabial/alveolar), area (frontal/temporal), and topography within the ROI (ventral/dorsal) as factors. While PSM predict an interaction involving language modality (namely the absence of motor cortex involvement for the phoneme contrast in speech perception), IM predict the same fronto-temporal sources in both modalities for the bilabial vs. alveolar phoneme contrast. Hence, a main effect of the language modality factor would be evidence in favour of PSM, while a main effect of phoneme type on the topography within the ROIs would provide circumstantial evidence for IM. Second, we intend to complement this analysis with multivariate pattern analyses in which we will train classifiers to distinguish phoneme type in one modality and then test those classifiers in the other. Within- but not cross-modal generalisation would be consistent with the predictions of PSM, whereas both within- and cross-modal generalisation would support IM. Overall, the current study addresses whether word processing recruits the same or different neural circuits in the perception and production modalities.
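The logic of the factorial analysis can be sketched with simulated data. The numbers below (cell means, noise level) are purely hypothetical and not taken from the study; the sketch only illustrates how a modality-by-area interaction contrast on per-participant ROI betas would behave under a PSM-like scenario, where the frontal (iFMC) response to the phoneme contrast is present in production but absent in perception:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj = 40  # planned sample size from the power analysis

# Hypothetical cell means (arbitrary units) for the phoneme-contrast effect.
# PSM-like pattern: frontal effect only in production; temporal effect in both.
true_means = {("production", "frontal"): 1.0,
              ("production", "temporal"): 1.0,
              ("perception", "frontal"): 0.0,   # PSM: no motor involvement
              ("perception", "temporal"): 1.0}

# Simulate per-participant betas with Gaussian noise around each cell mean.
data = {cell: m + rng.normal(0.0, 0.5, n_subj) for cell, m in true_means.items()}

# Modality-by-area interaction contrast, computed per participant:
# (prod_frontal - prod_temporal) - (perc_frontal - perc_temporal)
interaction = ((data[("production", "frontal")] - data[("production", "temporal")])
               - (data[("perception", "frontal")] - data[("perception", "temporal")]))

# One-sample t-statistic on the contrast; a mean reliably above zero is the
# interaction signature that would favour PSM over IM.
t = interaction.mean() / (interaction.std(ddof=1) / np.sqrt(n_subj))
print(f"interaction t({n_subj - 1}) = {t:.2f}")
```

Under an IM-like scenario (frontal effect present in both modalities), the same contrast would hover around zero, which is why the interaction term, rather than any single main effect, carries most of the theoretical weight.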
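The cross-modal decoding logic can likewise be illustrated on simulated multivoxel patterns. This is a minimal sketch under assumed conditions: the data are synthetic, the signal and noise scales are invented, and a simple nearest-centroid rule stands in for whatever classifier the study ultimately uses. It simulates the IM scenario, in which bilabial and alveolar words evoke the same voxel pattern in production and perception, so a classifier trained on one modality generalises to the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 50, 80  # hypothetical ROI size and trials per modality

# Shared phoneme-discriminative pattern across modalities (IM scenario).
phoneme_pattern = rng.normal(0.0, 1.0, n_voxels)

def simulate(labels, noise=2.0):
    """Trials x voxels: label (+1 bilabial / -1 alveolar) scales the pattern."""
    signal = labels[:, None] * phoneme_pattern
    return signal + rng.normal(0.0, noise, (len(labels), n_voxels))

labels = np.repeat([1, -1], n_trials // 2)
production = simulate(labels)
perception = simulate(labels)  # same underlying pattern -> IM scenario

def fit_centroids(X, y):
    """Class centroids for a nearest-centroid classifier."""
    return X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)

def predict(X, centroids):
    c_pos, c_neg = centroids
    d_pos = np.linalg.norm(X - c_pos, axis=1)
    d_neg = np.linalg.norm(X - c_neg, axis=1)
    return np.where(d_pos < d_neg, 1, -1)

centroids = fit_centroids(production, labels)            # train on production
within_acc = (predict(production, centroids) == labels).mean()
cross_acc = (predict(perception, centroids) == labels).mean()
print(f"within-modal accuracy: {within_acc:.2f}, cross-modal: {cross_acc:.2f}")
```

If the simulated perception data instead used an independent pattern (the PSM scenario for frontal cortex), within-modal accuracy would stay high while cross-modal accuracy would fall to chance, which is exactly the dissociation the planned analysis is designed to detect.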