
Exploring the neural basis of phonemic representations from sounds and vision.


Poster C82 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Alice Van Audenhaege1, Stefania Mattioni1,2, Filippo Cerpelloni1, Remi Gau1,3, Olivier Collignon1; 1Psychological Sciences Research Institute (IPSY) - UCLouvain, Belgium, 2Ghent University, Belgium, 3McGill University

INTRODUCTION: Speech is a multisensory signal that we can decipher from the voice and/or the lips. While the successive computational steps needed to transform the auditory signal into abstract language representations have been extensively explored, little is known about how the visual input of speech is processed in the brain, or about how auditory and visual speech information converge onto a unified linguistic percept. In this study, we focus on the minimal abstract units of language, i.e. the phonological level. We aim to identify brain regions that are involved in auditory phonology (phonemes) and visual phonology (visemes). In particular, we aim to explore whether some brain regions carry both auditory and visual phonological representations, potentially in an abstract fashion.

METHOD: We rely on functional magnetic resonance imaging (fMRI) combined with searchlight multivariate pattern analysis (MVPA) in healthy adults to characterize brain regions that represent phonological information from vision and audition. More precisely, we classify brain activity patterns evoked by a limited set of consonant-vowel syllables, composed of 3 perceptually distant consonants and 3 perceptually distant vowels, presented either auditorily (speech) or visually (lipreading).

RESULTS: Preliminary analyses suggest that a network of visual, auditory, motor, and frontal regions is involved in viseme recognition. Interestingly, auditorily defined phonological regions (in the superior temporal gyrus, STG) seem to be involved in visual phonological representations as well. In line with previous literature, we are able to decode auditory phonemes in the classical speech perception network (auditory, motor, and frontal areas). Moreover, the overlap between auditory and visual decoding in mid- and posterior STG and in motor cortex indicates that these regions could be involved in the integration of auditory and visual speech phonology. We will next perform cross-modal classification between auditory and visual phonological representations in these multisensory regions to evaluate whether they implement a shared abstract representation for auditory and visual phonology. In addition, our analytical approach will be extended using individually defined regions of interest (namely, auditory phonological regions in the STG, and face- and word-selective areas in ventral occipito-temporal cortex) derived from functional localizers acquired in all participants.
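The abstract does not specify the searchlight implementation; the sketch below shows how a within-modality searchlight decoding analysis of this kind is commonly set up in Python with nilearn and scikit-learn. The file names, labels, sphere radius, classifier, and cross-validation scheme are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch of a searchlight decoding analysis (hypothetical inputs).
# "auditory_betas.nii.gz" would contain one beta map per syllable presentation;
# the labels and run indices below are placeholders, not the study's real design.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut
from nilearn.decoding import SearchLight

n_runs, n_syllables = 10, 9                                   # assumed design: 3 consonants x 3 vowels
consonants = np.tile(np.repeat(["p", "f", "l"], 3), n_runs)   # placeholder consonant labels
runs = np.repeat(np.arange(n_runs), n_syllables)              # run index for each beta map

searchlight = SearchLight(
    mask_img="brain_mask.nii.gz",    # hypothetical whole-brain mask
    radius=6.0,                      # sphere radius in mm (assumed)
    estimator=LinearSVC(),           # linear classifier fit within each sphere
    cv=LeaveOneGroupOut(),           # leave-one-run-out cross-validation
    n_jobs=-1,
)
searchlight.fit("auditory_betas.nii.gz", consonants, groups=runs)
accuracy_map = searchlight.scores_   # per-voxel cross-validated decoding accuracy

The same procedure, run on the visual (lipreading) condition, would yield the viseme decoding map whose overlap with the auditory map is described in the results.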
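The planned cross-modal classification could, under standard practice, amount to training a classifier on region-of-interest patterns from the auditory condition and testing it on patterns from the visual condition, and vice versa. The sketch below uses hypothetical pattern matrices and labels as placeholders; it illustrates the general technique rather than the authors' code.

# Sketch of cross-modal classification within one candidate multisensory ROI.
# X_aud, X_vis: (n_trials, n_voxels) activity patterns; y_aud, y_vis: syllable labels.
# All variable names and data here are hypothetical placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_aud, X_vis = rng.normal(size=(90, 200)), rng.normal(size=(90, 200))  # placeholder patterns
y_aud = y_vis = np.tile(["p", "f", "l"], 30)                           # placeholder labels

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_aud, y_aud)                  # learn consonant categories from speech patterns
aud_to_vis = clf.score(X_vis, y_vis)   # generalization to lipread (visual) patterns

clf.fit(X_vis, y_vis)                  # and the reverse direction
vis_to_aud = clf.score(X_aud, y_aud)
print(f"cross-modal accuracy: aud->vis {aud_to_vis:.2f}, vis->aud {vis_to_aud:.2f}")

Above-chance accuracy in both directions would be consistent with a shared, modality-independent phonological representation in that region.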

Topic Areas: Signed Language and Gesture, Multilingualism
