Presentation

Functional and structural connectivity underlying silent visual speech perception

Poster D115 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Maëva Michon¹, Francisco Aboitiz¹; ¹Laboratory for Cognitive and Evolutionary Neuroscience, Faculty of Medicine, Pontificia Universidad Católica de Chile

Relevant visual information available in speakers’ faces during face-to-face interactions improves speech intelligibility. Visual speech perception requires processing faces that convey linguistic information through the movements of the speech effectors. Additionally, visual and auditory speech perception often co-occur during face-to-face conversations. The perception of faces is known to elicit brain activity in the fusiform face area (FFA), with preferential responsiveness in the right hemisphere. In contrast, a region of the fusiform gyrus called the visual word form area (VWFA) preferentially responds to written words and letters in the left hemisphere, suggesting a specialization for the visual processing of linguistic symbols. Recently, a third visual pathway (TVP) was identified in the right hemisphere of both human and non-human primates and is proposed to support social perception, particularly the visual processing of biological motion such as dynamic faces. The TVP projects along the lateral surface of the brain from V1 to the anterior temporal lobe via the superior temporal sulcus, a region known for its role in the integration of multimodal information. The aims of the current study are to describe the neural circuitry underlying visual speech perception, to explore the possible recruitment of the TVP for multimodal integration of speech, and to disentangle the contribution of each hemisphere to this process. We designed a lip-reading task in which participants were shown video clips of silently spoken words and had to identify the target word among three written distractor words presented subsequently, which differed in their visemic distance from the target. We demonstrated that participants discriminated target words above chance and that error rate significantly decreased with increasing visemic distance from the target word. We then acquired functional (task and resting-state) and diffusion MRI data from 24 healthy participants. Functional connectivity analyses of the task data are being performed to contrast the brain activity elicited by the perception of faces silently articulating words versus faces producing backward speech or static faces. Resting-state functional connectivity and tractography analyses will also be performed and correlated with performance on the behavioral lip-reading task.
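For illustration only, the sketch below shows in Python how the two behavioral results described above could be tested: above-chance identification of the silently spoken words, and the decrease in error rate with increasing visemic distance. All numbers, the number of response options, and the distance binning are assumptions for the example, not values from the study.

```python
# Minimal sketch (hypothetical data) of the two behavioral analyses:
# (1) above-chance identification of silently spoken words,
# (2) error rate decreasing with visemic distance.
# All values below are illustrative assumptions, not study data.

import numpy as np
from scipy.stats import binomtest, linregress

# --- (1) Above-chance discrimination --------------------------------------
n_trials = 120                   # assumed number of trials per participant
n_correct = 78                   # assumed number of correct responses
n_options = 4                    # assumed: target plus three written distractors
chance = 1.0 / n_options

result = binomtest(n_correct, n_trials, chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.2g}")

# --- (2) Error rate vs. visemic distance -----------------------------------
# Assume error rates binned by the visemic distance between target and distractors.
visemic_distance = np.array([1, 2, 3, 4])          # assumed distance bins
error_rate = np.array([0.45, 0.33, 0.21, 0.12])    # illustrative error rates

fit = linregress(visemic_distance, error_rate)
print(f"slope = {fit.slope:.3f}, p = {fit.pvalue:.2g}")
# The reported pattern corresponds to a significant negative slope:
# fewer errors as distractors become visually more distinct from the target.
```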

Topic Areas: Multisensory or Sensorimotor Integration, Speech Perception
