Presentation


Do the eyes retune the ears? MEG evidence that sign language knowledge affects how we process spoken language.


Poster E82 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Chiara Luna Rivolta1, Brendan Costello1, Mikel Lizarazu1, Manuel Carreiras1,2,3; 1Basque Center on Cognition, Brain and Language (BCBL), 2Ikerbasque, Basque Foundation for Science, 3Universidad del País Vasco (UPV/EHU)

Neural tracking of the speech envelope seems to support spoken language processing and comprehension. Linguistic information is not limited to the auditory domain and may also be transmitted and perceived through the visual modality, either accompanying the speech signal (visual information on the face and mouth, as well as co-speech gestures) or as an autonomous language code (signed languages). In a previous study we showed that signers’ neural activity tracks the visual linguistic signal during sign language processing, although this tracking is qualitatively and quantitatively different from entrainment to spoken language. In that study, we compared cortical tracking of signed and spoken languages. To characterise the temporal regularities of the visual linguistic signal we used markerless motion tracking. A custom-built Kinect setup allowed us to record three-dimensional motion information from different points on the body, including the hands, the head and the torso. This Kinect setup was used to record videos of native speakers/signers producing narratives in four different languages (Spanish, Russian, Spanish Sign Language - LSE, and Russian Sign Language), and these videos served as stimuli for the experiment. Two groups of hearing participants – 15 proficient LSE signers and 15 matched controls with no knowledge of sign language – watched videos in all four languages while we recorded their neural activity using magnetoencephalography (MEG). We calculated coherence between the preprocessed MEG data and the linguistic signal (the speech envelope for spoken languages and the speed vector of the right hand for signed languages) and used cluster-based permutation tests to assess statistical differences across experimental conditions. Unexpectedly, when comparing bimodal bilinguals and sign-naive controls, the former group showed stronger entrainment to spoken language (mainly in the theta frequency band). Both groups had similar spoken language experience (all participants were native Spanish speakers and had no knowledge of Russian), suggesting that signers’ familiarity with a visual language affects how they process spoken language. We plan to explore two possible explanations for this result in follow-up analyses. In our study, the spoken language stimuli included visual information: participants could see the speaker’s facial movements and co-speech gestures. Sign language knowledge may aid (hearing) signers in picking up and synchronising to the visual information available while attending to speech, supporting the tracking of body articulators or facial information (or both). To test this hypothesis, we will use the previously collected data to analyse coherence between the MEG data and hand and mouth movements during speech, and to examine the coherence between the auditory speech signal and the different visual articulators. Based on the results for neural tracking of sign language, we expect signers to show higher coherence with the right hand than sign-naive controls; furthermore, since signers showed more coherence to speech in the theta band, we anticipate greater coherence with mouth movements. This result may provide evidence for a cross-modal transfer effect between signed and spoken language: knowing a sign language may change how you perceive and comprehend spoken language.
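
For readers unfamiliar with the coherence measure referred to above, the short Python sketch below illustrates the basic computation on simulated data: magnitude-squared coherence between a toy MEG channel and a toy speech envelope, summarised in the theta band. The sampling rate, signal construction, and band limits are illustrative assumptions; the actual analysis used preprocessed MEG recordings, real stimulus envelopes and hand-speed vectors, and cluster-based permutation statistics.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative sketch only, not the authors' pipeline. All names and
# parameters here are assumptions chosen for the demo.

fs = 200.0                      # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated data
rng = np.random.default_rng(0)

# Toy speech envelope with a theta-rate (5 Hz) syllabic modulation
speech_envelope = 1 + 0.5 * np.sin(2 * np.pi * 5 * t) + 0.2 * rng.standard_normal(t.size)

# Toy MEG channel that partially tracks the envelope, plus sensor noise
meg_channel = 0.8 * speech_envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence between the MEG channel and the envelope
freqs, coh = coherence(meg_channel, speech_envelope, fs=fs, nperseg=int(4 * fs))

# Summarise coherence in the theta band (4-7 Hz), where the signer vs.
# non-signer difference was observed
theta = (freqs >= 4) & (freqs <= 7)
print(f"Mean theta-band coherence: {coh[theta].mean():.3f}")
```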

Topic Areas: Speech Perception, Signed Language and Gesture
