Speech processing beyond linguistic contents: a model-guided MEG study

Poster C72 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Yaqing Su1,2, Itsaso Olasagasti1,2, Anne-Lise Giraud1,2,3; 1University of Geneva, Geneva, Switzerland, 2Swiss National Centre of Competence in Research “Evolving Language”, 3Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, Paris, France

The use of language, in both reception and production, generally goes beyond linguistic forms and engages internal representations of the world that are shared across cognitive domains to guide behavior. To better understand the interplay between language and internal world representations, it is important to study the neural processing of language when it involves extracting information beyond linguistic content. However, investigations into the neural substrates of human language have largely focused on identifying neurophysiological landmarks of specific linguistic processes, such as semantics and syntax, while ignoring the behavioral relevance of language. Recent neurophysiological studies that probe language comprehension beyond its linguistic aspects are mostly based on functional magnetic resonance imaging (fMRI) during reading comprehension. Notably, a line of research from Fedorenko and colleagues shows that, on the one hand, language-specific processing is carried out in a neuronal network that is dissociable from networks involved in domain-general behavioral tasks. On the other hand, these networks, especially the language and default-mode networks, are co-activated during language comprehension tasks that require the inference of non-linguistic information. However, because of the limited temporal resolution of fMRI, such co-activation cannot reveal the real-time interactions among the involved brain areas, interactions that are particularly crucial for understanding the processing of highly dynamic and ambiguous speech signals. To fill this gap, we devised a magnetoencephalography (MEG) study guided by a computational model of hierarchical information passing, aiming to explore the neural-computational mechanisms that subserve the dynamic extraction of non-linguistic information from natural speech signals. During the MEG experiment, subjects perform several speech comprehension tasks that engage distinct internal information-passing models, each focused on extracting a different aspect of non-linguistic information. By contrasting neural responses across task conditions, our primary goal is to identify the spatial, temporal, and spectral characteristics of neural information passing between linguistic and non-linguistic processing. We present our pilot results and discuss plans for further analysis.
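The abstract does not specify an analysis pipeline, so the following is only a minimal sketch of what a condition contrast over temporal and spectral dimensions could look like in MNE-Python. It uses synthetic data and hypothetical condition labels (task_linguistic, task_nonlinguistic); these names and the analysis choices are assumptions for illustration, not the authors' methods.

```python
# Minimal sketch (not the authors' pipeline): contrast MEG responses between
# two hypothetical task conditions, in time (evoked difference) and in
# frequency (Morlet time-frequency power), using synthetic data.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

rng = np.random.default_rng(0)
sfreq, n_epochs, n_channels, n_times = 250.0, 40, 10, 500  # 2-s epochs

# Synthetic magnetometer epochs standing in for real MEG recordings
info = mne.create_info([f"MEG{i:03d}" for i in range(n_channels)],
                       sfreq, ch_types="mag")
data = rng.normal(scale=1e-13, size=(n_epochs, n_channels, n_times))

# Alternate epochs between the two hypothetical task conditions
events = np.column_stack([np.arange(n_epochs) * n_times,
                          np.zeros(n_epochs, dtype=int),
                          np.tile([1, 2], n_epochs // 2)])
event_id = dict(task_linguistic=1, task_nonlinguistic=2)
epochs = mne.EpochsArray(data, info, events=events, tmin=-0.5,
                         event_id=event_id)

# Temporal characteristics: evoked-response contrast between conditions
evoked_diff = mne.combine_evoked(
    [epochs["task_linguistic"].average(),
     epochs["task_nonlinguistic"].average()],
    weights=[1, -1])

# Spectral characteristics: Morlet time-frequency power contrast
freqs = np.arange(4.0, 40.0, 2.0)
tfr_a = tfr_morlet(epochs["task_linguistic"], freqs=freqs,
                   n_cycles=freqs / 2.0, return_itc=False)
tfr_b = tfr_morlet(epochs["task_nonlinguistic"], freqs=freqs,
                   n_cycles=freqs / 2.0, return_itc=False)
power_contrast = tfr_a.data - tfr_b.data  # (n_channels, n_freqs, n_times)

print(evoked_diff)
print(power_contrast.shape)
```

With real data, the same contrast objects could be taken to source space for the spatial dimension of the comparison; that step is omitted here because it depends on the authors' source-modeling choices.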

Topic Areas: Signed Language and Gesture, Meaning: Combinatorial Semantics
