Poster D38, Wednesday, August 21, 2019, 5:15 – 7:00 pm, Restaurant Hall

No language unification without neural feedback: How awareness affects combinatorial processes

Valeria Mongelli1,2,3, Erik L. Meijs4, Simon van Gaal2,3,4, Peter Hagoort1,4; 1Neurobiology of Language Department, Max Planck Institute for Psycholinguistics; 2Department of Psychology, University of Amsterdam; 3Amsterdam Brain and Cognition (ABC), University of Amsterdam; 4Donders Institute for Brain, Cognition and Behaviour, Radboud University

How does the human brain combine a finite number of words to form an infinite variety of sentences? Describing the neural network that subserves sentence processing, and what differentiates this network from single word processing, is a major, still unresolved challenge in brain research. According to the Memory, Unification and Control (MUC) model, both semantic and syntactic combinatorial processes require feedback from the left inferior frontal cortex (LIFC) to the left posterior temporal cortex (LPTC). Single word processing, however, may require only feedforward propagation of linguistic information from sensory regions to the LPTC. Here, we tested this core prediction of the MUC model by reducing visual awareness of words using a masking technique. Masking disrupts long-range feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that, at least under certain conditions, masked single words still elicit modulations of ERP components such as the N400, a neural signature of linguistic incongruity. However, whether multiple words can be combined to form a sentence under reduced levels of awareness remains controversial. To investigate this issue, we performed four experiments in which we measured electroencephalography while 40 subjects performed a masked priming task. In Experiments 1 and 2, we investigated semantic combinatorial processing. Masked or unmasked words were presented either successively or simultaneously, forming a short sentence (e.g. man pushes woman) that could be congruent or incongruent with a target picture. This sentence condition was compared with a single word condition, in which single words (e.g. man) were followed by congruent or incongruent pictures. Experiments 3 and 4 aimed to test syntactic combinatorial processing. A masked/unmasked prime (e.g. he) was followed by an unmasked target (e.g. drives), forming syntactically correct or incorrect combinations (he drives vs. he *drive). This sentence condition was compared with a typical semantic priming task, in which masked/unmasked primes and unmasked targets formed congruent or incongruent combinations (e.g. cat-dog vs. cat-nurse). Overall, we found that both semantic and syntactic combinatorial processes were impaired when long-range feedback was disrupted by masking: no ERP modulation was found in the masked sentence condition in any of the experiments. In contrast, in all unmasked sentence conditions incongruent trials triggered an N400 effect. We also found an N400 effect in the masked single word condition of Experiments 1 and 2, but not in Experiments 3 and 4, suggesting that experimental settings strongly affect masked single word priming. Our results suggest that feedback from the LIFC to the LPTC is required for semantic and syntactic combinatorial processes but not for single word processing, supporting a core prediction of the MUC model. These findings provide an important contribution to ongoing debates about the specific roles of different brain regions in a distributed language network.

Themes: Meaning: Combinatorial Semantics, Meaning: Lexical Semantics
Method: Electrophysiology (MEG/EEG/ECOG)
