
Poster C41, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

Language and multiple demand regions jointly predict individual differences in sentence comprehension: Evidence from a network approach

Qiuhai Yue, Randi C. Martin, Simon Fischer-Baum, Michael W. Deem; Rice University, Houston, TX, USA

Recent neuroimaging studies have shown that domain-specific language regions (e.g., those activated by a contrast of simple sentence vs. nonword processing conditions), which lie in the temporal and ventral frontal lobes, dissociate from domain-general multiple demand (MD) regions (e.g., those activated by hard vs. easy working memory conditions) in the dorsal frontal and parietal lobes (Fedorenko et al., 2011; 2013). Thus, some have claimed that the neural substrate for typical sentence processing does not involve MD regions, with domain-general regions activated only by complicated sentence structures (e.g., strong garden paths, object relatives) rarely encountered in natural conversation. The current project addresses these claims by examining whether individual differences in the comprehension of unambiguous sentences with commonly encountered structures can be predicted by measures of brain network connectivity. Specifically, building on recent work linking lower network modularity to better performance on complex tasks (Yue et al., in press), we investigated whether individual differences in sentence comprehension are better predicted by modularity defined solely over the language network or by modularity defined over a network comprising both language-specific and domain-general MD nodes. Our sentences contained a main clause and a subordinate clause (subject relative or sentence complement) that separated the subject and verb of the main clause (e.g., “The surgeon who operated with the difficult nurse last night complained”). Following Tan, Martin, and Van Dyke (2017), we manipulated sentence difficulty by varying whether the noun in the subordinate clause was semantically plausible or implausible as the agent of the main clause verb (e.g., “tool” vs. “nurse”; low vs. high semantic interference) and whether the noun in the subordinate clause was a prepositional object or another subject (e.g., subject: “The doctor who said the nurse was difficult last night complained”; low vs. high syntactic interference). Reading times for sentences, and response times (RTs) and accuracy for comprehension questions, were collected. For the brain network analysis, modularity was estimated for each of 40 subjects from the resting-state functional connectivity (correlation) matrix among nodes of the language network alone, the MD network alone, the combination of the two, and the whole brain. Replicating prior results (Van Dyke, 2007; Tan et al., 2017), significant semantic and syntactic interference effects were observed in sentence reading and question answering times. Importantly, better performance in resolving semantic interference in sentence reading times was associated with lower modularity of the network combining language and MD regions (r=0.407, p=0.009), but was unrelated to the modularity of the language network per se (r=-0.06, p=0.7). At the whole-brain level, lower modularity correlated significantly with better resolution of syntactic interference in sentence reading times (r=0.326, p=0.04), and marginally with lower semantic interference in question RTs (r=0.295, p=0.06), implying that regions outside the language and MD networks are also recruited in sentence processing. Taken together, our findings argue against claims that only language-specific regions are involved in sentence comprehension. The current study provides a novel framework for investigating the neural basis of language processing from a network perspective.
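For readers unfamiliar with the modularity measure used here, the sketch below shows the standard Newman modularity Q computed over a thresholded connectivity matrix. The abstract does not specify the thresholding or community-detection details, so the adjacency matrix, the threshold, and the two-community partition (standing in for language vs. MD node sets) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q for an undirected, unweighted adjacency matrix A
    and a community assignment `labels` (one integer per node):
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)        # node degrees
    two_m = k.sum()          # 2 * number of edges
    same = np.equal.outer(labels, labels)   # True where nodes share a community
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Toy network: two 3-node cliques joined by one bridge edge; the two cliques
# play the role of (hypothetical) language and MD node sets obtained after
# thresholding a resting-state correlation matrix.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

labels = np.array([0, 0, 0, 1, 1, 1])
print(round(modularity(A, labels), 3))  # → 0.357
```

Higher Q means connections concentrate within communities; the abstract's key result is that lower Q (weaker segregation) over the combined language+MD network predicted better resolution of semantic interference.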

Topic Area: Meaning: Combinatorial Semantics
