Long-distance dependencies in Chinese, English, and French brains

Poster A18 in Poster Session A, Thursday, October 6, 10:15 am - 12:00 pm EDT, Millennium Hall

Donald Dunagan1, Maximin Coavoux2, Shulin Zhang1, Shohini Bhattasali3, Jixing Li4, Christophe Pallier5, Nathan Spreng6, Jonathan Brennan7, John Hale1; 1University of Georgia, 2Université Grenoble Alpes, 3University of Toronto, 4City University of Hong Kong, 5Cognitive Neuroimaging Unit, INSERM-CEA, 6McGill University, 7University of Michigan

In natural language, words can occur arbitrarily far from the position in the sentence where they (intuitively) make their meaning contribution. While such long-distance dependencies have been studied extensively, this study investigates the brain bases of two specific types, WH-questions and object-extracted relative clauses, using translation-equivalent naturalistic texts in Chinese, English, and French as stimuli.

DATA: The fMRI data analyzed are The Little Prince Datasets [1], in which Chinese, English, and French participants are scanned while listening to an audiobook of a children's story in their native language.

METHODS: BOLD time series are extracted for twenty-four left-hemisphere language-network ROIs from the Human Connectome Project Multi-Modal Parcellation 1 [2]. The selected ROIs cover the inferior frontal gyrus, lateral temporal lobe, and temporoparietal cortex. Several word-by-word metrics are defined to capture different aspects of language comprehension. The storybook texts are parsed with near-state-of-the-art parsers for their respective languages, and the parse trees are used to identify and label WH-question and object-relative constructions: beginning at the filler and ending at the gap site, each word inside a long-distance dependency is annotated with a 1, while all other words receive a 0. The parse trees are also used to compute a bottom-up processing metric corresponding to the number of reduce operations in a shift-reduce parser. Additionally, large autoregressive transformer language models on the scale of GPT-2 [3], trained on 14, 40, and 60 GB of data for Chinese, English, and French, respectively, are used to calculate word-by-word surprisal. Following other neurocomputational models [see, e.g., 4], a Bayesian linear regression is fit for each ROI in each language. Regressors of non-interest include spoken word rate, log lexical frequency, speaker pitch, and the root-mean-squared amplitude of the narration; word-by-word regressors of non-interest include the bottom-up processing metric and language-model surprisal. The regressors of interest are the word-by-word object-relative and WH-question metrics. Model results are aggregated across the three languages to identify cross-linguistic similarities and differences in the neural correlates of long-distance dependency processing.

RESULTS: Even after including coregressors that account for lower-level linguistic processing, bottom-up syntactic construction, and word-by-word surprisal, a large portion of the left-lateralized language network is implicated in the processing of long-distance dependencies, reaffirming the cognitive demand these construction types impose on the brain's language network. In all three languages, object-relative processing is associated with increased activity in the left middle and posterior temporal lobe as well as left temporoparietal cortex. For WH-question processing, all three languages show increased activation in the left inferior frontal gyrus and the left posterior superior temporal sulcus. These results are interpreted with respect to syntactic processing in the middle and posterior temporal lobe [5], argument storage in temporoparietal cortex [6], and argument (re)analysis in the inferior frontal gyrus [7].
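To make the annotation scheme concrete, the following is a minimal sketch, assuming each dependency has already been reduced to a (filler, gap) pair of word indices recovered from the parse trees; the function name and interface are illustrative, not the authors' code.

```python
# Minimal sketch of the 1/0 span annotation, assuming each dependency
# is given as a (filler_index, gap_index) word-index pair recovered
# from the parse tree. Names and interface are illustrative.
def annotate_dependencies(n_words, spans):
    """Mark every word from the filler to the gap site (inclusive) with 1."""
    labels = [0] * n_words
    for filler, gap in spans:
        lo, hi = sorted((filler, gap))
        for i in range(lo, hi + 1):
            labels[i] = 1
    return labels

# One dependency spanning words 0 through 4 (indices illustrative):
print(annotate_dependencies(7, [(0, 4)]))  # [1, 1, 1, 1, 1, 0, 0]
```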
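The bottom-up metric can be sketched in a similar hedged way: the number of reduce operations a shift-reduce parser performs after consuming a word equals the number of constituents that close at that word, i.e. the closing brackets following it in a bracketed parse. The counting details in the study's actual pipeline may differ.

```python
import re

def reduce_counts(bracketed):
    """Reduce operations per word in a bottom-up shift-reduce parse,
    computed as the number of closing brackets after each terminal.
    Category labels immediately follow '(' and are skipped."""
    tokens = re.findall(r"\(|\)|[^()\s]+", bracketed)
    counts, prev = [], None
    for tok in tokens:
        if tok == ")" and counts:
            counts[-1] += 1   # a constituent completes (one reduce)
        elif tok != "(" and tok != ")" and prev != "(":
            counts.append(0)  # a terminal word is shifted
        prev = tok
    return counts

print(reduce_counts("(S (NP (DT the) (NN cat)) (VP (VBD slept)))"))
# -> [1, 2, 3]
```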
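Word-by-word surprisal from an autoregressive transformer can be illustrated with the publicly available English GPT-2 via the Hugging Face transformers library, as a stand-in for the study's language-specific models; in practice, word-level surprisal is obtained by summing over a word's subword tokens.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(text):
    """Surprisal (in bits) of each token given its left context."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    results = []
    for pos in range(1, ids.size(1)):
        # The distribution over the token at `pos` sits at position pos - 1.
        lp = log_probs[0, pos - 1, ids[0, pos]]
        results.append((tokenizer.decode(ids[0, pos]),
                        float(-lp / torch.log(torch.tensor(2.0)))))
    return results

print(token_surprisals("The little prince came from a small planet."))
```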
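Finally, the per-ROI regression can be sketched with scikit-learn's BayesianRidge as a stand-in for the study's actual Bayesian model; in a real analysis each word-by-word regressor would first be convolved with a hemodynamic response function and aligned to scan times. The data below are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
n_scans, n_regressors = 300, 8   # word rate, log frequency, pitch, RMS
                                 # amplitude, bottom-up count, surprisal,
                                 # object-relative flag, WH-question flag
X = rng.standard_normal((n_scans, n_regressors))  # placeholder design matrix
y = rng.standard_normal(n_scans)                  # placeholder BOLD series

model = BayesianRidge().fit(X, y)
print(model.coef_)  # posterior-mean weights; the last two columns stand in
                    # for the object-relative and WH-question effects
```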

Topic Areas: Computational Approaches, Syntax