Poster C14, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

Localizing Structure-building and Memory Retrieval in Naturalistic Language Comprehension

John Hale1, Shohini Bhattasali1, Jonathan R. Brennan2, Jixing Li1, Wen-Ming Luh1, Christophe Pallier3, R. Nathan Spreng1; 1Cornell University, 2University of Michigan, 3INSERM-CEA Cognitive Neuroimaging Unit

Introduction: Our human ability to comprehend natural language probably relies upon at least two cognitive processes. One involves retrieval of memorized elements, while the other encompasses some sort of structural composition. Despite a growing body of work on the brain's language network, the precise manner in which these hypothesized operations are realized across brain regions remains unknown. This study contributes a localization of these operations, using time-series predictors that formalize both retrieval and structure-building, applied to the analysis of data from a naturalistic listening scenario. Retrieval is formalized here using "multiword expressions" (MWEs). This term from computational linguistics refers very generally to non-compositional expressions: "expressions for which the syntactic or semantic properties of the whole expression cannot be derived from its parts" (Sag et al., 2002). In this study, MWEs were located using a statistical tagger trained on examples from the English Web Treebank (LDC2012T13). Structure-building is formalized using a standard bottom-up parsing algorithm (see Hale, 2014). We computed the number of parser actions that would be required, word by word, to build the correct phrase structure tree as determined by the Stanford parser (Klein & Manning, 2003). We regressed these word-by-word predictors against fMRI timecourses recorded during passive story-listening in a whole-brain analysis. The results implicate bilateral superior temporal gyrus (STG) for structure-building and the angular gyrus (AG) for memory retrieval. Both regressors also activate frontal regions, but without overlap.

Methods: Participants (n=37, 24 female) listened to a spoken recitation of The Little Prince for 1 hour and 38 minutes across nine separate sections. Participants' comprehension was confirmed through multiple-choice questions administered at the end of each section. BOLD functional scans were acquired using a multi-echo planar imaging (ME-EPI) sequence with online reconstruction (TR = 2000 ms; TEs = 12.8, 27.5, 43 ms; FA = 77 degrees; FOV = 240.0 mm x 240.0 mm; 2x image acceleration; 33 axial slices; voxel size 3.75 x 3.75 x 3.8 mm). Preprocessing was carried out with AFNI version 16 and ME-ICA v3.2 (Kundu et al., 2011). Along with the parsing and MWE regressors of theoretical interest, we entered four nuisance variables into the GLM analysis using SPM12. One regressor marks the offset of each spoken word in time. Another gives the log-frequency of the individual word in movie subtitles (Brysbaert & New, 2009). The last two reflect the pitch (f0) and intensity (RMS) of the talker's voice. These regressors were not orthogonalized in any way.

Results: The statistical map of the fitted coefficient for the bottom-up parsing regressor picks out areas in left posterior STG and left inferior frontal gyrus (IFG) (p < 0.05 FWE). Figure 1 shows both effects, with bottom-up parsing in orange and MWEs in blue.

Conclusion: Memory retrieval for multiword expressions evokes a pattern of activation that is spatially distinct from the pattern evoked by compositional structure-building. Consistent with previous work, this result underlines the multiplicity of brain systems that contribute to language comprehension (Hickok & Poeppel, 2007; Friederici & Gierhan, 2013; Hagoort & Indefrey, 2014).
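To make the retrieval predictor concrete, the sketch below converts MWE tagger output into a word-by-word indicator. The BIO-style tag scheme, the function name, and the decision to flag every word inside an MWE are illustrative assumptions, not details reported in the abstract.

```python
# Hypothetical sketch: turn MWE tagger output into a word-by-word
# retrieval predictor. The study used a statistical tagger trained on the
# English Web Treebank (LDC2012T13); the BIO-style tags and the choice to
# flag every word inside an MWE are assumptions made for illustration.
def mwe_indicator(tagged_words):
    """tagged_words: list of (word, tag) pairs with tags 'B', 'I', or 'O'."""
    return [(word, 1 if tag in ("B", "I") else 0) for word, tag in tagged_words]

print(mwe_indicator([("by", "B"), ("and", "I"), ("large", "I"), ("he", "O")]))
# [('by', 1), ('and', 1), ('large', 1), ('he', 0)]
```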
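The structure-building predictor can be sketched as a bottom-up parser-action count: each word incurs one shift, plus one reduce for every constituent whose rightmost terminal is that word. The code below is a minimal reconstruction of that idea, assuming a bracketed Stanford-parser-style tree and NLTK's Tree class; it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a bottom-up parser-action
# count: each word gets one shift, plus one reduce for every constituent
# that closes (i.e., whose rightmost terminal is reached) at that word.
# Assumes a bracketed phrase-structure parse such as Stanford parser output.
from nltk import Tree

def bottom_up_counts(bracketed_parse):
    tree = Tree.fromstring(bracketed_parse)
    counts = [0] * len(tree.leaves())

    def walk(node, start):
        # Returns the index one past the last terminal spanned by `node`.
        if isinstance(node, str):      # terminal: one shift action
            counts[start] += 1
            return start + 1
        i = start
        for child in node:
            i = walk(child, i)
        counts[i - 1] += 1             # constituent closes at its last word
        return i

    walk(tree, 0)
    return list(zip(tree.leaves(), counts))

print(bottom_up_counts("(S (NP (DT the) (NN prince)) (VP (VBD laughed)))"))
# [('the', 2), ('prince', 3), ('laughed', 4)]
```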
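The regression itself was run in SPM12; the sketch below is only a rough NumPy/SciPy illustration of the general approach: place each word-level value (parser-action count, MWE indicator, log-frequency, f0, RMS intensity) at the word's offset time, convolve with a canonical-style HRF, resample at the TR, and fit a voxel's BOLD time course by ordinary least squares. The helper names, the double-gamma HRF parameterization, and the sampling grid are assumptions, not the study's actual pipeline.

```python
# Rough NumPy/SciPy sketch of amplitude-modulated word-offset regressors
# and an ordinary-least-squares fit; the study itself used SPM12, and the
# simple double-gamma HRF and helper names here are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

TR = 2.0  # seconds, matching the ME-EPI sequence described above

def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Simple double-gamma approximation of a canonical HRF."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def make_regressor(word_offsets, values, n_scans, dt=0.1):
    """Impulses of height `values` at word offsets, convolved with the HRF."""
    high_res = np.zeros(int(round(n_scans * TR / dt)))
    for t, v in zip(word_offsets, values):
        high_res[int(round(t / dt))] += v
    kernel = hrf(np.arange(0.0, 32.0, dt))
    convolved = np.convolve(high_res, kernel)[: len(high_res)]
    return convolved[:: int(round(TR / dt))]   # resample at scan times

def fit_voxel(bold, regressors):
    """OLS fit of one voxel's time course; `regressors` is a list of 1-D arrays."""
    X = np.column_stack(regressors + [np.ones(len(bold))])  # add intercept
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas
```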

Topic Area: Computational Approaches
