
Poster A44, Wednesday, November 8, 10:30 – 11:45 am, Harborview and Loch Raven Ballrooms

Different contextual effects modulate the representation of word meaning in the human brain

Christine Tseng1, Leila Wehbe1, Fatma Deniz1, Jack Gallant1; 1University of California, Berkeley

Context crucially affects how the human brain processes words. Neuroimaging studies have shown that words in sentences elicit more brain activity than isolated words or meaningless sentences [1], and that brain activity elicited by narratives is more widespread than activity elicited by isolated words [2]. However, it is unclear whether these contextual effects reflect semantic context or linguistic structure. For example, consider the brain representation of the word “apples” presented in isolation, versus in the sentence “She preferred unripe apples to juicy oranges.” The sentence clearly provides a richer semantic context than the word presented alone, and this might affect how the brain represents the concept “apples.” Alternatively, the linguistic structure of the sentence confers a particular meaning on “apples” that is not present when the words appear in a different order, and this might also affect the brain representation of “apples.” To characterize the effects of semantic context and linguistic structure, we designed an fMRI experiment with four stimulus conditions: Single Words: words randomly sampled from ten narratives from [2]; Blocks: groups of 114 conceptually similar words sampled from 12 clusters (created by projecting the narrative words into a word co-occurrence space); Sentences: 231 factual sentences; and Narratives: ten narratives from [2]. The Blocks condition is crucial because it provides semantic context without linguistic structure. Thus, comparing activity elicited by Blocks with activity elicited by Single Words reveals how semantic context affects the brain representation of words. Analogously, comparing brain activity elicited by Blocks with activity elicited by Sentences and Narratives reveals the effect of linguistic structure. Before data collection, we constructed a semantic feature space based on word co-occurrence statistics calculated over a large text corpus, and we projected all stimulus words into this space.
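The idea of projecting words into a co-occurrence space can be illustrated with a minimal sketch. This is not the authors' pipeline: the toy corpus, the sentence-level co-occurrence window, and the cosine-similarity comparison are all illustrative assumptions; the study used counts over a large text corpus.

```python
import numpy as np
from itertools import combinations

# Toy corpus (hypothetical stand-in for the large text corpus in the study).
corpus = [
    "she ate ripe apples and sweet oranges",
    "unripe apples taste sour like green oranges",
    "the car engine roared down the road",
    "a fast car sped along the open road",
]

# Build a symmetric word co-occurrence matrix over sentence-level windows.
vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for sentence in corpus:
    for w1, w2 in combinations(sentence.split(), 2):
        cooc[index[w1], index[w2]] += 1
        cooc[index[w2], index[w1]] += 1

def embed(word):
    """Project a word into the co-occurrence feature space."""
    return cooc[index[word]]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Words that share contexts land close together in this space,
# which is what makes the conceptually similar Blocks clusters possible.
print(cosine(embed("apples"), embed("oranges")))  # relatively high
print(cosine(embed("apples"), embed("road")))     # relatively low
```

In the same spirit, clustering such co-occurrence vectors (e.g., with k-means) would yield the kind of conceptually coherent word groups used in the Blocks condition.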
Then we collected six hours of fMRI data for each subject. (In all conditions, words were presented one at a time using rapid serial visual presentation, RSVP.) Finally, for each condition, we modeled every voxel in every individual subject as a linear sum of weighted features. We used these models to predict voxel responses in a separate validation dataset with a different set of words. Prediction performance was quantified as the correlation between predicted and measured brain responses. This voxel-wise modeling approach identifies semantically selective voxels across the entire cerebral cortex for each individual subject and experimental condition. We find that most voxels in bilateral temporal, parietal, and prefrontal cortex are semantically selective in the Narratives, Sentences, and Blocks conditions. In contrast, very few voxels show semantic selectivity in the Single Words condition. Prediction performance is significantly higher for Blocks than for Single Words. Because the Single Words condition lacks semantic context, we conclude that semantic context affects how the brain represents word meaning. However, prediction performance for Blocks is overall lower than for Sentences, which in turn is lower than for Narratives. Because the Blocks condition lacks linguistic structure, we conclude that linguistic structure also modulates the brain representation of word meaning. REFERENCES: [1] Fedorenko et al., Neuropsychologia, 2012; [2] Huth et al., Nature, 2016
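The voxel-wise modeling and validation procedure can be sketched as follows. This is a minimal simulation, not the study's analysis: the dimensions, the simulated features and responses, and the choice of ridge regularization are all assumptions (the abstract says only that each voxel was modeled as a linear sum of weighted features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: stimulus words x semantic features x voxels.
n_train, n_test, d, v = 200, 50, 10, 5

# Simulated semantic features for training words and held-out validation words.
X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))

# Simulated voxel responses: a linear function of the features plus noise.
true_weights = rng.standard_normal((d, v))
Y_train = X_train @ true_weights + 0.5 * rng.standard_normal((n_train, v))
Y_test = X_test @ true_weights + 0.5 * rng.standard_normal((n_test, v))

# Fit one regularized (ridge) linear model per voxel, in closed form.
alpha = 1.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d), X_train.T @ Y_train)

# Prediction performance: correlation between predicted and measured responses
# on the held-out validation set, computed separately for each voxel.
Y_pred = X_test @ W
perf = np.array([np.corrcoef(Y_pred[:, i], Y_test[:, i])[0, 1] for i in range(v)])
print(perf)  # one correlation per voxel; well-predicted voxels score high
```

Voxels whose held-out correlation exceeds a significance threshold would be called semantically selective, and comparing these per-voxel correlations across conditions corresponds to the Blocks-versus-Single-Words and Blocks-versus-Sentences contrasts described above.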

Topic Area: Meaning: Lexical Semantics
