Slide Slam


Slide Slam B3

Cortical Representations of Concrete and Abstract Concepts in Language Combine Visual and Linguistic Representations

Slide Slam Session B, Tuesday, October 5, 2021, 12:30 - 3:00 pm PDT

Jerry Tang1, Amanda LeBel1, Alexander G. Huth1; 1University of Texas at Austin

To process natural language, the human brain relies on semantic representations that store knowledge acquired through perception and language. Some previous neuroimaging studies have found that semantic representations reflect perceptual properties of concepts, while others have found that semantic representations reflect word associations learned from language. However, little is known about how perceptual and linguistic information are combined in each semantically selective cortical region, and whether different concepts are represented by different amounts of perceptual and linguistic information.

To address these issues, we constructed computational models of how visual and linguistic information combine to form semantic representations. We modeled visual representations using image embeddings extracted from a convolutional neural network. A novel propagation method was used to model abstract words by combining image embeddings of associated concrete words. We modeled linguistic representations using distributional word embeddings that capture word co-occurrence statistics across a large corpus. We then combined the visual and linguistic embeddings for each word in different ratios to create semantic embedding spaces, which model different hypotheses for how visual and linguistic information are combined in the semantic system.

We compared the different semantic embedding spaces to concept representations in each cortical region using a natural language fMRI experiment. Subjects listened to 5 hours of narrative stories from The Moth Radio Hour, and voxelwise encoding models were estimated to predict BOLD responses for each subject from the stimulus words. Comparing encoding model performance, we found that cortical regions near the visual system represent concepts by combining visual and linguistic information, while regions near the language system represent concepts using mostly linguistic information.
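The two modeling steps described above — propagating image embeddings to abstract words through their linguistic associations, and combining visual and linguistic embeddings in different ratios — could be sketched roughly as follows. This is a minimal illustration with toy vectors, not the authors' implementation: the function names, the top-k similarity-weighted averaging scheme, and the use of scaled concatenation for the ratio mixing are all assumptions.

```python
import numpy as np

def propagate_visual(word, image_emb, ling_emb, k=5):
    """Approximate a visual embedding for an abstract word as a
    similarity-weighted average of the image embeddings of its k most
    linguistically associated concrete words (hypothetical scheme)."""
    v = ling_emb[word]
    sims = {}
    for w, u in ling_emb.items():
        if w != word and w in image_emb:  # only concrete words have images
            sims[w] = float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))
    top = sorted(sims, key=sims.get, reverse=True)[:k]
    weights = np.array([max(sims[w], 0.0) for w in top])
    weights = weights / (weights.sum() + 1e-9)
    return sum(wt * image_emb[w] for wt, w in zip(weights, top))

def mix_embeddings(visual, linguistic, alpha):
    """Concatenate unit-normalized visual and linguistic embeddings,
    scaled by alpha and (1 - alpha). Sweeping alpha from 0 to 1 yields a
    family of hypothesis spaces from purely linguistic to purely visual."""
    v = visual / np.linalg.norm(visual)
    l = linguistic / np.linalg.norm(linguistic)
    return np.concatenate([alpha * v, (1.0 - alpha) * l])
```

Concatenation (rather than addition) lets the two embedding types keep different dimensionalities while the scaling controls how much variance each contributes to the combined space.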
Assessing individual concept representations near visual cortex, we found that more concrete concepts contain more visual information. Notably, however, many abstract concepts also contain some visual information, inherited from linguistically associated concrete concepts. Our results provide a computational account of how visual and linguistic information are combined to represent concrete and abstract semantic concepts across cortex.
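The voxelwise encoding approach used to compare the embedding spaces — fitting regularized linear models from stimulus features to BOLD responses, then scoring predictions on held-out data — can be sketched as below. This is a toy closed-form ridge regression on synthetic arrays; the actual analysis pipeline (feature construction from the stimulus words, hemodynamic lagging, cross-validated regularization) is not reproduced, and all names here are assumptions.

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: weights mapping stimulus features
    X (time x features) to BOLD responses Y (time x voxels)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def encoding_performance(X_train, Y_train, X_test, Y_test, lam=1.0):
    """Per-voxel Pearson correlation between predicted and held-out
    responses; higher values mean the feature space better explains
    that voxel's activity."""
    W = fit_ridge(X_train, Y_train, lam)
    pred = X_test @ W
    pred_z = (pred - pred.mean(0)) / pred.std(0)
    true_z = (Y_test - Y_test.mean(0)) / Y_test.std(0)
    return (pred_z * true_z).mean(0)  # one correlation per voxel
```

Comparing these per-voxel scores across feature spaces built with different visual-to-linguistic ratios is what would let one ask which mixture best predicts each cortical region.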
