Poster A35, Tuesday, August 20, 2019, 10:15 am – 12:00 pm, Restaurant Hall

Mapping Multimodal Convergence Zones Using Representational Similarity Analysis and a High-Dimensional Grounded Model of Conceptual Content

Jiaqing Tong, Leonardo Fernandino, Colin Humphries, Lisa Conant, Joseph Heffernan, Jeffrey Binder; Medical College of Wisconsin

The nature of conceptual representations in the brain is a topic of ongoing debate. There is strong evidence that concept retrieval entails varying degrees of activation of sensory-motor cortical areas, depending on the experiential content of the concept. Presumably, however, these experiential representations become progressively more abstract at higher processing levels as featural information is combined within and across modalities. The aim of the current study was to test whether this abstraction process involves loss of experiential information, i.e., conversion of a multimodal representation to an amodal one. We used representational similarity analysis (RSA) of fMRI data to identify cortical areas that encode conceptual similarity structure as defined by a combination of 65 sensory-motor-affective-spatial-temporal-cognitive experiential dimensions (Binder et al., 2016). The existence of such regions would call into question the assumption that multimodal abstraction necessarily entails a complete loss of experiential information.

Methods: Nineteen healthy, right-handed, native English speakers were shown 242 English words (141 nouns, 62 verbs, and 39 adjectives) during 3T BOLD fMRI using a fast event-related design. Each stimulus was presented visually 6 times over 2 scanning sessions. Participants were asked to think about the meaning of each word. To encourage compliance, on 10% of trials a semantic decision task was presented after the stimulus, in which the participant saw 2 words and had to choose the word most similar in meaning to the previous stimulus. A single general linear model of the BOLD signal included each of the words as 242 regressors of interest. This analysis generated beta coefficient maps for each word in each participant, which were then registered to the Human Connectome Project surface template.
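The single-GLM step above amounts to estimating one beta coefficient map per word via least squares. A minimal numpy sketch with simulated data (the dimensions and random inputs here are illustrative only; the actual analysis used standard fMRI modeling with convolved event regressors and nuisance terms):

```python
import numpy as np

# Illustrative dimensions: timepoints x word regressors x cortical vertices
n_timepoints, n_words, n_vertices = 1000, 242, 500

rng = np.random.default_rng(0)
# Design matrix: one regressor of interest per word (in practice, stimulus
# onsets convolved with a hemodynamic response function)
X = rng.standard_normal((n_timepoints, n_words))
# Simulated BOLD time series, one column per surface vertex
Y = rng.standard_normal((n_timepoints, n_vertices))

# OLS fit of the single GLM: rows of `betas` are per-word coefficient maps
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(betas.shape)  # (242, 500): one beta map per word
```

Each row of `betas` corresponds to one word's activation pattern, which (after registration to the surface template) feeds into the searchlight RSA.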
A surface-based RSA searchlight procedure was used to identify cortical regions in which activation patterns encoded information about the words. Neural dissimilarity matrices (DSMs) were computed for 5 mm radius patches around each vertex on the cortical surface. The conceptual DSM, based on the 65-dimensional experiential model of concept representation, was computed from the pair-wise cosine similarities between the vector representations of the 242 test concepts. The Pearson correlation between the neural DSM and the conceptual DSM was computed for each surface patch. Correlation values were then converted to Fisher Z scores, and a group-level t-test of these values (against zero) was computed. The thresholded (p < .001) t map was corrected for multiple comparisons via permutation testing.

Results: Semantic similarity computed from the experiential model was significantly correlated with neural similarity in multiple heteromodal regions, including bilateral lateral and ventral temporal cortex, bilateral angular gyrus, left inferior frontal gyrus pars triangularis, bilateral superior frontal gyrus, left medial prefrontal cortex, and bilateral posterior cingulate gyrus and precuneus.

Discussion: The neural representation of concepts in heteromodal cortical areas reflects multi-dimensional experiential information. These results call into question the existence of completely amodal concept representations.

Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L., Simons, S. B., Aguilar, M., & Desai, R. H. (2016). Toward a brain-based componential semantic representation. Cognitive Neuropsychology, 33, 130-174.

Themes: Meaning: Lexical Semantics, Multisensory or Sensorimotor Integration
Method: Functional Imaging
