A Shared Representational Code for Object and Event Concepts

Poster E61 in Poster Session E, Saturday, October 8, 3:15 - 5:00 pm EDT, Millennium Hall

Jiaqing Tong1, Leonardo Fernandino1, Stephen Mazurchuk1, Lisa L. Conant1, Jeffrey R. Binder1; 1Medical College of Wisconsin

Introduction: Functional neuroimaging studies have provided evidence that certain regions of the cerebral cortex are differentially activated by object and event concepts, but the mechanisms underlying these differences are still unknown. We hypothesized that these findings reflect systematic differences in the experiential content of these two categories of concepts. We tested this claim using representational similarity encoding of fMRI data, based on a 65-dimensional experiential model of conceptual content. We asked whether an encoding model trained to predict the similarity structure of the neural activation patterns elicited by object concepts would also predict the similarity structure of event concepts, and vice versa. Such a finding would provide strong evidence that both categories are represented by the same underlying experiential dimensions.

Methods: Thirty-nine right-handed English speakers were shown 320 English words, consisting of 40 items in each of 4 event subcategories (negative, social, verbal, nonverbal sound) and 40 items in each of 4 object subcategories (animal, food, tool, vehicle), using a fast event-related design during 3T fMRI. Each stimulus was presented 6 times across 3 sessions on separate days. Participants rated the familiarity of each word on a 1-to-3 scale. MRI data were preprocessed with the HCP pipeline. As a first step, a univariate contrast between the event and object conditions, with numerous lexical variables included as nuisance regressors, was performed to identify distinct event and object networks of interest (NOIs). A second general linear model, with each word as a regressor, was built to generate t-maps for each word. A neural representational dissimilarity matrix (RDM) for each category was computed for each NOI. We then trained an encoding model for each NOI by fitting the 65 experiential model RDMs (one RDM per dimension) to either the object or the event neural RDM, and tested whether the trained model predicted the neural RDM of the other category. Finally, in a separate vertex-wise encoding analysis, we used ridge regression to find the linear combination of experiential dimensions that best predicted the activation amplitude for event trials (160 event words). The weighted model was then used to predict the average object activation amplitude at each vertex. The predicted object activation maps and the event activation maps were then contrasted at the group level, using a one-sample t-test at each vertex, to generate a predicted event-object contrast map. The resulting t-map was visually compared with the observed t-map from the event-object univariate contrast.

Results: Both encoding models significantly predicted the neural similarity structure across categories in both networks (p < .0001). The predicted t-map from the vertex-wise encoding analysis closely resembled the pattern observed in the original univariate contrast between the event and object conditions.

Discussion: These results indicate that object and event concepts rely on a shared representational code based on experiential information, suggesting that differences in neural activation between object and event concepts arise from quantitative differences in the experiential content of these concept categories.
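To make the cross-category RSA encoding step concrete, the following is a minimal sketch in Python (NumPy/SciPy): fit a weighted combination of the 65 single-dimension model RDMs to the neural RDM of one category, then test how well the weighted model predicts the neural RDM of the other category. The array shapes, random placeholder data, and function names are illustrative assumptions, not the authors' actual pipeline, and the parametric Spearman p-value stands in for whatever inference procedure was used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(tmaps):
    """Condensed neural dissimilarity matrix from word-level t-maps.

    tmaps: (n_words, n_vertices) activation patterns within one NOI.
    """
    return pdist(tmaps, metric="correlation")  # 1 - Pearson r per word pair

def fit_encoding_model(model_rdms, train_rdm):
    """Least-squares weights over the 65 single-dimension model RDMs.

    model_rdms: (65, n_pairs) stack of condensed model RDMs
    train_rdm:  (n_pairs,) condensed neural RDM of the training category
    """
    weights, *_ = np.linalg.lstsq(model_rdms.T, train_rdm, rcond=None)
    return weights

# Hypothetical shapes: 160 words per category, 65 dimensions, 5000 vertices.
rng = np.random.default_rng(0)
n_pairs = 160 * 159 // 2
object_tmaps = rng.standard_normal((160, 5000))  # placeholder t-maps
event_tmaps = rng.standard_normal((160, 5000))
model_rdms_obj = rng.random((65, n_pairs))       # per-dimension RDMs (objects)
model_rdms_evt = rng.random((65, n_pairs))       # per-dimension RDMs (events)

# Train on object concepts, test on event concepts (the reverse is symmetric).
w = fit_encoding_model(model_rdms_obj, neural_rdm(object_tmaps))
predicted_event_rdm = w @ model_rdms_evt
rho, p = spearmanr(predicted_event_rdm, neural_rdm(event_tmaps))
print(f"cross-category prediction: rho={rho:.3f}, p={p:.2g}")
```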

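The vertex-wise encoding analysis can be sketched similarly with scikit-learn's ridge regression: train on the event words, predict the average object activation at each vertex, and run a group-level one-sample t-test on the per-subject difference maps. Again, the shapes, names, and random data are placeholder assumptions rather than the study's actual code.

```python
import numpy as np
from scipy.stats import ttest_1samp
from sklearn.linear_model import Ridge

def subject_diff_map(event_X, event_Y, object_X, alpha=1.0):
    """One subject's event-minus-predicted-object map.

    event_X:  (160, 65) experiential ratings for the event words
    event_Y:  (160, n_vertices) vertex-wise amplitudes for event trials
    object_X: (160, 65) experiential ratings for the object words
    """
    model = Ridge(alpha=alpha).fit(event_X, event_Y)    # one fit, all vertices
    pred_object = model.predict(object_X).mean(axis=0)  # avg predicted object map
    mean_event = event_Y.mean(axis=0)                   # avg observed event map
    return mean_event - pred_object

# Hypothetical shapes: 39 subjects, 160 words per category, 65 dimensions.
rng = np.random.default_rng(1)
n_subj, n_vert = 39, 5000
diffs = np.stack([
    subject_diff_map(rng.random((160, 65)),
                     rng.standard_normal((160, n_vert)),
                     rng.random((160, 65)))
    for _ in range(n_subj)
])

# Group level: one-sample t-test at each vertex on the difference maps,
# yielding a predicted event-object contrast t-map of shape (n_vertices,).
t, p = ttest_1samp(diffs, popmean=0.0, axis=0)
print(t.shape)
```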
Topic Areas: Meaning: Lexical Semantics, Multisensory or Sensorimotor Integration