Activation of Speech, Taste, and Visual Scene Experiential Content During Concept Retrieval
Poster D35 in Poster Session D with Social Hour, Friday, October 7, 5:30 - 7:15 pm EDT, Millennium Hall
Stephen Mazurchuk1, Leonardo Fernandino1, Jia-Qing Tong1, Lisa L Conant1, Jeffrey R Binder1; 1Medical College of Wisconsin
INTRODUCTION: Some authors have proposed that concept retrieval involves reactivation of the perception, action, and other systems that contributed to the formation of the corresponding concept, although this idea remains controversial. We investigated this hypothesis by mapping the cortical regions that encode information about specific experiential features of word meaning, as derived from human ratings. We chose to study “speech,” “taste,” and “scene” features because they represent highly distinct types of experience. “Speech” represents the degree to which a word refers to “someone or something that talks.” We hypothesized that such content would be represented in left superior temporal and inferior frontal regions involved in speech perception and production. “Taste” codes the degree to which a word refers to something “having a defining taste,” which we hypothesized would engage insular and orbitofrontal cortices involved in taste perception. Finally, “scene” codes the degree to which a word “brings to mind a particular setting or physical location,” which we hypothesized would engage parahippocampal and posterior medial cortices involved in scene perception.

METHODS: Forty right-handed native English speakers were scanned using simultaneous multi-slice fMRI. Stimuli consisted of 320 nouns, each presented six times over three sessions on separate days. Stimuli were presented visually in an event-related design while participants performed a familiarity judgment task. BOLD time series data were preprocessed and projected to a common surface using fMRIPrep. Word-specific activation maps were generated via a general linear model (GLM). For the representational similarity analysis (RSA) regression, the three features of interest were transformed into representational dissimilarity matrices (RDMs), and each RDM was then normalized to unit length.
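The RDM construction and unit-length normalization described above can be sketched as follows. This is an illustrative sketch only: the dissimilarity metric (absolute rating difference) and the toy ratings are assumptions, as the abstract does not specify them.

```python
import numpy as np

def feature_rdm(ratings):
    """Build a representational dissimilarity matrix (RDM) from per-word
    ratings on one experiential feature. Dissimilarity here is the
    absolute rating difference -- an illustrative choice, not necessarily
    the metric used in the study."""
    ratings = np.asarray(ratings, dtype=float)
    return np.abs(ratings[:, None] - ratings[None, :])

def vectorize_and_normalize(rdm):
    """Take the lower triangle of the RDM (excluding the diagonal) and
    scale it to unit length, as described for the predictor RDMs."""
    vec = rdm[np.tril_indices_from(rdm, k=-1)]
    return vec / np.linalg.norm(vec)

# Toy example: ratings for 4 words on one feature
speech_vec = vectorize_and_normalize(feature_rdm([0.0, 0.2, 0.9, 1.0]))
print(np.isclose(np.linalg.norm(speech_vec), 1.0))  # True: unit length
```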
The three predictors were then used in an ordinary least squares (OLS) regression to fit individual-participant neural RDMs in a surface-based searchlight analysis with 5-mm-radius patches. The resulting beta values from the 40 participants were tested for significance with a t-test against zero, with p-values family-wise error corrected using permutation testing. We used a cluster-forming threshold of p < .001 and a cluster-level significance level of α < .01.

RESULTS: Beta values for the Speech feature were significantly different from zero in several left hemisphere structures, including the inferior frontal gyrus, inferior precentral gyrus, anterior and posterior superior temporal sulcus, angular gyrus (AG), and posterior cingulate cortex (pCC). The Scene feature was significant bilaterally in the retrosplenial cortex and pCC, and in the left parahippocampal gyrus, left superior frontal gyrus, and left AG. Taste was significant in the orbitofrontal cortex and inferior frontal sulcus.

DISCUSSION: The feature maps support the hypothesis that the cortical networks representing each feature during semantic word processing include areas involved in perceiving the corresponding experiences. One exception, the insula not reaching significance for the Taste feature, becomes significant at a cluster-level threshold of α < .03. In future work we plan to analyze other experiential features and to compare RSA-based feature maps with those derived from voxel-wise encoding models.
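The per-patch OLS regression and group-level t-test described in the Methods can be sketched as below. The synthetic predictor vectors and simulated group betas are illustrative assumptions, and the permutation-based family-wise error correction is not reproduced here.

```python
import numpy as np
from scipy import stats

def rsa_regression(neural_vec, predictors):
    """OLS fit of one searchlight patch's vectorized neural RDM on the
    feature RDM predictors; returns one beta per predictor
    (intercept dropped)."""
    X = np.column_stack([np.ones(len(neural_vec))] + list(predictors))
    betas, *_ = np.linalg.lstsq(X, neural_vec, rcond=None)
    return betas[1:]

# Toy demonstration with three synthetic, noiseless predictor vectors
rng = np.random.default_rng(0)
speech, taste, scene = (rng.random(100) for _ in range(3))
neural = 0.5 * speech + 0.2 * taste + 0.1 * scene
betas = rsa_regression(neural, [speech, taste, scene])
print(np.allclose(betas, [0.5, 0.2, 0.1]))  # True

# Group level: one-sample t-test of 40 participants' betas against zero
# (simulated values; the study's permutation-based FWE correction is
# omitted from this sketch)
group_betas = rng.normal(0.05, 0.02, size=40)
t, p = stats.ttest_1samp(group_betas, 0.0)
```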
Topic Areas: Meaning: Lexical Semantics, Multisensory or Sensorimotor Integration