Multiple brain regions show modality-invariant responses to event semantics
Poster D25 in Poster Session D with Social Hour, Friday, October 7, 5:30 - 7:15 pm EDT, Millennium Hall
Anna Ivanova1,2, Carina Kauf1,2, Nancy Kanwisher1,2, Hope Kean1,2, Tanya Goldhaber1, Zachary Mineroff1, Zuzanna Balewski3, Rosemary Varley4, Evelina Fedorenko1,2; 1MIT, 2McGovern Institute for Brain Research, 3UC Berkeley, 4University College London
Whenever we perceive an event unfolding in the world, we evaluate it using not only the percept itself, but also our existing semantic knowledge — generalized, abstract information about entities, actions, and ideas associated with that event. The ability to flexibly leverage this body of knowledge to achieve specific goals is a key component of human behavior. Here, we investigate the brain basis of task-driven semantic processing of events. We isolate semantic processing from lower-level perceptual processing by comparing representations of events across input modalities: sentences vs. pictures. We report the results of three fMRI experiments. In each experiment, participants viewed blocks of events presented as either sentences or pictures. In half of the blocks, participants performed a semantic task, which required accessing information about the event content. In the other half, they performed a low-level perceptual task, which required tracking the stimulus on the screen rather than processing its contents. To test the generalizability of our findings across semantic tasks (plausibility vs. reversibility judgments), event types (animate-animate vs. animate-inanimate interactions), and picture types (photos vs. drawings), we varied these design features across the three experiments. In addition, each participant completed two well-validated ‘localizer’ tasks: one for the language-selective network (Fedorenko et al., 2011) and one for the multiple demand network, which has been implicated in task-driven, goal-directed behaviors across domains (Duncan, 2010). We additionally used the multiple demand localizer to localize the default mode network. We first conducted a whole-brain univariate analysis to identify brain regions that showed higher responses to the semantic task than to the perceptual task for both sentences and pictures, validating these results using held-out data.
To account for inter-individual variability in the functional organization of the brain, we used the group-constrained subject-specific (GcSS) localization method (Fedorenko et al., 2010). This analysis revealed a set of brain regions that show strong, modality-invariant responses to semantic tasks. These regions are located in left lateral prefrontal cortex, left temporo-occipito-parietal cortex, and the right cerebellum. We then compared the locations of these amodal semantic regions with those of three well-characterized cognitive networks (language, multiple demand, and default mode) and found little overlap between them. Our results highlight the distinction between linguistic and semantic processing and suggest that event semantics engages dedicated, non-domain-general neural machinery.
Topic Areas: Meaning: Combinatorial Semantics, Control, Selection, and Executive Processes