Slide Slam J14 Sandbox Series
An fMRI Localizer for American Sign Language Comprehension
Brennan Terhune-Cotter1, Stephen McCullough1, Karen Emmorey1; 1San Diego State University
When testing hypotheses about the functional activation of neural regions via fMRI, researchers commonly use a ‘localizer’ task, contrasting a condition that engages the cognitive process of interest with a control condition, to isolate neural activity unique to that process. Localizer tasks are typically designed on a per-experiment basis, and consequently results cannot be directly compared across studies. A solution is to create a standardized localizer task known to reliably activate neural areas associated with a particular cognitive process; however, because the language network is distributed and heterogeneous, traditional localization methods have not been successful. Fedorenko et al. (2010, 2011) designed a robust language localizer task using group-constrained, subject-specific functional regions of interest, which elicited activation patterns specific to high-level spoken language processing (see also Scott et al., 2017). We designed localizer tasks for American Sign Language (ASL) that elicit linguistic processing at the lexical, syntactic, and discourse levels, contrasted with degraded (blurred) versions of the same stimuli as a baseline condition. Subjects view a series of 17-second video clips in three conditions (lexical, syntactic, narrative) and press a button between clips to help maintain attention. The lexical condition consists of lists of nouns and verbs, which also allows us to contrast activation patterns for comprehending each word class. The syntactic condition includes the same words as the lexical condition, rearranged into complete, unrelated sentences. By matching the words across the lexical and syntactic conditions, we will be able to contrast the two conditions and isolate activity related to syntactic processing over and above lexical retrieval.
The narrative condition consists of excerpts from a story (Alice in Wonderland) with narrative prosody, character facial expressions, and use of dialogue and classifier constructions, as is typical of ASL storytelling. We predict that contrasting the three language conditions with the baseline condition will reveal activation in supramodal frontotemporal language areas. We also predict that contrasts among the lexical, syntactic, and narrative conditions will show increasing activation of bilateral parietal areas such as the supramarginal gyrus and superior parietal lobule, which have previously been associated with processing “spatial syntax” and classifier constructions, features unique to sign languages. Pilot data from one deaf signer, scanned in two sessions one month apart, demonstrated consistent and robust activation across localizer conditions in frontotemporal and parietal language areas. These localizers will enable us to a) isolate functional regions of interest (fROIs) for lexical, syntactic, and narrative levels of ASL processing, b) examine patterns of inter-subject variation, and c) assess whether fROIs are activated in domain-specific or domain-general ways.