Poster C13, Thursday, November 9, 10:00 – 11:15 am, Harborview and Loch Raven Ballrooms

The cortical organization of syntactic processing in American Sign Language: Evidence from a parametric manipulation of constituent structure in fMRI and MEG

William Matchin, Agnes Villwock, Austin Roth, Deniz Ilkbasaran, Marla Hatrak, Tristan Davenport, Eric Halgren, Rachel Mayberry; University of California San Diego

INTRODUCTION: Neuroimaging research on spoken languages has mapped basic lexical access and combinatorial processing onto posterior and anterior temporal lobe regions (Hickok & Poeppel, 2007; Pallier et al., 2011). Although American Sign Language (ASL) uses a different sensory-motor modality of communication than spoken languages, it has the same linguistic architecture, suggesting that higher-level lexical access and syntactic processing in ASL could involve the same cortical systems as spoken languages. Previous research has shown that the areas activated by ASL and spoken languages are similar for both lexical-semantic access and sentence comprehension (Neville et al., 1998; Mayberry et al., 2011; Leonard et al., 2012). However, the neural basis of combinatorial processing has not been clearly established. Previous research has not identified combinatorial effects in the anterior temporal lobe for signed sentences compared to lists (MacSweeney et al., 2006, for British Sign Language), and it is unclear whether ASL shows the same tight correlation with constituent structure in language-related brain regions as has been demonstrated in written French (Pallier et al., 2011). To identify the neural networks involved in lexical access and combinatorial syntactic processing in ASL, we performed a parallel functional magnetic resonance imaging (fMRI) and anatomically-constrained magnetoencephalography (aMEG) study with a parametric manipulation of syntactic structure.

METHODS: Subjects were 13 right-handed, native deaf signers of ASL. Stimuli were sequences of six signs presented at three levels of syntactic complexity: (i) unstructured word lists consisting of nouns, (ii) simple two-word sentences consisting of nouns and verbs, and (iii) six-word complex sentences. Stimuli were presented in blocks of three sequences of the same condition. Subjects judged whether a probe picture, presented after each block in fMRI and after each sequence in MEG, matched a sign in the preceding sequence. To control for basic visual stimulation and attention in the fMRI experiment, subjects watched a still image of the signer and detected the intermittent presentation of a fixation cross. To identify brain regions involved in lexical access, we subtracted activation in the still-image condition from activation to the unstructured word lists. To identify brain regions involved in combinatorial operations, we looked for brain areas where activity correlated with constituent size.

RESULTS: In fMRI, the contrast of unstructured word lists against the control condition elicited activity in bilateral occipital-temporal regions involved in motion processing, object recognition, and lexical access. The parametric analysis of sentence structure in fMRI revealed activity in the posterior and anterior portions of the left superior temporal sulcus, with overlap between lexical access and syntactic processing in the posterior temporal lobe. In aMEG, the contrast of six-word sentences minus unstructured word lists revealed increased activity in the temporal pole within the N400 time window (300–500 ms). These results demonstrate that the cortical organization of ASL parallels that of spoken language: the underlying neural basis of abstract lexical access and syntax does not appear to be altered by the sensorimotor channel of language comprehension.
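As a conceptual illustration of the parametric analysis described above, the Python sketch below shows one common way such a model can be set up: each block is modeled as a boxcar convolved with a hemodynamic response function, and a mean-centered modulator encoding constituent size (1, 2, or 6) is added to the design matrix, so that its fitted weight indexes voxels whose activity scales with constituent size. All timings, durations, data, and variable names are illustrative assumptions, not the authors' actual pipeline.

    # Hypothetical sketch of a parametric constituent-size analysis.
    # Everything here (timings, durations, HRF, data) is illustrative,
    # not the authors' actual pipeline.
    import numpy as np
    from scipy.stats import gamma

    TR = 2.0          # assumed repetition time in seconds (illustrative)
    N_SCANS = 200     # assumed number of volumes in a run (illustrative)
    DT = 0.1          # high-resolution sampling step for convolution (s)

    def hrf(t):
        # Simple double-gamma hemodynamic response function.
        return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

    # Illustrative block onsets (s) and constituent size per block:
    # 1 = unstructured word list, 2 = two-word sentences, 6 = complex sentence.
    onsets = np.array([10.0, 50.0, 90.0, 130.0])
    sizes = np.array([1.0, 2.0, 6.0, 2.0])

    frame_times = np.arange(N_SCANS) * TR

    def block_regressor(onsets, weights, duration=18.0):
        # Build weighted boxcars at high temporal resolution, convolve
        # with the HRF, then resample at each TR.
        n = int(round(frame_times[-1] / DT)) + 1
        high_res = np.zeros(n)
        for onset, w in zip(onsets, weights):
            i = int(round(onset / DT))
            j = int(round((onset + duration) / DT))
            high_res[i:j] += w
        kernel = hrf(np.arange(0.0, 32.0, DT))
        convolved = np.convolve(high_res, kernel)[:n]
        return np.interp(frame_times, np.arange(n) * DT, convolved)

    # Design matrix: intercept, mean block effect, and a mean-centered
    # parametric modulator encoding constituent size (the effect of interest).
    main_effect = block_regressor(onsets, np.ones_like(sizes))
    parametric = block_regressor(onsets, sizes - sizes.mean())
    X = np.column_stack([np.ones(N_SCANS), main_effect, parametric])

    # Fit ordinary least squares to one (placeholder) voxel time series;
    # beta[2] indexes how strongly activity scales with constituent size.
    voxel_ts = np.random.randn(N_SCANS)
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    print("constituent-size effect (beta):", beta[2])

In a real analysis this fit would be repeated at every voxel and the constituent-size weight tested statistically across subjects; mean-centering the modulator keeps it separable from the mean block effect.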

Topic Area: Signed Language and Gesture
