Slide Slam E1
Cross-modal effects of pseudosign articulation (overt and covert) on the extrastriate cortex: an adaptation fMRI study
Stephen McCullough¹, Karen Emmorey¹; ¹SDSU
Tian and Poeppel (2013) found that both overt and covert speech enhanced activation in auditory cortex: overt speaking and imagined speaking both increased the neural response to the same (subsequently presented) auditory syllable. In contrast, auditory imagery (imagining hearing the syllable) and actually hearing the syllable both suppressed the neural response to the heard syllable probe (a repetition priming effect). We investigated whether similar effects occur in a visual-manual language: American Sign Language (ASL).

We created grayscale videos of a right hand producing eight different pseudosign syllables (probe stimuli) and 24 scrambled videos (adaptor stimuli) with a transparent square cue in the center (gray for overt and black for covert production). Each square also contained one of eight false-font pictographs, one corresponding to each pseudosign. Prior to scanning, deaf signers learned the association between each pseudosign and its pictograph. Participants also learned to articulate the pseudosigns overtly or covertly, depending on the color of the square cue in the scrambled videos.

The fMRI study consisted of four event-related fMRI adaptation scans and two blocked-design functional localizers; the localizer scans always followed the adaptation scans. The first localizer (LOC1) identified the cortical regions involved in viewing hand and foot motor actions. The LOC1 stimuli consisted of 20s blocks, presented in randomized order, showing videos of pseudosigns, videos of foot motor actions, and scrambled videos without the square cue or pictographs. We instructed participants to attend silently (without articulating) to the LOC1 stimuli during the run. The second localizer (LOC2) identified the regions involved in producing hand motor actions. The LOC2 stimuli consisted of 20s hand or foot blocks, presented in randomized order, showing scrambled videos with pictograph cues. Participants produced either hand or foot motor actions corresponding to the pictographs (learned prior to scanning).
In the event-related fMRI adaptation scans, participants viewed a total of 256 trials of video pairs (2s each) separated by 1s. The first video of each pair (the adaptor) was randomly selected from four categories: overt articulation (AV), covert articulation (CV), visual imagery (VI), and visual presentation (V). The second video (the probe) always showed a pseudosign that was either the same as or different from the pseudosign in the adaptor. We used the conjunction of the neural activation clusters identified in LOC1 and LOC2 as the regions of interest (ROIs) for our analysis of BOLD responses acquired from the adaptation scans.

Preliminary results for cross-modal adaptation from sign production to sign perception indicate that repetition suppression (a reduced BOLD response) occurred in extrastriate cortex when a covertly articulated pseudosign (adaptor stimulus) was the same as the visually presented pseudosign (probe stimulus), compared to when a different pseudosign was articulated as the adaptor. However, no repetition suppression was observed when the adaptor was an overtly articulated pseudosign and the visual probe was the same pseudosign. This preliminary result (n = 5 deaf signers) suggests that imagined (covert) signing accesses visual representations of pseudosigns, whereas overt articulation does not, perhaps due to the nature of visual feedback in sign language.
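As a rough illustration only, the trial structure described above can be sketched as a randomized 4 (adaptor category) × 2 (same/different probe) design; the balanced counterbalancing, function names, and seed below are assumptions for the sketch, not details taken from the study.

```python
import random

# Hypothetical sketch of the adaptation-trial structure: the abstract
# specifies 256 trials, four adaptor categories, and same/different
# probes; the exact counterbalancing here is an assumption.
ADAPTOR_CATEGORIES = ["AV", "CV", "VI", "V"]  # overt, covert, imagery, visual
PROBE_CONDITIONS = ["same", "different"]      # probe pseudosign vs. adaptor
N_TRIALS = 256

def build_trial_list(seed=0):
    """Return a randomized, balanced list of (adaptor, probe) trials."""
    cells = [(a, p) for a in ADAPTOR_CATEGORIES for p in PROBE_CONDITIONS]
    trials = cells * (N_TRIALS // len(cells))  # 32 trials per cell
    rng = random.Random(seed)
    rng.shuffle(trials)
    return trials

trials = build_trial_list()
# Each trial: 2 s adaptor video, 1 s gap, 2 s probe video.
print(len(trials))  # 256
```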